Sep 13 10:25:25.954274 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat Sep 13 08:30:13 -00 2025
Sep 13 10:25:25.954297 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:25:25.954306 kernel: BIOS-provided physical RAM map:
Sep 13 10:25:25.954312 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 10:25:25.954319 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 10:25:25.954325 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 10:25:25.954333 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 13 10:25:25.954339 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 13 10:25:25.954352 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 10:25:25.954359 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 10:25:25.954365 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 10:25:25.954372 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 10:25:25.954378 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 10:25:25.954385 kernel: NX (Execute Disable) protection: active
Sep 13 10:25:25.954395 kernel: APIC: Static calls initialized
Sep 13 10:25:25.954402 kernel: SMBIOS 2.8 present.
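The firmware memory map above can be tallied mechanically. A minimal sketch: the two `usable` ranges are copied verbatim from the log, and the helper name is ours (on a live system the same lines come from `dmesg | grep BIOS-e820`).

```python
import re

# The "usable" e820 lines from the boot log above.
E820 = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""

def usable_bytes(log: str) -> int:
    """Sum the sizes of all ranges the firmware marked 'usable'.
    Ranges are inclusive, hence the +1."""
    total = 0
    for start, end in re.findall(
            r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", log):
        total += int(end, 16) - int(start, 16) + 1
    return total

print(usable_bytes(E820) // (1024 * 1024), "MiB usable")  # → 2511 MiB usable
```

The result (2511 MiB) lines up with the later `Memory: 2428916K/2571752K available` line, minus the handful of pages the kernel trims (e.g. page 0) before reporting.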
Sep 13 10:25:25.954411 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 13 10:25:25.954418 kernel: DMI: Memory slots populated: 1/1
Sep 13 10:25:25.954425 kernel: Hypervisor detected: KVM
Sep 13 10:25:25.954441 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 10:25:25.954448 kernel: kvm-clock: using sched offset of 5390290193 cycles
Sep 13 10:25:25.954455 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 10:25:25.954463 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 10:25:25.954471 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 10:25:25.954481 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 10:25:25.954488 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 13 10:25:25.954495 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 10:25:25.954503 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 10:25:25.954520 kernel: Using GB pages for direct mapping
Sep 13 10:25:25.954528 kernel: ACPI: Early table checksum verification disabled
Sep 13 10:25:25.954535 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 13 10:25:25.954542 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:25:25.954552 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:25:25.954559 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:25:25.954566 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 13 10:25:25.954573 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:25:25.954580 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:25:25.954588 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:25:25.954595 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:25:25.954602 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 13 10:25:25.954614 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 13 10:25:25.954622 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 13 10:25:25.954629 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 13 10:25:25.954636 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 13 10:25:25.954644 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 13 10:25:25.954651 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 13 10:25:25.954660 kernel: No NUMA configuration found
Sep 13 10:25:25.954667 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 13 10:25:25.954675 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 13 10:25:25.954682 kernel: Zone ranges:
Sep 13 10:25:25.954690 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 10:25:25.954697 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 13 10:25:25.954704 kernel: Normal empty
Sep 13 10:25:25.954711 kernel: Device empty
Sep 13 10:25:25.954719 kernel: Movable zone start for each node
Sep 13 10:25:25.954726 kernel: Early memory node ranges
Sep 13 10:25:25.954736 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 10:25:25.954743 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 13 10:25:25.954750 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 13 10:25:25.954758 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 10:25:25.954765 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 10:25:25.954772 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 10:25:25.954780 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 10:25:25.954790 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 10:25:25.954798 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 10:25:25.954807 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 10:25:25.954816 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 10:25:25.954827 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 10:25:25.954836 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 10:25:25.954844 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 10:25:25.954852 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 10:25:25.954859 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 10:25:25.954867 kernel: TSC deadline timer available
Sep 13 10:25:25.954874 kernel: CPU topo: Max. logical packages: 1
Sep 13 10:25:25.954883 kernel: CPU topo: Max. logical dies: 1
Sep 13 10:25:25.954891 kernel: CPU topo: Max. dies per package: 1
Sep 13 10:25:25.954898 kernel: CPU topo: Max. threads per core: 1
Sep 13 10:25:25.954905 kernel: CPU topo: Num. cores per package: 4
Sep 13 10:25:25.954913 kernel: CPU topo: Num. threads per package: 4
Sep 13 10:25:25.954920 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 13 10:25:25.954927 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 10:25:25.954935 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 10:25:25.954942 kernel: kvm-guest: setup PV sched yield
Sep 13 10:25:25.954964 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 10:25:25.954974 kernel: Booting paravirtualized kernel on KVM
Sep 13 10:25:25.954981 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 10:25:25.954989 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 10:25:25.954997 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 13 10:25:25.955004 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 13 10:25:25.955011 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 10:25:25.955018 kernel: kvm-guest: PV spinlocks enabled
Sep 13 10:25:25.955026 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 10:25:25.955034 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:25:25.955047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
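The "Kernel command line" entry above repeats `rootflags=rw mount.usrflags=ro` because the bootloader prepends them to the command line already embedded in the boot entry. A minimal sketch of splitting such a line into parameters (the parsing rule here is an assumption, a simple last-occurrence-wins map; the kernel handles some parameters differently):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into key=value pairs.
    Bare words map to ''; later duplicates overwrite earlier ones."""
    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params[key] = value
    return params

# A shortened copy of the command line from the log above
# (on a live system: open("/proc/cmdline").read()).
cl = parse_cmdline(
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "rootflags=rw root=LABEL=ROOT console=ttyS0,115200"
)
print(cl["root"], cl["console"])  # → LABEL=ROOT ttyS0,115200
```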
Sep 13 10:25:25.955054 kernel: random: crng init done
Sep 13 10:25:25.955062 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 10:25:25.955069 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 10:25:25.955077 kernel: Fallback order for Node 0: 0
Sep 13 10:25:25.955084 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 13 10:25:25.955091 kernel: Policy zone: DMA32
Sep 13 10:25:25.955099 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 10:25:25.955108 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 10:25:25.955116 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 13 10:25:25.955123 kernel: ftrace: allocated 157 pages with 5 groups
Sep 13 10:25:25.955130 kernel: Dynamic Preempt: voluntary
Sep 13 10:25:25.955138 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 10:25:25.955149 kernel: rcu: RCU event tracing is enabled.
Sep 13 10:25:25.955157 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 10:25:25.955165 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 10:25:25.955174 kernel: Rude variant of Tasks RCU enabled.
Sep 13 10:25:25.955184 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 10:25:25.955191 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 10:25:25.955199 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 10:25:25.955206 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:25:25.955214 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:25:25.955222 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:25:25.955229 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 10:25:25.955237 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 10:25:25.955253 kernel: Console: colour VGA+ 80x25
Sep 13 10:25:25.955260 kernel: printk: legacy console [ttyS0] enabled
Sep 13 10:25:25.955268 kernel: ACPI: Core revision 20240827
Sep 13 10:25:25.955276 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 10:25:25.955286 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 10:25:25.955293 kernel: x2apic enabled
Sep 13 10:25:25.955301 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 10:25:25.955311 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 10:25:25.955319 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 10:25:25.955329 kernel: kvm-guest: setup PV IPIs
Sep 13 10:25:25.955337 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 10:25:25.955345 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:25:25.955352 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 10:25:25.955360 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 10:25:25.955368 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 10:25:25.955375 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 10:25:25.955383 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 10:25:25.955391 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 10:25:25.955401 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 10:25:25.955409 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 10:25:25.955416 kernel: active return thunk: retbleed_return_thunk
Sep 13 10:25:25.955424 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 10:25:25.955440 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 10:25:25.955448 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 10:25:25.955455 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 10:25:25.955464 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 10:25:25.955474 kernel: active return thunk: srso_return_thunk
Sep 13 10:25:25.955482 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 10:25:25.955490 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 10:25:25.955498 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 10:25:25.955506 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 10:25:25.955513 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 10:25:25.955521 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
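The mitigation lines above can be condensed into a per-vulnerability status table. A minimal sketch (the sample lines are copied from the log; the helper is ours, and on a live system the same summary is exposed under `/sys/devices/system/cpu/vulnerabilities/`):

```python
import re

# A subset of the mitigation lines from the log above.
LOG = """\
Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Spectre V2 : Mitigation: Retpolines
RETBleed: Mitigation: untrained return thunk
Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
"""

def mitigation_status(log: str) -> dict:
    """Map vulnerability name -> 'Mitigation' or 'Vulnerable'.
    Tolerates both 'Name : Status' and 'Name: Status' spellings."""
    pattern = re.compile(r"^(.+?)\s*:\s*(Mitigation|Vulnerable)\b")
    return {m.group(1): m.group(2)
            for m in (pattern.match(line) for line in log.splitlines()) if m}

print(mitigation_status(LOG))
```

Note that this guest reports SRSO as `Vulnerable: Safe RET, no microcode`, which matches the earlier `IBPB-extending microcode not applied!` warning.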
Sep 13 10:25:25.955529 kernel: Freeing SMP alternatives memory: 32K
Sep 13 10:25:25.955536 kernel: pid_max: default: 32768 minimum: 301
Sep 13 10:25:25.955546 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 10:25:25.955554 kernel: landlock: Up and running.
Sep 13 10:25:25.955561 kernel: SELinux: Initializing.
Sep 13 10:25:25.955571 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:25:25.955579 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:25:25.955587 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 10:25:25.955595 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 10:25:25.955602 kernel: ... version: 0
Sep 13 10:25:25.955610 kernel: ... bit width: 48
Sep 13 10:25:25.955620 kernel: ... generic registers: 6
Sep 13 10:25:25.955627 kernel: ... value mask: 0000ffffffffffff
Sep 13 10:25:25.955635 kernel: ... max period: 00007fffffffffff
Sep 13 10:25:25.955642 kernel: ... fixed-purpose events: 0
Sep 13 10:25:25.955650 kernel: ... event mask: 000000000000003f
Sep 13 10:25:25.955658 kernel: signal: max sigframe size: 1776
Sep 13 10:25:25.955665 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 10:25:25.955673 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 10:25:25.955681 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 13 10:25:25.955690 kernel: smp: Bringing up secondary CPUs ...
Sep 13 10:25:25.955698 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 10:25:25.955706 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 10:25:25.955713 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 10:25:25.955721 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 10:25:25.955729 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54088K init, 2876K bss, 136904K reserved, 0K cma-reserved)
Sep 13 10:25:25.955737 kernel: devtmpfs: initialized
Sep 13 10:25:25.955745 kernel: x86/mm: Memory block size: 128MB
Sep 13 10:25:25.955753 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 10:25:25.955763 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 10:25:25.955770 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 10:25:25.955778 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 10:25:25.955786 kernel: audit: initializing netlink subsys (disabled)
Sep 13 10:25:25.955794 kernel: audit: type=2000 audit(1757759123.196:1): state=initialized audit_enabled=0 res=1
Sep 13 10:25:25.955801 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 10:25:25.955809 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 10:25:25.955816 kernel: cpuidle: using governor menu
Sep 13 10:25:25.955824 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 10:25:25.955834 kernel: dca service started, version 1.12.1
Sep 13 10:25:25.955842 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 13 10:25:25.955849 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 10:25:25.955857 kernel: PCI: Using configuration type 1 for base access
Sep 13 10:25:25.955865 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
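The two BogoMIPS figures in the log are internally consistent. Since TSC calibration let the kernel skip the delay loop, BogoMIPS is derived from `lpj` (loops per jiffy) as `lpj * HZ / 500000`; the SMP total is just that value times the four CPUs. A quick cross-check (`HZ=1000` is our assumption, consistent with the numbers printed):

```python
lpj = 2794748                 # "(lpj=2794748)" from the calibration line
hz = 1000                     # assumption: CONFIG_HZ=1000

per_cpu = lpj * hz / 500000   # 5589.496, printed truncated as "5589.49"
total = 4 * per_cpu           # 22357.984, printed truncated as "22357.98"
print(per_cpu, total)
```

Note the kernel truncates rather than rounds when printing, which is why the log shows 5589.49 instead of 5589.50.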
Sep 13 10:25:25.955872 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 10:25:25.955880 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 10:25:25.955888 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 10:25:25.955896 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 10:25:25.955905 kernel: ACPI: Added _OSI(Module Device)
Sep 13 10:25:25.955913 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 10:25:25.955920 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 10:25:25.955928 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 10:25:25.955936 kernel: ACPI: Interpreter enabled
Sep 13 10:25:25.955956 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 10:25:25.955975 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 10:25:25.955983 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 10:25:25.955991 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 10:25:25.956001 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 10:25:25.956009 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 10:25:25.956213 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 10:25:25.956341 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 10:25:25.956471 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 10:25:25.956483 kernel: PCI host bridge to bus 0000:00
Sep 13 10:25:25.956615 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 10:25:25.956816 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 10:25:25.956933 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 10:25:25.957063 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 10:25:25.957174 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 10:25:25.957283 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 10:25:25.957392 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 10:25:25.957561 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 13 10:25:25.957702 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 13 10:25:25.957867 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 13 10:25:25.958006 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 13 10:25:25.958169 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 13 10:25:25.958321 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 10:25:25.958481 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 13 10:25:25.958611 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 13 10:25:25.958732 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 13 10:25:25.958858 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 13 10:25:25.959030 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 13 10:25:25.959155 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 13 10:25:25.959275 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 13 10:25:25.959399 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 13 10:25:25.959606 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 13 10:25:25.959744 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 13 10:25:25.959866 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 13 10:25:25.960007 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 13 10:25:25.960130 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 13 10:25:25.960266 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 13 10:25:25.960389 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 10:25:25.960570 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 13 10:25:25.960705 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 13 10:25:25.960826 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 13 10:25:25.960982 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 13 10:25:25.961107 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 13 10:25:25.961118 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 10:25:25.961130 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 10:25:25.961138 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 10:25:25.961146 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 10:25:25.961153 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 10:25:25.961161 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 10:25:25.961168 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 10:25:25.961176 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 10:25:25.961184 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 10:25:25.961191 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 10:25:25.961201 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 10:25:25.961208 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 10:25:25.961216 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 10:25:25.961224 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 10:25:25.961231 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 10:25:25.961239 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 10:25:25.961246 kernel: iommu: Default domain type: Translated
Sep 13 10:25:25.961254 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 10:25:25.961262 kernel: PCI: Using ACPI for IRQ routing
Sep 13 10:25:25.961269 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 10:25:25.961279 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 10:25:25.961286 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 13 10:25:25.961406 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 10:25:25.961537 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 10:25:25.961656 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 10:25:25.961666 kernel: vgaarb: loaded
Sep 13 10:25:25.961674 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 10:25:25.961682 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 10:25:25.961693 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 10:25:25.961701 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 10:25:25.961709 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 10:25:25.961716 kernel: pnp: PnP ACPI init
Sep 13 10:25:25.961863 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 10:25:25.961875 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 10:25:25.961883 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 10:25:25.961891 kernel: NET: Registered PF_INET protocol family
Sep 13 10:25:25.961902 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 10:25:25.961909 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 10:25:25.961917 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 10:25:25.961925 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 10:25:25.961933 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 10:25:25.961941 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 10:25:25.961962 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:25:25.961970 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:25:25.961978 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 10:25:25.961988 kernel: NET: Registered PF_XDP protocol family
Sep 13 10:25:25.962103 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 10:25:25.962214 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 10:25:25.962323 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 10:25:25.962445 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 10:25:25.962557 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 10:25:25.962668 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 10:25:25.962678 kernel: PCI: CLS 0 bytes, default 64
Sep 13 10:25:25.962689 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:25:25.962697 kernel: Initialise system trusted keyrings
Sep 13 10:25:25.962704 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 10:25:25.962712 kernel: Key type asymmetric registered
Sep 13 10:25:25.962720 kernel: Asymmetric key parser 'x509' registered
Sep 13 10:25:25.962727 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 10:25:25.962735 kernel: io scheduler mq-deadline registered
Sep 13 10:25:25.962743 kernel: io scheduler kyber registered
Sep 13 10:25:25.962750 kernel: io scheduler bfq registered
Sep 13 10:25:25.962760 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 10:25:25.962768 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 10:25:25.962776 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 10:25:25.962784 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 10:25:25.962792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 10:25:25.962799 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 10:25:25.962807 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 10:25:25.962815 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 10:25:25.962823 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 10:25:25.962992 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 10:25:25.963005 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 10:25:25.963120 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 10:25:25.963234 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T10:25:25 UTC (1757759125)
Sep 13 10:25:25.963347 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 10:25:25.963357 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 10:25:25.963365 kernel: NET: Registered PF_INET6 protocol family
Sep 13 10:25:25.963373 kernel: Segment Routing with IPv6
Sep 13 10:25:25.963384 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 10:25:25.963392 kernel: NET: Registered PF_PACKET protocol family
Sep 13 10:25:25.963400 kernel: Key type dns_resolver registered
Sep 13 10:25:25.963407 kernel: IPI shorthand broadcast: enabled
Sep 13 10:25:25.963415 kernel: sched_clock: Marking stable (3126003162, 109869158)->(3270169112, -34296792)
Sep 13 10:25:25.963423 kernel: registered taskstats version 1
Sep 13 10:25:25.963438 kernel: Loading compiled-in X.509 certificates
Sep 13 10:25:25.963446 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: cbb54677ad1c578839cdade5ab8500bbdb72e350'
Sep 13 10:25:25.963454 kernel: Demotion targets for Node 0: null
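The `rtc_cmos 00:04: setting system clock` entry above pairs a human-readable UTC timestamp with its Unix epoch value in parentheses; the two encode the same instant, which is easy to confirm:

```python
from datetime import datetime, timezone

epoch = 1757759125                  # "(1757759125)" from the rtc_cmos line
stamp = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(stamp.isoformat())            # → 2025-09-13T10:25:25+00:00
```

The earlier audit entry's `audit(1757759123.196:1)` timestamp sits about two seconds before this, consistent with audit initializing slightly earlier in the same boot.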
Sep 13 10:25:25.963464 kernel: Key type .fscrypt registered
Sep 13 10:25:25.963472 kernel: Key type fscrypt-provisioning registered
Sep 13 10:25:25.963480 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 10:25:25.963488 kernel: ima: Allocated hash algorithm: sha1
Sep 13 10:25:25.963495 kernel: ima: No architecture policies found
Sep 13 10:25:25.963503 kernel: clk: Disabling unused clocks
Sep 13 10:25:25.963511 kernel: Warning: unable to open an initial console.
Sep 13 10:25:25.963519 kernel: Freeing unused kernel image (initmem) memory: 54088K
Sep 13 10:25:25.963527 kernel: Write protecting the kernel read-only data: 24576k
Sep 13 10:25:25.963537 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 13 10:25:25.963544 kernel: Run /init as init process
Sep 13 10:25:25.963552 kernel: with arguments:
Sep 13 10:25:25.963560 kernel: /init
Sep 13 10:25:25.963567 kernel: with environment:
Sep 13 10:25:25.963575 kernel: HOME=/
Sep 13 10:25:25.963582 kernel: TERM=linux
Sep 13 10:25:25.963590 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 10:25:25.963599 systemd[1]: Successfully made /usr/ read-only.
Sep 13 10:25:25.963620 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 10:25:25.963632 systemd[1]: Detected virtualization kvm.
Sep 13 10:25:25.963640 systemd[1]: Detected architecture x86-64.
Sep 13 10:25:25.963648 systemd[1]: Running in initrd.
Sep 13 10:25:25.963656 systemd[1]: No hostname configured, using default hostname.
Sep 13 10:25:25.963667 systemd[1]: Hostname set to .
Sep 13 10:25:25.963675 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 10:25:25.963683 systemd[1]: Queued start job for default target initrd.target.
Sep 13 10:25:25.963692 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:25:25.963701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:25:25.963710 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 10:25:25.963719 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 10:25:25.963727 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 10:25:25.963739 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 10:25:25.963748 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 10:25:25.963757 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 10:25:25.963765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:25:25.963774 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:25:25.963783 systemd[1]: Reached target paths.target - Path Units.
Sep 13 10:25:25.963791 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 10:25:25.963801 systemd[1]: Reached target swap.target - Swaps.
Sep 13 10:25:25.963810 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 10:25:25.963820 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 10:25:25.963829 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 10:25:25.963837 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 10:25:25.963846 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 13 10:25:25.963854 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:25:25.963863 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:25:25.963873 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:25:25.963881 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 10:25:25.963890 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 10:25:25.963899 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 10:25:25.963909 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 10:25:25.963918 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 13 10:25:25.963929 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 10:25:25.963937 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 10:25:25.963963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 10:25:25.963972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:25:25.963980 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 10:25:25.963992 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:25:25.964001 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 10:25:25.964035 systemd-journald[220]: Collecting audit messages is disabled.
Sep 13 10:25:25.964057 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 10:25:25.964081 systemd-journald[220]: Journal started
Sep 13 10:25:25.964100 systemd-journald[220]: Runtime Journal (/run/log/journal/908668a050e7470d82bdb4e335fcee25) is 6M, max 48.6M, 42.5M free.
Sep 13 10:25:25.958340 systemd-modules-load[223]: Inserted module 'overlay'
Sep 13 10:25:25.967964 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 10:25:25.971380 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 10:25:26.055058 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 10:25:26.055520 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 10:25:26.090258 kernel: Bridge firewalling registered
Sep 13 10:25:26.057794 systemd-tmpfiles[237]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 13 10:25:26.058687 systemd-modules-load[223]: Inserted module 'br_netfilter'
Sep 13 10:25:26.086850 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:25:26.090501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:25:26.091982 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:25:26.097178 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 10:25:26.100161 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 10:25:26.109313 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 10:25:26.120933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:25:26.123666 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 10:25:26.124930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:25:26.128218 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 10:25:26.130893 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 10:25:26.164476 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:25:26.184198 systemd-resolved[258]: Positive Trust Anchors:
Sep 13 10:25:26.184213 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 10:25:26.184243 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 10:25:26.186764 systemd-resolved[258]: Defaulting to hostname 'linux'.
Sep 13 10:25:26.188118 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 10:25:26.193299 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:25:26.291001 kernel: SCSI subsystem initialized
Sep 13 10:25:26.301988 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 10:25:26.346989 kernel: iscsi: registered transport (tcp)
Sep 13 10:25:26.369984 kernel: iscsi: registered transport (qla4xxx)
Sep 13 10:25:26.370056 kernel: QLogic iSCSI HBA Driver
Sep 13 10:25:26.392489 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 10:25:26.427865 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:25:26.429392 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 10:25:26.535523 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 10:25:26.537755 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 10:25:26.595994 kernel: raid6: avx2x4 gen() 29822 MB/s
Sep 13 10:25:26.612981 kernel: raid6: avx2x2 gen() 30086 MB/s
Sep 13 10:25:26.630020 kernel: raid6: avx2x1 gen() 24137 MB/s
Sep 13 10:25:26.630067 kernel: raid6: using algorithm avx2x2 gen() 30086 MB/s
Sep 13 10:25:26.648079 kernel: raid6: .... xor() 18292 MB/s, rmw enabled
Sep 13 10:25:26.648122 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 10:25:26.668989 kernel: xor: automatically using best checksumming function avx
Sep 13 10:25:26.889999 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 10:25:26.901142 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 10:25:26.904160 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:25:26.944207 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 13 10:25:26.950903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:25:26.954685 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 10:25:26.985312 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Sep 13 10:25:27.021807 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 10:25:27.025704 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 10:25:27.109235 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:25:27.116241 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 10:25:27.172990 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 13 10:25:27.175990 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 10:25:27.178543 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 10:25:27.178569 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 10:25:27.178580 kernel: GPT:9289727 != 19775487
Sep 13 10:25:27.178593 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 10:25:27.181565 kernel: GPT:9289727 != 19775487
Sep 13 10:25:27.181587 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 10:25:27.181598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:25:27.203982 kernel: libata version 3.00 loaded.
Sep 13 10:25:27.206985 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 13 10:25:27.208474 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 10:25:27.208602 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:25:27.209986 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:25:27.215539 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:25:27.218783 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:25:27.233889 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 10:25:27.234120 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 10:25:27.234147 kernel: AES CTR mode by8 optimization enabled
Sep 13 10:25:27.237635 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 13 10:25:27.237803 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 13 10:25:27.237995 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 10:25:27.239981 kernel: scsi host0: ahci
Sep 13 10:25:27.249973 kernel: scsi host1: ahci
Sep 13 10:25:27.253008 kernel: scsi host2: ahci
Sep 13 10:25:27.263984 kernel: scsi host3: ahci
Sep 13 10:25:27.264567 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 10:25:27.321883 kernel: scsi host4: ahci
Sep 13 10:25:27.322071 kernel: scsi host5: ahci
Sep 13 10:25:27.322221 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 13 10:25:27.322244 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 13 10:25:27.322255 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 13 10:25:27.322265 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 13 10:25:27.322275 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 13 10:25:27.322286 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 13 10:25:27.324324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:25:27.351714 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 10:25:27.354402 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 10:25:27.363045 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 10:25:27.371656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 10:25:27.372842 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 10:25:27.578159 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 10:25:27.578246 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 10:25:27.578257 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 10:25:27.579988 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 13 10:25:27.580074 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 10:25:27.580975 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 10:25:27.581992 kernel: ata3.00: LPM support broken, forcing max_power
Sep 13 10:25:27.582012 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 10:25:27.582615 kernel: ata3.00: applying bridge limits
Sep 13 10:25:27.584058 kernel: ata3.00: LPM support broken, forcing max_power
Sep 13 10:25:27.584074 kernel: ata3.00: configured for UDMA/100
Sep 13 10:25:27.585994 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 10:25:27.635973 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 10:25:27.636194 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 10:25:27.662113 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 10:25:27.665120 disk-uuid[636]: Primary Header is updated.
Sep 13 10:25:27.665120 disk-uuid[636]: Secondary Entries is updated.
Sep 13 10:25:27.665120 disk-uuid[636]: Secondary Header is updated.
Sep 13 10:25:27.669980 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:25:27.673980 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:25:28.080422 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 10:25:28.082044 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 10:25:28.115711 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:25:28.118207 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 10:25:28.119635 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 10:25:28.153142 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 10:25:28.676992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:25:28.677924 disk-uuid[637]: The operation has completed successfully.
Sep 13 10:25:28.714572 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 10:25:28.714734 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 10:25:28.753560 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 10:25:28.781863 sh[665]: Success
Sep 13 10:25:28.800816 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 10:25:28.800896 kernel: device-mapper: uevent: version 1.0.3
Sep 13 10:25:28.800914 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 13 10:25:28.810981 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 13 10:25:28.842688 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 10:25:28.847209 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 10:25:28.863500 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 10:25:28.869018 kernel: BTRFS: device fsid fbf3e737-db97-4ff7-a1f5-c4d4b7390663 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (677)
Sep 13 10:25:28.869070 kernel: BTRFS info (device dm-0): first mount of filesystem fbf3e737-db97-4ff7-a1f5-c4d4b7390663
Sep 13 10:25:28.871175 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:25:28.876456 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 10:25:28.876506 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 13 10:25:28.877917 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 10:25:28.879139 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 10:25:28.881071 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 10:25:28.882158 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 10:25:28.884076 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 10:25:28.914821 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709)
Sep 13 10:25:28.914883 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:25:28.914895 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:25:28.918978 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 10:25:28.919051 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 10:25:28.925027 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:25:28.926734 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 10:25:28.930065 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 10:25:29.023026 ignition[754]: Ignition 2.22.0
Sep 13 10:25:29.023041 ignition[754]: Stage: fetch-offline
Sep 13 10:25:29.023081 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Sep 13 10:25:29.023092 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:25:29.023211 ignition[754]: parsed url from cmdline: ""
Sep 13 10:25:29.023215 ignition[754]: no config URL provided
Sep 13 10:25:29.023222 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 10:25:29.023230 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Sep 13 10:25:29.023263 ignition[754]: op(1): [started] loading QEMU firmware config module
Sep 13 10:25:29.023268 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 10:25:29.031905 ignition[754]: op(1): [finished] loading QEMU firmware config module
Sep 13 10:25:29.035161 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 10:25:29.039081 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 10:25:29.072437 ignition[754]: parsing config with SHA512: 35142d716f415771d6e647e66ae6a35ba900686c34a7003eda628b21bd993dc01c35430ff7331ebea7b9ab0eee2896fb3ad124228a07b68d8f5de70d135330f9
Sep 13 10:25:29.076795 unknown[754]: fetched base config from "system"
Sep 13 10:25:29.076814 unknown[754]: fetched user config from "qemu"
Sep 13 10:25:29.079025 ignition[754]: fetch-offline: fetch-offline passed
Sep 13 10:25:29.079990 ignition[754]: Ignition finished successfully
Sep 13 10:25:29.084399 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 10:25:29.087908 systemd-networkd[856]: lo: Link UP
Sep 13 10:25:29.087920 systemd-networkd[856]: lo: Gained carrier
Sep 13 10:25:29.089513 systemd-networkd[856]: Enumeration completed
Sep 13 10:25:29.089882 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:25:29.089888 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 10:25:29.090657 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 10:25:29.091382 systemd[1]: Reached target network.target - Network.
Sep 13 10:25:29.091467 systemd-networkd[856]: eth0: Link UP
Sep 13 10:25:29.091746 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 10:25:29.092039 systemd-networkd[856]: eth0: Gained carrier
Sep 13 10:25:29.092049 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:25:29.092728 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 10:25:29.111010 systemd-networkd[856]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 10:25:29.129984 ignition[860]: Ignition 2.22.0
Sep 13 10:25:29.129997 ignition[860]: Stage: kargs
Sep 13 10:25:29.130147 ignition[860]: no configs at "/usr/lib/ignition/base.d"
Sep 13 10:25:29.130159 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:25:29.130978 ignition[860]: kargs: kargs passed
Sep 13 10:25:29.131028 ignition[860]: Ignition finished successfully
Sep 13 10:25:29.137521 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 10:25:29.140851 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 10:25:29.176748 ignition[870]: Ignition 2.22.0
Sep 13 10:25:29.176761 ignition[870]: Stage: disks
Sep 13 10:25:29.176914 ignition[870]: no configs at "/usr/lib/ignition/base.d"
Sep 13 10:25:29.176926 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:25:29.178448 ignition[870]: disks: disks passed
Sep 13 10:25:29.178499 ignition[870]: Ignition finished successfully
Sep 13 10:25:29.183236 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 10:25:29.184551 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 10:25:29.185716 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 10:25:29.186187 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 10:25:29.186517 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 10:25:29.186814 systemd[1]: Reached target basic.target - Basic System.
Sep 13 10:25:29.188548 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 10:25:29.214538 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 13 10:25:29.222221 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 10:25:29.224962 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 10:25:29.414007 kernel: EXT4-fs (vda9): mounted filesystem 1fad58d4-1271-484a-aa8e-8f7f5dca764c r/w with ordered data mode. Quota mode: none.
Sep 13 10:25:29.414933 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 10:25:29.415780 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 10:25:29.418920 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 10:25:29.423051 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 10:25:29.423560 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 10:25:29.423601 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 10:25:29.423623 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 10:25:29.434525 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 10:25:29.436685 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 10:25:29.440484 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888)
Sep 13 10:25:29.442465 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:25:29.442488 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:25:29.445668 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 10:25:29.445734 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 10:25:29.447843 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 10:25:29.481358 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 10:25:29.487005 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory
Sep 13 10:25:29.492633 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 10:25:29.498816 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 10:25:29.601549 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 10:25:29.604498 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 10:25:29.606312 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 10:25:29.630019 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:25:29.646129 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 10:25:29.681108 ignition[1002]: INFO : Ignition 2.22.0
Sep 13 10:25:29.681108 ignition[1002]: INFO : Stage: mount
Sep 13 10:25:29.683104 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:25:29.683104 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:25:29.685366 ignition[1002]: INFO : mount: mount passed
Sep 13 10:25:29.686275 ignition[1002]: INFO : Ignition finished successfully
Sep 13 10:25:29.688554 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 10:25:29.690016 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 10:25:29.868821 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 10:25:29.870349 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 10:25:29.996979 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014)
Sep 13 10:25:29.998972 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:25:29.999036 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:25:30.002062 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 10:25:30.002088 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 10:25:30.003976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 10:25:30.058965 ignition[1031]: INFO : Ignition 2.22.0
Sep 13 10:25:30.058965 ignition[1031]: INFO : Stage: files
Sep 13 10:25:30.060727 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:25:30.060727 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:25:30.063285 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 10:25:30.065155 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 10:25:30.065155 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 10:25:30.069487 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 10:25:30.070979 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 10:25:30.070979 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 10:25:30.070257 unknown[1031]: wrote ssh authorized keys file for user: core
Sep 13 10:25:30.074926 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 10:25:30.077170 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 10:25:30.134625 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 10:25:30.459101 systemd-networkd[856]: eth0: Gained IPv6LL
Sep 13 10:25:30.530095 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 10:25:30.532094 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 10:25:30.533879 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 10:25:30.535487 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 10:25:30.537193 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 10:25:30.538843 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 10:25:30.540656 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 10:25:30.542287 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 10:25:30.544021 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 10:25:30.549031 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 10:25:30.551045 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 10:25:30.551045 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 10:25:30.555568 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 10:25:30.555568 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 10:25:30.555568 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 10:25:30.960662 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 10:25:31.453192 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 10:25:31.453192 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 10:25:31.457337 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 10:25:31.463857 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 10:25:31.463857 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 10:25:31.463857 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 13 10:25:31.468145 ignition[1031]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 10:25:31.469982 ignition[1031]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 10:25:31.471811 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 13 10:25:31.471811 ignition[1031]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 10:25:31.493721 ignition[1031]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 10:25:31.499190 ignition[1031]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 10:25:31.500759 ignition[1031]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 10:25:31.500759 ignition[1031]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 10:25:31.500759 ignition[1031]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 10:25:31.500759 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 10:25:31.500759 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 10:25:31.500759 ignition[1031]: INFO : files: files passed
Sep 13 10:25:31.500759 ignition[1031]: INFO : Ignition finished successfully
Sep 13 10:25:31.509496 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 10:25:31.511280 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 10:25:31.514325 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 10:25:31.530446 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 10:25:31.530564 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 10:25:31.535477 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 10:25:31.539488 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:25:31.539488 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:25:31.542574 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:25:31.546345 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 10:25:31.549118 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 10:25:31.551381 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 10:25:31.634266 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 10:25:31.634403 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 10:25:31.635546 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 10:25:31.638525 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 10:25:31.638916 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 10:25:31.643873 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 10:25:31.679574 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 10:25:31.681294 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 10:25:31.703171 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:25:31.703503 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:25:31.705627 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 10:25:31.707740 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 10:25:31.707874 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 10:25:31.711259 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 10:25:31.711797 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 10:25:31.712290 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 10:25:31.712607 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 10:25:31.717917 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 10:25:31.720002 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 10:25:31.722017 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 10:25:31.723828 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 10:25:31.725842 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 10:25:31.726481 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 10:25:31.726785 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 10:25:31.727242 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 10:25:31.727356 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 10:25:31.733235 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:25:31.735588 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:25:31.735853 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 10:25:31.735933 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:25:31.736185 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 10:25:31.736301 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 10:25:31.743181 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 10:25:31.743300 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 10:25:31.743785 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 10:25:31.744189 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 10:25:31.744275 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:25:31.744556 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 10:25:31.744864 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 10:25:31.751475 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 10:25:31.751563 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 10:25:31.753299 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 10:25:31.753386 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 10:25:31.755528 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 10:25:31.755640 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 10:25:31.758752 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 10:25:31.758854 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 10:25:31.760439 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 10:25:31.762904 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 10:25:31.764418 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 10:25:31.764575 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:25:31.765097 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 10:25:31.765205 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 10:25:31.769320 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 10:25:31.779061 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 10:25:31.800976 ignition[1086]: INFO : Ignition 2.22.0
Sep 13 10:25:31.800976 ignition[1086]: INFO : Stage: umount
Sep 13 10:25:31.802734 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:25:31.802734 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:25:31.802734 ignition[1086]: INFO : umount: umount passed
Sep 13 10:25:31.802734 ignition[1086]: INFO : Ignition finished successfully
Sep 13 10:25:31.803489 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 10:25:31.806231 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 10:25:31.806362 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 10:25:31.807010 systemd[1]: Stopped target network.target - Network.
Sep 13 10:25:31.808542 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 10:25:31.808595 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 10:25:31.810377 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 10:25:31.810424 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 10:25:31.812365 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 10:25:31.812417 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 10:25:31.812704 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 10:25:31.812746 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 10:25:31.813291 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 10:25:31.817634 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 10:25:31.827541 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 10:25:31.827696 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 10:25:31.831525 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 13 10:25:31.831797 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 10:25:31.831914 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 10:25:31.834701 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 13 10:25:31.835305 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 13 10:25:31.836285 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 10:25:31.836346 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:25:31.839404 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 10:25:31.840354 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 10:25:31.840411 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 10:25:31.840733 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 10:25:31.840777 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:25:31.846027 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 10:25:31.846082 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:25:31.847010 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 10:25:31.847073 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:25:31.850708 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:25:31.852422 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 10:25:31.852501 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:25:31.866253 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 10:25:31.866392 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 10:25:31.874713 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 10:25:31.874893 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:25:31.875597 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 10:25:31.875642 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:25:31.878393 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 10:25:31.878431 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:25:31.878684 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 10:25:31.878728 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 10:25:31.879519 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 10:25:31.879564 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 10:25:31.880154 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 10:25:31.880203 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 10:25:31.881563 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 10:25:31.889914 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 13 10:25:31.889986 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:25:31.893716 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 10:25:31.893783 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:25:31.896854 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 10:25:31.896917 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 10:25:31.900469 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 10:25:31.900533 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:25:31.904431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 10:25:31.904490 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:25:31.908978 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 13 10:25:31.909040 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 13 10:25:31.909088 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 13 10:25:31.909143 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:25:31.909515 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 10:25:31.909627 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 10:25:31.936724 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 10:25:31.936898 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 10:25:31.937883 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 10:25:31.939577 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 10:25:31.939649 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 10:25:31.944243 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 10:25:31.964912 systemd[1]: Switching root.
Sep 13 10:25:32.001652 systemd-journald[220]: Journal stopped
Sep 13 10:25:33.309797 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 13 10:25:33.309875 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 10:25:33.309893 kernel: SELinux: policy capability open_perms=1
Sep 13 10:25:33.309914 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 10:25:33.309939 kernel: SELinux: policy capability always_check_network=0
Sep 13 10:25:33.309974 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 10:25:33.309990 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 10:25:33.310004 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 10:25:33.310020 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 10:25:33.310038 kernel: SELinux: policy capability userspace_initial_context=0
Sep 13 10:25:33.310053 kernel: audit: type=1403 audit(1757759132.447:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 10:25:33.310069 systemd[1]: Successfully loaded SELinux policy in 66.626ms.
Sep 13 10:25:33.310088 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.027ms.
Sep 13 10:25:33.310113 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 10:25:33.310130 systemd[1]: Detected virtualization kvm.
Sep 13 10:25:33.310146 systemd[1]: Detected architecture x86-64.
Sep 13 10:25:33.310161 systemd[1]: Detected first boot.
Sep 13 10:25:33.310177 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 10:25:33.310193 zram_generator::config[1132]: No configuration found.
Sep 13 10:25:33.310210 kernel: Guest personality initialized and is inactive
Sep 13 10:25:33.310225 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 10:25:33.310252 kernel: Initialized host personality
Sep 13 10:25:33.310267 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 10:25:33.310282 systemd[1]: Populated /etc with preset unit settings.
Sep 13 10:25:33.310299 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 13 10:25:33.310315 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 10:25:33.310330 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 10:25:33.310346 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 10:25:33.310362 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 10:25:33.310378 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 10:25:33.310399 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 10:25:33.310422 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 10:25:33.310438 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 10:25:33.310460 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 10:25:33.310476 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 10:25:33.310493 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 10:25:33.310508 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:25:33.310524 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:25:33.310540 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 10:25:33.310558 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 10:25:33.310575 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 10:25:33.310591 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 10:25:33.310607 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 10:25:33.310623 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:25:33.310639 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:25:33.310655 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 10:25:33.310679 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 10:25:33.310698 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 10:25:33.310714 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 10:25:33.310730 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:25:33.310745 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 10:25:33.310763 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 10:25:33.310778 systemd[1]: Reached target swap.target - Swaps.
Sep 13 10:25:33.310794 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 10:25:33.310810 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 10:25:33.310828 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 13 10:25:33.310844 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:25:33.310860 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:25:33.310876 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:25:33.310892 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 10:25:33.310908 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 10:25:33.310924 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 10:25:33.310939 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 10:25:33.310970 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:25:33.310989 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 10:25:33.311005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 10:25:33.311021 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 10:25:33.311038 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 10:25:33.311053 systemd[1]: Reached target machines.target - Containers.
Sep 13 10:25:33.311069 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 10:25:33.311085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:25:33.311101 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 10:25:33.311136 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 10:25:33.311156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:25:33.311171 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 10:25:33.311187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:25:33.311203 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 10:25:33.311218 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:25:33.311234 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 10:25:33.311258 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 10:25:33.311275 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 10:25:33.311293 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 10:25:33.311309 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 10:25:33.311325 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:25:33.311341 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 10:25:33.311378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 10:25:33.311394 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 10:25:33.311414 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 10:25:33.311428 kernel: loop: module loaded
Sep 13 10:25:33.311442 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 13 10:25:33.311460 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 10:25:33.311475 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 10:25:33.311490 systemd[1]: Stopped verity-setup.service.
Sep 13 10:25:33.311507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:25:33.311524 kernel: fuse: init (API version 7.41)
Sep 13 10:25:33.311538 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 10:25:33.311553 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 10:25:33.311567 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 10:25:33.311581 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 10:25:33.311595 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 10:25:33.311612 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 10:25:33.311627 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:25:33.311641 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 10:25:33.311655 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 10:25:33.311670 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 10:25:33.311710 systemd-journald[1204]: Collecting audit messages is disabled.
Sep 13 10:25:33.311736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:25:33.311754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:25:33.311768 systemd-journald[1204]: Journal started
Sep 13 10:25:33.311793 systemd-journald[1204]: Runtime Journal (/run/log/journal/908668a050e7470d82bdb4e335fcee25) is 6M, max 48.6M, 42.5M free.
Sep 13 10:25:33.065778 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 10:25:33.081342 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 10:25:33.081848 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 10:25:33.314101 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 10:25:33.315115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:25:33.315436 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:25:33.316926 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 10:25:33.317257 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 10:25:33.318623 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:25:33.318831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:25:33.320287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:25:33.321730 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:25:33.323334 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 10:25:33.324879 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 13 10:25:33.330975 kernel: ACPI: bus type drm_connector registered
Sep 13 10:25:33.331261 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 10:25:33.331490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 10:25:33.343191 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 10:25:33.346034 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 10:25:33.348288 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 10:25:33.349627 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 10:25:33.349737 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 10:25:33.351837 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 13 10:25:33.358995 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 10:25:33.360623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:25:33.362146 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 10:25:33.367061 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 10:25:33.368483 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 10:25:33.369569 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 10:25:33.370675 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 10:25:33.372190 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 10:25:33.375191 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 10:25:33.379112 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 10:25:33.381985 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 10:25:33.383273 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 10:25:33.389352 systemd-journald[1204]: Time spent on flushing to /var/log/journal/908668a050e7470d82bdb4e335fcee25 is 27.801ms for 988 entries.
Sep 13 10:25:33.389352 systemd-journald[1204]: System Journal (/var/log/journal/908668a050e7470d82bdb4e335fcee25) is 8M, max 195.6M, 187.6M free.
Sep 13 10:25:33.432149 systemd-journald[1204]: Received client request to flush runtime journal.
Sep 13 10:25:33.432288 kernel: loop0: detected capacity change from 0 to 221472
Sep 13 10:25:33.411524 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 10:25:33.413342 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:25:33.415414 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 10:25:33.418242 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 13 10:25:33.420059 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:25:33.434174 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 10:25:33.491227 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 10:25:33.496082 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Sep 13 10:25:33.496096 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Sep 13 10:25:33.504406 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 10:25:33.510272 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 10:25:33.514031 kernel: loop1: detected capacity change from 0 to 110984
Sep 13 10:25:33.523207 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 13 10:25:33.545040 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 10:25:33.552977 kernel: loop2: detected capacity change from 0 to 128016
Sep 13 10:25:33.553598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 10:25:33.584006 kernel: loop3: detected capacity change from 0 to 221472
Sep 13 10:25:33.585470 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 13 10:25:33.585491 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 13 10:25:33.590773 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:25:33.596984 kernel: loop4: detected capacity change from 0 to 110984
Sep 13 10:25:33.608989 kernel: loop5: detected capacity change from 0 to 128016
Sep 13 10:25:33.616036 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 10:25:33.616629 (sd-merge)[1277]: Merged extensions into '/usr'.
Sep 13 10:25:33.625624 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 10:25:33.625640 systemd[1]: Reloading...
Sep 13 10:25:33.767995 zram_generator::config[1307]: No configuration found.
Sep 13 10:25:33.873446 ldconfig[1246]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 10:25:33.987625 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 10:25:33.987871 systemd[1]: Reloading finished in 361 ms.
Sep 13 10:25:34.020246 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 10:25:34.022051 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 10:25:34.041630 systemd[1]: Starting ensure-sysext.service...
Sep 13 10:25:34.043733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 10:25:34.068408 systemd[1]: Reload requested from client PID 1341 ('systemctl') (unit ensure-sysext.service)...
Sep 13 10:25:34.068424 systemd[1]: Reloading...
Sep 13 10:25:34.075680 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 13 10:25:34.075733 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 13 10:25:34.076287 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 10:25:34.076638 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 10:25:34.077945 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 10:25:34.078356 systemd-tmpfiles[1342]: ACLs are not supported, ignoring.
Sep 13 10:25:34.078462 systemd-tmpfiles[1342]: ACLs are not supported, ignoring.
Sep 13 10:25:34.084266 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 10:25:34.084283 systemd-tmpfiles[1342]: Skipping /boot
Sep 13 10:25:34.098562 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 10:25:34.098579 systemd-tmpfiles[1342]: Skipping /boot
Sep 13 10:25:34.197988 zram_generator::config[1369]: No configuration found.
Sep 13 10:25:34.422234 systemd[1]: Reloading finished in 353 ms.
Sep 13 10:25:34.452269 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 10:25:34.482457 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:25:34.494357 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 10:25:34.497734 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 10:25:34.520582 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 10:25:34.527830 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 10:25:34.534229 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:25:34.537136 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 10:25:34.543231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:25:34.543467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:25:34.550567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:25:34.554312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:25:34.557756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:25:34.560182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:25:34.560352 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:25:34.560474 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:25:34.568017 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 10:25:34.570349 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:25:34.570877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:25:34.573057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:25:34.573373 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:25:34.575438 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:25:34.575939 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:25:34.579334 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 10:25:34.582339 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 10:25:34.612280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:25:34.612585 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:25:34.613644 systemd-udevd[1418]: Using default interface naming scheme 'v255'.
Sep 13 10:25:34.617190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:25:34.620703 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 10:25:34.623846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:25:34.628558 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:25:34.629908 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:25:34.630044 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:25:34.639760 augenrules[1446]: No rules
Sep 13 10:25:34.640192 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 10:25:34.641386 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:25:34.643798 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 10:25:34.644119 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 10:25:34.645766 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 10:25:34.647544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:25:34.647783 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:25:34.648810 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 10:25:34.658758 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 10:25:34.660438 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:25:34.663315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:25:34.663547 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:25:34.665209 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:25:34.665417 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:25:34.680272 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 10:25:34.681838 systemd[1]: Finished ensure-sysext.service.
Sep 13 10:25:34.689644 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 10:25:34.693013 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 10:25:34.693082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 10:25:34.697406 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 10:25:34.698544 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 10:25:34.700029 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 10:25:34.779058 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 10:25:34.849987 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 10:25:34.864977 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 10:25:34.867394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 10:25:34.869972 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 10:25:34.888027 kernel: ACPI: button: Power Button [PWRF]
Sep 13 10:25:34.896722 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 10:25:34.907114 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 10:25:34.907433 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 10:25:34.957515 systemd-networkd[1479]: lo: Link UP
Sep 13 10:25:34.957526 systemd-networkd[1479]: lo: Gained carrier
Sep 13 10:25:34.959355 systemd-networkd[1479]: Enumeration completed
Sep 13 10:25:34.959481 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 10:25:34.959795 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:25:34.959801 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 10:25:34.964081 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 13 10:25:34.965620 systemd-networkd[1479]: eth0: Link UP
Sep 13 10:25:34.965776 systemd-networkd[1479]: eth0: Gained carrier
Sep 13 10:25:34.965792 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:25:34.968140 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 10:25:34.979257 systemd-resolved[1411]: Positive Trust Anchors:
Sep 13 10:25:34.979268 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 10:25:34.979298 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 10:25:34.980591 systemd-networkd[1479]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 10:25:34.994811 systemd-resolved[1411]: Defaulting to hostname 'linux'.
Sep 13 10:25:34.999737 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 10:25:35.001177 systemd[1]: Reached target network.target - Network.
Sep 13 10:25:35.002399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:25:35.774765 systemd-resolved[1411]: Clock change detected. Flushing caches.
Sep 13 10:25:35.774818 systemd-timesyncd[1480]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 10:25:35.774858 systemd-timesyncd[1480]: Initial clock synchronization to Sat 2025-09-13 10:25:35.774724 UTC.
Sep 13 10:25:35.775082 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 10:25:35.779450 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 13 10:25:35.786781 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 10:25:35.788574 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 10:25:35.791572 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 10:25:35.792903 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 13 10:25:35.794129 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 10:25:35.796476 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 10:25:35.796583 systemd[1]: Reached target paths.target - Path Units.
Sep 13 10:25:35.797694 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 10:25:35.800718 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 10:25:35.801984 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 10:25:35.803299 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 10:25:35.807343 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 10:25:35.812666 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 10:25:35.820826 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 13 10:25:35.823595 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 13 10:25:35.824914 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 13 10:25:35.863960 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 10:25:35.920001 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 13 10:25:35.925984 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 10:25:35.929445 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 10:25:35.930685 systemd[1]: Reached target basic.target - Basic System.
Sep 13 10:25:35.931797 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 10:25:35.932039 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 10:25:35.934635 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 10:25:35.937946 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 10:25:35.941454 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 10:25:35.943650 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 10:25:35.948295 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 10:25:35.949295 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 10:25:35.954788 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 13 10:25:35.961407 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 10:25:35.961550 kernel: kvm_amd: TSC scaling supported
Sep 13 10:25:35.961584 kernel: kvm_amd: Nested Virtualization enabled
Sep 13 10:25:35.961602 kernel: kvm_amd: Nested Paging enabled
Sep 13 10:25:35.961615 kernel: kvm_amd: LBR virtualization supported
Sep 13 10:25:35.961635 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 13 10:25:35.961651 kernel: kvm_amd: Virtual GIF supported
Sep 13 10:25:35.961998 jq[1535]: false
Sep 13 10:25:35.965874 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 10:25:35.969834 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 10:25:35.975552 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 10:25:35.980845 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 10:25:35.983851 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Refreshing passwd entry cache
Sep 13 10:25:35.983863 oslogin_cache_refresh[1537]: Refreshing passwd entry cache
Sep 13 10:25:35.986897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:25:35.989046 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 10:25:35.989970 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 10:25:35.992560 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 10:25:35.995587 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 10:25:35.998442 oslogin_cache_refresh[1537]: Failure getting users, quitting
Sep 13 10:25:36.000706 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Failure getting users, quitting
Sep 13 10:25:36.000706 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 10:25:36.000706 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Refreshing group entry cache
Sep 13 10:25:35.998460 oslogin_cache_refresh[1537]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 10:25:35.998524 oslogin_cache_refresh[1537]: Refreshing group entry cache
Sep 13 10:25:36.003412 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Failure getting groups, quitting
Sep 13 10:25:36.003412 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 10:25:36.003405 oslogin_cache_refresh[1537]: Failure getting groups, quitting
Sep 13 10:25:36.003416 oslogin_cache_refresh[1537]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 10:25:36.005884 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 10:25:36.007721 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 10:25:36.008100 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 10:25:36.008664 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 13 10:25:36.008952 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 13 10:25:36.010718 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 10:25:36.010974 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 10:25:36.013909 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 10:25:36.014157 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 10:25:36.019318 extend-filesystems[1536]: Found /dev/vda6
Sep 13 10:25:36.027772 jq[1554]: true
Sep 13 10:25:36.028937 update_engine[1549]: I20250913 10:25:36.027948 1549 main.cc:92] Flatcar Update Engine starting
Sep 13 10:25:36.050826 extend-filesystems[1536]: Found /dev/vda9
Sep 13 10:25:36.059682 extend-filesystems[1536]: Checking size of /dev/vda9
Sep 13 10:25:36.063857 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 10:25:36.067584 tar[1558]: linux-amd64/helm
Sep 13 10:25:36.071497 jq[1571]: true
Sep 13 10:25:36.083255 extend-filesystems[1536]: Resized partition /dev/vda9
Sep 13 10:25:36.098283 kernel: EDAC MC: Ver: 3.0.0
Sep 13 10:25:36.099154 dbus-daemon[1533]: [system] SELinux support is enabled
Sep 13 10:25:36.100393 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 10:25:36.100476 extend-filesystems[1583]: resize2fs 1.47.3 (8-Jul-2025)
Sep 13 10:25:36.103878 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 10:25:36.103919 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 10:25:36.105315 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 10:25:36.105359 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 10:25:36.110180 update_engine[1549]: I20250913 10:25:36.110108 1549 update_check_scheduler.cc:74] Next update check in 4m58s
Sep 13 10:25:36.114796 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 10:25:36.118743 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 10:25:36.120700 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 10:25:36.174343 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 10:25:36.199200 systemd-logind[1544]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 13 10:25:36.199591 systemd-logind[1544]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 10:25:36.200562 extend-filesystems[1583]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 10:25:36.200562 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 10:25:36.200562 extend-filesystems[1583]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 10:25:36.210544 extend-filesystems[1536]: Resized filesystem in /dev/vda9
Sep 13 10:25:36.202980 systemd-logind[1544]: New seat seat0.
Sep 13 10:25:36.204595 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 10:25:36.204962 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 10:25:36.226305 bash[1602]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 10:25:36.379754 locksmithd[1590]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 10:25:36.392946 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 10:25:36.395329 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:25:36.397780 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 10:25:36.409320 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 10:25:36.558031 containerd[1559]: time="2025-09-13T10:25:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 13 10:25:36.558896 containerd[1559]: time="2025-09-13T10:25:36.558845733Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 13 10:25:36.576110 containerd[1559]: time="2025-09-13T10:25:36.576035937Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.869µs"
Sep 13 10:25:36.576110 containerd[1559]: time="2025-09-13T10:25:36.576099406Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 13 10:25:36.576239 containerd[1559]: time="2025-09-13T10:25:36.576146996Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 13 10:25:36.576475 containerd[1559]: time="2025-09-13T10:25:36.576418134Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 13 10:25:36.576475 containerd[1559]: time="2025-09-13T10:25:36.576445015Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 13 10:25:36.576581 containerd[1559]: time="2025-09-13T10:25:36.576500509Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 10:25:36.576650 containerd[1559]: time="2025-09-13T10:25:36.576609293Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 10:25:36.576650 containerd[1559]: time="2025-09-13T10:25:36.576631775Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 10:25:36.577020 containerd[1559]: time="2025-09-13T10:25:36.576976942Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 10:25:36.577020 containerd[1559]: time="2025-09-13T10:25:36.576998342Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 10:25:36.577020 containerd[1559]: time="2025-09-13T10:25:36.577011868Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 10:25:36.577132 containerd[1559]: time="2025-09-13T10:25:36.577023409Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 13 10:25:36.577222 containerd[1559]: time="2025-09-13T10:25:36.577177839Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 13 10:25:36.577591 containerd[1559]: time="2025-09-13T10:25:36.577566668Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 10:25:36.577639 containerd[1559]: time="2025-09-13T10:25:36.577625048Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 10:25:36.577691 containerd[1559]: time="2025-09-13T10:25:36.577640797Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 13 10:25:36.577730 containerd[1559]: time="2025-09-13T10:25:36.577700930Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 13 10:25:36.578051 containerd[1559]: time="2025-09-13T10:25:36.578004159Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 13 10:25:36.578133 containerd[1559]: time="2025-09-13T10:25:36.578107934Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 10:25:36.584913 containerd[1559]: time="2025-09-13T10:25:36.584881619Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 13 10:25:36.584991 containerd[1559]: time="2025-09-13T10:25:36.584939858Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 13 10:25:36.584991 containerd[1559]: time="2025-09-13T10:25:36.584959656Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 13 10:25:36.585062 containerd[1559]: time="2025-09-13T10:25:36.585023876Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 13 10:25:36.585062 containerd[1559]: time="2025-09-13T10:25:36.585045787Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 13 10:25:36.585136 containerd[1559]: time="2025-09-13T10:25:36.585060785Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 13 10:25:36.585136 containerd[1559]: time="2025-09-13T10:25:36.585107513Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 13 10:25:36.585136 containerd[1559]: time="2025-09-13T10:25:36.585125026Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 13 10:25:36.585216 containerd[1559]: time="2025-09-13T10:25:36.585138932Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 13 10:25:36.585216 containerd[1559]: time="2025-09-13T10:25:36.585151686Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 13 10:25:36.585216 containerd[1559]: time="2025-09-13T10:25:36.585164349Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 13 10:25:36.585216 containerd[1559]: time="2025-09-13T10:25:36.585183556Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585368683Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585405863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585424277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585436179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585448583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585461016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585474111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585485712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585525006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585540084Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585554141Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585692500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585712989Z" level=info msg="Start snapshots syncer"
Sep 13 10:25:36.586373 containerd[1559]: time="2025-09-13T10:25:36.585754356Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 13 10:25:36.586796 containerd[1559]: time="2025-09-13T10:25:36.586071601Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 13 10:25:36.586796 containerd[1559]: time="2025-09-13T10:25:36.586154026Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586251479Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586411619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586441595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586456072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586470469Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586498652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586514883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586529220Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586562272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586577781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586591887Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586642542Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586669723Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 13 10:25:36.587042 containerd[1559]: time="2025-09-13T10:25:36.586681996Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 13 10:25:36.587428 containerd[1559]: time="2025-09-13T10:25:36.586694971Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 13 10:25:36.587428 containerd[1559]: time="2025-09-13T10:25:36.586706652Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 13 10:25:36.587428 containerd[1559]: time="2025-09-13T10:25:36.586719777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 13 10:25:36.587428 containerd[1559]: time="2025-09-13T10:25:36.586733032Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 13 10:25:36.587428 containerd[1559]: time="2025-09-13T10:25:36.586764371Z" level=info msg="runtime interface created"
Sep 13 10:25:36.587428 containerd[1559]: time="2025-09-13T10:25:36.586773017Z" level=info msg="created NRI interface"
Sep 13 10:25:36.587428 containerd[1559]: time="2025-09-13T10:25:36.586783857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 13 10:25:36.587428 containerd[1559]: time="2025-09-13T10:25:36.586797142Z" level=info msg="Connect containerd service"
Sep 13 10:25:36.587428 containerd[1559]: time="2025-09-13T10:25:36.586828471Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 13 10:25:36.588008
containerd[1559]: time="2025-09-13T10:25:36.587957799Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 10:25:36.612316 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 10:25:36.636968 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 10:25:36.641612 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 10:25:36.668666 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 10:25:36.668963 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 10:25:36.673528 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 10:25:36.690833 tar[1558]: linux-amd64/LICENSE Sep 13 10:25:36.692531 tar[1558]: linux-amd64/README.md Sep 13 10:25:36.696152 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 10:25:36.699538 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 10:25:36.701999 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 10:25:36.702755 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 10:25:36.713624 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 10:25:36.724579 containerd[1559]: time="2025-09-13T10:25:36.724537734Z" level=info msg="Start subscribing containerd event" Sep 13 10:25:36.724788 containerd[1559]: time="2025-09-13T10:25:36.724741025Z" level=info msg="Start recovering state" Sep 13 10:25:36.724831 containerd[1559]: time="2025-09-13T10:25:36.724602726Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 10:25:36.724875 containerd[1559]: time="2025-09-13T10:25:36.724860940Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 13 10:25:36.725082 containerd[1559]: time="2025-09-13T10:25:36.725062198Z" level=info msg="Start event monitor" Sep 13 10:25:36.725105 containerd[1559]: time="2025-09-13T10:25:36.725082576Z" level=info msg="Start cni network conf syncer for default" Sep 13 10:25:36.725105 containerd[1559]: time="2025-09-13T10:25:36.725096122Z" level=info msg="Start streaming server" Sep 13 10:25:36.725151 containerd[1559]: time="2025-09-13T10:25:36.725108344Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 13 10:25:36.725151 containerd[1559]: time="2025-09-13T10:25:36.725118994Z" level=info msg="runtime interface starting up..." Sep 13 10:25:36.725151 containerd[1559]: time="2025-09-13T10:25:36.725125016Z" level=info msg="starting plugins..." Sep 13 10:25:36.725151 containerd[1559]: time="2025-09-13T10:25:36.725140415Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 13 10:25:36.725850 containerd[1559]: time="2025-09-13T10:25:36.725283643Z" level=info msg="containerd successfully booted in 0.168204s" Sep 13 10:25:36.725364 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 10:25:37.566582 systemd-networkd[1479]: eth0: Gained IPv6LL Sep 13 10:25:37.570714 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 10:25:37.573092 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 10:25:37.577648 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 10:25:37.581078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:25:37.584354 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 10:25:37.621524 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 10:25:37.639994 systemd[1]: coreos-metadata.service: Deactivated successfully. 
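The `level=error` CNI record above is routine on first boot: containerd's CRI plugin found nothing in `/etc/cni/net.d` (the `confDir` shown in the cri config dump), and its conf syncer keeps retrying until a network plugin drops a config there. A minimal bridge conflist of the shape that would satisfy the lookup might look like this sketch; the staging directory, the `10-bridge.conflist` name, and the `10.88.0.0/16` subnet are illustrative assumptions, not values from this log:

```shell
# Sketch only: stage a minimal CNI bridge config of the kind containerd's
# CRI plugin expects in /etc/cni/net.d. CNI_CONF_DIR, the file name, and
# the subnet are assumptions for illustration.
CNI_CONF_DIR="${CNI_CONF_DIR:-./cni-net.d}"   # copy to /etc/cni/net.d on the node
mkdir -p "$CNI_CONF_DIR"
cat > "$CNI_CONF_DIR/10-bridge.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
echo "staged $CNI_CONF_DIR/10-bridge.conflist"
```

Once a file like this lands in the real `confDir`, the cni network conf syncer started later in this log picks it up without a containerd restart.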
Sep 13 10:25:37.640568 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 10:25:37.642321 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 10:25:39.173471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:25:39.175442 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 10:25:39.177364 systemd[1]: Startup finished in 3.188s (kernel) + 6.732s (initrd) + 6.023s (userspace) = 15.944s. Sep 13 10:25:39.210846 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:25:39.859916 kubelet[1678]: E0913 10:25:39.859781 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:25:39.863923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:25:39.864143 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:25:39.864584 systemd[1]: kubelet.service: Consumed 2.045s CPU time, 265.8M memory peak. Sep 13 10:25:39.985791 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 10:25:39.987898 systemd[1]: Started sshd@0-10.0.0.126:22-10.0.0.1:40942.service - OpenSSH per-connection server daemon (10.0.0.1:40942). Sep 13 10:25:40.097568 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 40942 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:25:40.099383 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:25:40.106080 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
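The kubelet exit recorded above (and repeated at 10:25:50) comes down to one missing file, `/var/lib/kubelet/config.yaml`; on a kubeadm-managed node that file is written during `kubeadm init` or `kubeadm join`, so crash loops before that step are expected. A pre-flight check along these lines (the messages are hypothetical, the path is the one from the log) distinguishes "not yet joined" from a real fault:

```shell
# Hypothetical pre-flight check mirroring the failure in the log: kubelet
# exits with status 1 until /var/lib/kubelet/config.yaml exists, and
# kubeadm init/join is what normally creates it.
KUBELET_CONFIG="${KUBELET_CONFIG:-/var/lib/kubelet/config.yaml}"
if [ -f "$KUBELET_CONFIG" ]; then
  echo "kubelet config present: $KUBELET_CONFIG"
else
  echo "kubelet config missing: $KUBELET_CONFIG (expected before kubeadm init/join)" >&2
fi
```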
Sep 13 10:25:40.107143 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 10:25:40.114228 systemd-logind[1544]: New session 1 of user core. Sep 13 10:25:40.132165 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 10:25:40.135599 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 10:25:40.154756 (systemd)[1696]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 10:25:40.157496 systemd-logind[1544]: New session c1 of user core. Sep 13 10:25:40.318123 systemd[1696]: Queued start job for default target default.target. Sep 13 10:25:40.337576 systemd[1696]: Created slice app.slice - User Application Slice. Sep 13 10:25:40.337600 systemd[1696]: Reached target paths.target - Paths. Sep 13 10:25:40.337640 systemd[1696]: Reached target timers.target - Timers. Sep 13 10:25:40.339187 systemd[1696]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 10:25:40.351021 systemd[1696]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 10:25:40.351175 systemd[1696]: Reached target sockets.target - Sockets. Sep 13 10:25:40.351225 systemd[1696]: Reached target basic.target - Basic System. Sep 13 10:25:40.351301 systemd[1696]: Reached target default.target - Main User Target. Sep 13 10:25:40.351346 systemd[1696]: Startup finished in 187ms. Sep 13 10:25:40.351770 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 10:25:40.353672 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 10:25:40.420944 systemd[1]: Started sshd@1-10.0.0.126:22-10.0.0.1:40954.service - OpenSSH per-connection server daemon (10.0.0.1:40954). 
Sep 13 10:25:40.475401 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 40954 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:25:40.477188 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:25:40.482029 systemd-logind[1544]: New session 2 of user core. Sep 13 10:25:40.495421 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 10:25:40.549134 sshd[1710]: Connection closed by 10.0.0.1 port 40954 Sep 13 10:25:40.549492 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Sep 13 10:25:40.562216 systemd[1]: sshd@1-10.0.0.126:22-10.0.0.1:40954.service: Deactivated successfully. Sep 13 10:25:40.564169 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 10:25:40.564926 systemd-logind[1544]: Session 2 logged out. Waiting for processes to exit. Sep 13 10:25:40.568130 systemd[1]: Started sshd@2-10.0.0.126:22-10.0.0.1:40970.service - OpenSSH per-connection server daemon (10.0.0.1:40970). Sep 13 10:25:40.568827 systemd-logind[1544]: Removed session 2. Sep 13 10:25:40.619104 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 40970 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:25:40.620601 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:25:40.625673 systemd-logind[1544]: New session 3 of user core. Sep 13 10:25:40.635407 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 10:25:40.686907 sshd[1719]: Connection closed by 10.0.0.1 port 40970 Sep 13 10:25:40.687655 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Sep 13 10:25:40.700913 systemd[1]: sshd@2-10.0.0.126:22-10.0.0.1:40970.service: Deactivated successfully. Sep 13 10:25:40.702817 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 10:25:40.703550 systemd-logind[1544]: Session 3 logged out. Waiting for processes to exit. 
Sep 13 10:25:40.706158 systemd[1]: Started sshd@3-10.0.0.126:22-10.0.0.1:40972.service - OpenSSH per-connection server daemon (10.0.0.1:40972). Sep 13 10:25:40.706963 systemd-logind[1544]: Removed session 3. Sep 13 10:25:40.756454 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 40972 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:25:40.757798 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:25:40.762538 systemd-logind[1544]: New session 4 of user core. Sep 13 10:25:40.773473 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 10:25:40.828035 sshd[1728]: Connection closed by 10.0.0.1 port 40972 Sep 13 10:25:40.828431 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Sep 13 10:25:40.847092 systemd[1]: sshd@3-10.0.0.126:22-10.0.0.1:40972.service: Deactivated successfully. Sep 13 10:25:40.848940 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 10:25:40.849710 systemd-logind[1544]: Session 4 logged out. Waiting for processes to exit. Sep 13 10:25:40.852501 systemd[1]: Started sshd@4-10.0.0.126:22-10.0.0.1:40982.service - OpenSSH per-connection server daemon (10.0.0.1:40982). Sep 13 10:25:40.853032 systemd-logind[1544]: Removed session 4. Sep 13 10:25:40.913065 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 40982 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:25:40.914769 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:25:40.919677 systemd-logind[1544]: New session 5 of user core. Sep 13 10:25:40.928485 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 13 10:25:40.988800 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 10:25:40.989123 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:25:41.006780 sudo[1738]: pam_unix(sudo:session): session closed for user root Sep 13 10:25:41.008591 sshd[1737]: Connection closed by 10.0.0.1 port 40982 Sep 13 10:25:41.008977 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Sep 13 10:25:41.018635 systemd[1]: sshd@4-10.0.0.126:22-10.0.0.1:40982.service: Deactivated successfully. Sep 13 10:25:41.020803 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 10:25:41.021654 systemd-logind[1544]: Session 5 logged out. Waiting for processes to exit. Sep 13 10:25:41.024973 systemd[1]: Started sshd@5-10.0.0.126:22-10.0.0.1:40996.service - OpenSSH per-connection server daemon (10.0.0.1:40996). Sep 13 10:25:41.026002 systemd-logind[1544]: Removed session 5. Sep 13 10:25:41.074878 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 40996 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:25:41.076554 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:25:41.081826 systemd-logind[1544]: New session 6 of user core. Sep 13 10:25:41.096510 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 13 10:25:41.150880 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 10:25:41.151189 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:25:41.293144 sudo[1749]: pam_unix(sudo:session): session closed for user root Sep 13 10:25:41.300046 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 13 10:25:41.300468 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:25:41.310823 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 10:25:41.355900 augenrules[1771]: No rules Sep 13 10:25:41.357664 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 10:25:41.357956 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 10:25:41.359563 sudo[1748]: pam_unix(sudo:session): session closed for user root Sep 13 10:25:41.361141 sshd[1747]: Connection closed by 10.0.0.1 port 40996 Sep 13 10:25:41.361591 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Sep 13 10:25:41.373927 systemd[1]: sshd@5-10.0.0.126:22-10.0.0.1:40996.service: Deactivated successfully. Sep 13 10:25:41.375711 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 10:25:41.376636 systemd-logind[1544]: Session 6 logged out. Waiting for processes to exit. Sep 13 10:25:41.379396 systemd[1]: Started sshd@6-10.0.0.126:22-10.0.0.1:41008.service - OpenSSH per-connection server daemon (10.0.0.1:41008). Sep 13 10:25:41.380170 systemd-logind[1544]: Removed session 6. Sep 13 10:25:41.433098 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 41008 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:25:41.434683 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:25:41.439712 systemd-logind[1544]: New session 7 of user core. 
Sep 13 10:25:41.455426 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 10:25:41.509031 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 10:25:41.509368 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:25:42.256478 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 10:25:42.274737 (dockerd)[1804]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 10:25:42.867383 dockerd[1804]: time="2025-09-13T10:25:42.867248654Z" level=info msg="Starting up" Sep 13 10:25:42.868609 dockerd[1804]: time="2025-09-13T10:25:42.868566516Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 13 10:25:42.893602 dockerd[1804]: time="2025-09-13T10:25:42.893548796Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 13 10:25:43.201006 dockerd[1804]: time="2025-09-13T10:25:43.200879791Z" level=info msg="Loading containers: start." Sep 13 10:25:43.212305 kernel: Initializing XFRM netlink socket Sep 13 10:25:43.478340 systemd-networkd[1479]: docker0: Link UP Sep 13 10:25:43.483383 dockerd[1804]: time="2025-09-13T10:25:43.483341418Z" level=info msg="Loading containers: done." 
Sep 13 10:25:43.501739 dockerd[1804]: time="2025-09-13T10:25:43.501677261Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 10:25:43.501940 dockerd[1804]: time="2025-09-13T10:25:43.501771157Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 13 10:25:43.501940 dockerd[1804]: time="2025-09-13T10:25:43.501877276Z" level=info msg="Initializing buildkit" Sep 13 10:25:43.533876 dockerd[1804]: time="2025-09-13T10:25:43.533811966Z" level=info msg="Completed buildkit initialization" Sep 13 10:25:43.539491 dockerd[1804]: time="2025-09-13T10:25:43.539436196Z" level=info msg="Daemon has completed initialization" Sep 13 10:25:43.539729 dockerd[1804]: time="2025-09-13T10:25:43.539643404Z" level=info msg="API listen on /run/docker.sock" Sep 13 10:25:43.539787 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 10:25:44.480955 containerd[1559]: time="2025-09-13T10:25:44.480890475Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 10:25:45.070436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964350599.mount: Deactivated successfully. 
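The overlay2 warning at 10:25:43 names a specific kernel option, `CONFIG_OVERLAY_FS_REDIRECT_DIR`. A quick way to confirm what the running kernel was built with is to grep the kernel config; this sketch tries the usual locations, which vary by distro:

```shell
# Sketch: look up the kernel option behind docker's overlay2 warning.
# /proc/config.gz and /boot/config-$(uname -r) are the common locations;
# neither is guaranteed to exist on every distro.
for cfg in /proc/config.gz "/boot/config-$(uname -r)"; do
  [ -e "$cfg" ] || continue
  case "$cfg" in
    *.gz) zcat "$cfg" | grep -m1 'CONFIG_OVERLAY_FS_REDIRECT_DIR' ;;
    *)    grep -m1 'CONFIG_OVERLAY_FS_REDIRECT_DIR' "$cfg" ;;
  esac && break
done
```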
Sep 13 10:25:46.238428 containerd[1559]: time="2025-09-13T10:25:46.238351650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:46.239254 containerd[1559]: time="2025-09-13T10:25:46.239180725Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 13 10:25:46.240564 containerd[1559]: time="2025-09-13T10:25:46.240499919Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:46.243155 containerd[1559]: time="2025-09-13T10:25:46.243106599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:46.244023 containerd[1559]: time="2025-09-13T10:25:46.243966011Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.763018168s" Sep 13 10:25:46.244023 containerd[1559]: time="2025-09-13T10:25:46.244008480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 10:25:46.245345 containerd[1559]: time="2025-09-13T10:25:46.245144401Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 10:25:47.627381 containerd[1559]: time="2025-09-13T10:25:47.627305821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:47.627860 containerd[1559]: time="2025-09-13T10:25:47.627812421Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 13 10:25:47.628985 containerd[1559]: time="2025-09-13T10:25:47.628945717Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:47.631850 containerd[1559]: time="2025-09-13T10:25:47.631787928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:47.632921 containerd[1559]: time="2025-09-13T10:25:47.632882852Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.38771139s" Sep 13 10:25:47.632921 containerd[1559]: time="2025-09-13T10:25:47.632917597Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 10:25:47.633577 containerd[1559]: time="2025-09-13T10:25:47.633550564Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 10:25:49.284629 containerd[1559]: time="2025-09-13T10:25:49.284512276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:49.285550 containerd[1559]: time="2025-09-13T10:25:49.285469562Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 13 10:25:49.286795 containerd[1559]: time="2025-09-13T10:25:49.286747008Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:49.289554 containerd[1559]: time="2025-09-13T10:25:49.289515922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:49.290514 containerd[1559]: time="2025-09-13T10:25:49.290464611Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.65688322s" Sep 13 10:25:49.292299 containerd[1559]: time="2025-09-13T10:25:49.290502162Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 10:25:49.293757 containerd[1559]: time="2025-09-13T10:25:49.293543446Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 10:25:50.115637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 10:25:50.145987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:25:50.510848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 10:25:50.514918 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:25:50.699461 kubelet[2097]: E0913 10:25:50.699342 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:25:50.705924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:25:50.707505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:25:50.708040 systemd[1]: kubelet.service: Consumed 400ms CPU time, 111M memory peak. Sep 13 10:25:50.998578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767539319.mount: Deactivated successfully. Sep 13 10:25:51.930858 containerd[1559]: time="2025-09-13T10:25:51.930808081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:51.931755 containerd[1559]: time="2025-09-13T10:25:51.931731763Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 13 10:25:51.932993 containerd[1559]: time="2025-09-13T10:25:51.932955379Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:51.934883 containerd[1559]: time="2025-09-13T10:25:51.934856174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:51.935503 containerd[1559]: time="2025-09-13T10:25:51.935454877Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.641872558s" Sep 13 10:25:51.935548 containerd[1559]: time="2025-09-13T10:25:51.935504219Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 10:25:51.935990 containerd[1559]: time="2025-09-13T10:25:51.935961497Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 10:25:52.487722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1563140261.mount: Deactivated successfully. Sep 13 10:25:53.240577 containerd[1559]: time="2025-09-13T10:25:53.240498175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:53.241373 containerd[1559]: time="2025-09-13T10:25:53.241299047Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 10:25:53.242535 containerd[1559]: time="2025-09-13T10:25:53.242492636Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:53.245352 containerd[1559]: time="2025-09-13T10:25:53.245320110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:53.246398 containerd[1559]: time="2025-09-13T10:25:53.246366813Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.310376652s"
Sep 13 10:25:53.246446 containerd[1559]: time="2025-09-13T10:25:53.246400115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 10:25:53.246867 containerd[1559]: time="2025-09-13T10:25:53.246841383Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 10:25:53.815528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3865524447.mount: Deactivated successfully.
Sep 13 10:25:53.821325 containerd[1559]: time="2025-09-13T10:25:53.821218320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 10:25:53.821965 containerd[1559]: time="2025-09-13T10:25:53.821913975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 13 10:25:53.823381 containerd[1559]: time="2025-09-13T10:25:53.823331354Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 10:25:53.825580 containerd[1559]: time="2025-09-13T10:25:53.825531120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 10:25:53.826321 containerd[1559]: time="2025-09-13T10:25:53.826284833Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 579.403526ms"
Sep 13 10:25:53.826321 containerd[1559]: time="2025-09-13T10:25:53.826315751Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 10:25:53.826983 containerd[1559]: time="2025-09-13T10:25:53.826941766Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 10:25:54.794439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397723741.mount: Deactivated successfully.
Sep 13 10:25:58.391150 containerd[1559]: time="2025-09-13T10:25:58.391084663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:25:58.392003 containerd[1559]: time="2025-09-13T10:25:58.391973009Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 13 10:25:58.393440 containerd[1559]: time="2025-09-13T10:25:58.393380489Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:25:58.396511 containerd[1559]: time="2025-09-13T10:25:58.396471427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:25:58.397632 containerd[1559]: time="2025-09-13T10:25:58.397592280Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.570607593s"
Sep 13 10:25:58.397689 containerd[1559]: time="2025-09-13T10:25:58.397629860Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 13 10:26:00.233018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:26:00.233223 systemd[1]: kubelet.service: Consumed 400ms CPU time, 111M memory peak.
Sep 13 10:26:00.235552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 10:26:00.262545 systemd[1]: Reload requested from client PID 2253 ('systemctl') (unit session-7.scope)...
Sep 13 10:26:00.262561 systemd[1]: Reloading...
Sep 13 10:26:00.373312 zram_generator::config[2295]: No configuration found.
Sep 13 10:26:01.299594 systemd[1]: Reloading finished in 1036 ms.
Sep 13 10:26:01.388566 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 10:26:01.388857 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 10:26:01.389570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:26:01.389638 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.3M memory peak.
Sep 13 10:26:01.391975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 10:26:01.585216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:26:01.596610 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 10:26:01.649716 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 10:26:01.649716 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 10:26:01.649716 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 10:26:01.650144 kubelet[2343]: I0913 10:26:01.649820    2343 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 10:26:01.848889 kubelet[2343]: I0913 10:26:01.848754    2343 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 10:26:01.848889 kubelet[2343]: I0913 10:26:01.848792    2343 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 10:26:01.849118 kubelet[2343]: I0913 10:26:01.849103    2343 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 10:26:01.881588 kubelet[2343]: E0913 10:26:01.881536    2343 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.126:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 13 10:26:01.886091 kubelet[2343]: I0913 10:26:01.886028    2343 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 10:26:01.894809 kubelet[2343]: I0913 10:26:01.894775    2343 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 13 10:26:01.901289 kubelet[2343]: I0913 10:26:01.901230    2343 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 10:26:01.902113 kubelet[2343]: I0913 10:26:01.902073    2343 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 10:26:01.902390 kubelet[2343]: I0913 10:26:01.902333    2343 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 10:26:01.902629 kubelet[2343]: I0913 10:26:01.902379    2343 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 10:26:01.902874 kubelet[2343]: I0913 10:26:01.902647    2343 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 10:26:01.902874 kubelet[2343]: I0913 10:26:01.902661    2343 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 10:26:01.902874 kubelet[2343]: I0913 10:26:01.902859    2343 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 10:26:01.905802 kubelet[2343]: I0913 10:26:01.905757    2343 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 10:26:01.905802 kubelet[2343]: I0913 10:26:01.905803    2343 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 10:26:01.905931 kubelet[2343]: I0913 10:26:01.905867    2343 kubelet.go:314] "Adding apiserver pod source"
Sep 13 10:26:01.905931 kubelet[2343]: I0913 10:26:01.905906    2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 10:26:02.016170 kubelet[2343]: W0913 10:26:02.015682    2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Sep 13 10:26:02.016170 kubelet[2343]: E0913 10:26:02.015783    2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 13 10:26:02.016170 kubelet[2343]: W0913 10:26:02.015863    2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Sep 13 10:26:02.016170 kubelet[2343]: E0913 10:26:02.015889    2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 13 10:26:02.019129 kubelet[2343]: I0913 10:26:02.019083    2343 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 13 10:26:02.019777 kubelet[2343]: I0913 10:26:02.019743    2343 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 10:26:02.020510 kubelet[2343]: W0913 10:26:02.020487    2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 10:26:02.022331 kubelet[2343]: I0913 10:26:02.022303    2343 server.go:1274] "Started kubelet"
Sep 13 10:26:02.022486 kubelet[2343]: I0913 10:26:02.022448    2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 10:26:02.022956 kubelet[2343]: I0913 10:26:02.022937    2343 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 10:26:02.023072 kubelet[2343]: I0913 10:26:02.022941    2343 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 10:26:02.024177 kubelet[2343]: I0913 10:26:02.024131    2343 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 10:26:02.025331 kubelet[2343]: I0913 10:26:02.025309    2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 10:26:02.026742 kubelet[2343]: E0913 10:26:02.026705    2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 10:26:02.026742 kubelet[2343]: I0913 10:26:02.026717    2343 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 10:26:02.027239 kubelet[2343]: I0913 10:26:02.026876    2343 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 10:26:02.027703 kubelet[2343]: I0913 10:26:02.027675    2343 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 10:26:02.027804 kubelet[2343]: I0913 10:26:02.027786    2343 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 10:26:02.028465 kubelet[2343]: W0913 10:26:02.028389    2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Sep 13 10:26:02.028465 kubelet[2343]: E0913 10:26:02.028465    2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 13 10:26:02.028955 kubelet[2343]: E0913 10:26:02.028920    2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="200ms"
Sep 13 10:26:02.029199 kubelet[2343]: I0913 10:26:02.029171    2343 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 10:26:02.029587 kubelet[2343]: E0913 10:26:02.029555    2343 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 10:26:02.030476 kubelet[2343]: I0913 10:26:02.030447    2343 factory.go:221] Registration of the containerd container factory successfully
Sep 13 10:26:02.030476 kubelet[2343]: I0913 10:26:02.030465    2343 factory.go:221] Registration of the systemd container factory successfully
Sep 13 10:26:02.030567 kubelet[2343]: E0913 10:26:02.029571    2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.126:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.126:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864d0a51506593f  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 10:26:02.022254911 +0000 UTC m=+0.420862349,LastTimestamp:2025-09-13 10:26:02.022254911 +0000 UTC m=+0.420862349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 13 10:26:02.043453 kubelet[2343]: I0913 10:26:02.043389    2343 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 10:26:02.043453 kubelet[2343]: I0913 10:26:02.043437    2343 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 10:26:02.043453 kubelet[2343]: I0913 10:26:02.043458    2343 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 10:26:02.048661 kubelet[2343]: I0913 10:26:02.048622    2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 10:26:02.051252 kubelet[2343]: I0913 10:26:02.050444    2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 10:26:02.051252 kubelet[2343]: I0913 10:26:02.050502    2343 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 10:26:02.051252 kubelet[2343]: I0913 10:26:02.050543    2343 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 10:26:02.051252 kubelet[2343]: E0913 10:26:02.050600    2343 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 10:26:02.051965 kubelet[2343]: W0913 10:26:02.051917    2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Sep 13 10:26:02.051965 kubelet[2343]: E0913 10:26:02.051951    2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 13 10:26:02.127650 kubelet[2343]: E0913 10:26:02.127473    2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 10:26:02.150935 kubelet[2343]: E0913 10:26:02.150885    2343 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 13 10:26:02.228365 kubelet[2343]: E0913 10:26:02.228310    2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 10:26:02.229913 kubelet[2343]: E0913 10:26:02.229864    2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="400ms"
Sep 13 10:26:02.329167 kubelet[2343]: E0913 10:26:02.329113    2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 10:26:02.351405 kubelet[2343]: E0913 10:26:02.351356    2343 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 13 10:26:02.429947 kubelet[2343]: E0913 10:26:02.429854    2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 10:26:02.531000 kubelet[2343]: E0913 10:26:02.530928    2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 10:26:02.631127 kubelet[2343]: E0913 10:26:02.631050    2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 10:26:02.631127 kubelet[2343]: E0913 10:26:02.631088    2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="800ms"
Sep 13 10:26:02.696173 kubelet[2343]: I0913 10:26:02.695981    2343 policy_none.go:49] "None policy: Start"
Sep 13 10:26:02.697262 kubelet[2343]: I0913 10:26:02.697233    2343 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 10:26:02.697339 kubelet[2343]: I0913 10:26:02.697291    2343 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 10:26:02.709865 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 13 10:26:02.724067 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 13 10:26:02.727875 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 13 10:26:02.731980 kubelet[2343]: E0913 10:26:02.731926    2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 10:26:02.747500 kubelet[2343]: I0913 10:26:02.747409    2343 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 10:26:02.747689 kubelet[2343]: I0913 10:26:02.747657    2343 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 10:26:02.747718 kubelet[2343]: I0913 10:26:02.747671    2343 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 10:26:02.747994 kubelet[2343]: I0913 10:26:02.747963    2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 10:26:02.749476 kubelet[2343]: E0913 10:26:02.749453    2343 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 13 10:26:02.760621 systemd[1]: Created slice kubepods-burstable-podf7d9d93eacaeddb526cf6f93412a289b.slice - libcontainer container kubepods-burstable-podf7d9d93eacaeddb526cf6f93412a289b.slice.
Sep 13 10:26:02.773498 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice.
Sep 13 10:26:02.776877 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice.
Sep 13 10:26:02.832947 kubelet[2343]: I0913 10:26:02.832879    2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7d9d93eacaeddb526cf6f93412a289b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f7d9d93eacaeddb526cf6f93412a289b\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:26:02.832947 kubelet[2343]: I0913 10:26:02.832933    2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7d9d93eacaeddb526cf6f93412a289b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f7d9d93eacaeddb526cf6f93412a289b\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:26:02.833103 kubelet[2343]: I0913 10:26:02.833026    2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:26:02.833103 kubelet[2343]: I0913 10:26:02.833053    2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 10:26:02.833103 kubelet[2343]: I0913 10:26:02.833071    2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:26:02.833103 kubelet[2343]: I0913 10:26:02.833087    2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7d9d93eacaeddb526cf6f93412a289b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f7d9d93eacaeddb526cf6f93412a289b\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:26:02.833103 kubelet[2343]: I0913 10:26:02.833104    2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:26:02.833237 kubelet[2343]: I0913 10:26:02.833123    2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:26:02.833237 kubelet[2343]: I0913 10:26:02.833144    2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:26:02.849383 kubelet[2343]: I0913 10:26:02.849334    2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 10:26:02.850009 kubelet[2343]: E0913 10:26:02.849959    2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost"
Sep 13 10:26:02.934111 kubelet[2343]: W0913 10:26:02.934031    2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Sep 13 10:26:02.934111 kubelet[2343]: E0913 10:26:02.934093    2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 13 10:26:03.052430 kubelet[2343]: I0913 10:26:03.052386    2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 10:26:03.052959 kubelet[2343]: E0913 10:26:03.052910    2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost"
Sep 13 10:26:03.072529 containerd[1559]: time="2025-09-13T10:26:03.072489470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f7d9d93eacaeddb526cf6f93412a289b,Namespace:kube-system,Attempt:0,}"
Sep 13 10:26:03.076398 containerd[1559]: time="2025-09-13T10:26:03.076354570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}"
Sep 13 10:26:03.079942 containerd[1559]: time="2025-09-13T10:26:03.079894300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}"
Sep 13 10:26:03.139764 containerd[1559]: time="2025-09-13T10:26:03.139696542Z" level=info msg="connecting to shim 9a968e49750f6cf7e30858bfc56c0fbf68a65ebf321817bbdad4a1eda22f4938" address="unix:///run/containerd/s/eb35ef65d04d55dbd673d690b49f0ef1dbe5dcee6f5f01fd27ac0e0a3bad275f" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:26:03.142287 containerd[1559]: time="2025-09-13T10:26:03.142143101Z" level=info msg="connecting to shim b63f812ab2869ec090fc2dee9d027c2ed0a4c52c06004698147ff349eecf75a3" address="unix:///run/containerd/s/e8ffe87ce196544c89a7fc552a34a740f886eb6390adb4f89a0b8217649ab6a6" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:26:03.156525 containerd[1559]: time="2025-09-13T10:26:03.156461859Z" level=info msg="connecting to shim 563369d905735221eff928efcfe84bcb6c3a7466138bafaa54d97b21dcdd996f" address="unix:///run/containerd/s/a184ff323e8584cd1ae00b0d76ca7d77bc642c5cd5e9d11b73bcbbee52251232" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:26:03.186500 systemd[1]: Started cri-containerd-b63f812ab2869ec090fc2dee9d027c2ed0a4c52c06004698147ff349eecf75a3.scope - libcontainer container b63f812ab2869ec090fc2dee9d027c2ed0a4c52c06004698147ff349eecf75a3.
Sep 13 10:26:03.191830 systemd[1]: Started cri-containerd-9a968e49750f6cf7e30858bfc56c0fbf68a65ebf321817bbdad4a1eda22f4938.scope - libcontainer container 9a968e49750f6cf7e30858bfc56c0fbf68a65ebf321817bbdad4a1eda22f4938.
Sep 13 10:26:03.195790 systemd[1]: Started cri-containerd-563369d905735221eff928efcfe84bcb6c3a7466138bafaa54d97b21dcdd996f.scope - libcontainer container 563369d905735221eff928efcfe84bcb6c3a7466138bafaa54d97b21dcdd996f.
Sep 13 10:26:03.315156 containerd[1559]: time="2025-09-13T10:26:03.315008898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"563369d905735221eff928efcfe84bcb6c3a7466138bafaa54d97b21dcdd996f\""
Sep 13 10:26:03.319954 containerd[1559]: time="2025-09-13T10:26:03.319920861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f7d9d93eacaeddb526cf6f93412a289b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b63f812ab2869ec090fc2dee9d027c2ed0a4c52c06004698147ff349eecf75a3\""
Sep 13 10:26:03.320386 containerd[1559]: time="2025-09-13T10:26:03.320346850Z" level=info msg="CreateContainer within sandbox \"563369d905735221eff928efcfe84bcb6c3a7466138bafaa54d97b21dcdd996f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 10:26:03.322023 containerd[1559]: time="2025-09-13T10:26:03.321986345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a968e49750f6cf7e30858bfc56c0fbf68a65ebf321817bbdad4a1eda22f4938\""
Sep 13 10:26:03.322189 containerd[1559]: time="2025-09-13T10:26:03.322152637Z" level=info msg="CreateContainer within sandbox \"b63f812ab2869ec090fc2dee9d027c2ed0a4c52c06004698147ff349eecf75a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 10:26:03.323905 containerd[1559]: time="2025-09-13T10:26:03.323866472Z" level=info msg="CreateContainer within sandbox \"9a968e49750f6cf7e30858bfc56c0fbf68a65ebf321817bbdad4a1eda22f4938\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 10:26:03.331154 containerd[1559]: time="2025-09-13T10:26:03.331122873Z" level=info msg="Container cf53793d94d68b0aa5f964b9fc885320352dd00f165baebf9411706a020fd795: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:26:03.339706 containerd[1559]: time="2025-09-13T10:26:03.339645399Z" level=info msg="CreateContainer within sandbox \"563369d905735221eff928efcfe84bcb6c3a7466138bafaa54d97b21dcdd996f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cf53793d94d68b0aa5f964b9fc885320352dd00f165baebf9411706a020fd795\""
Sep 13 10:26:03.340330 containerd[1559]: time="2025-09-13T10:26:03.340301720Z" level=info msg="StartContainer for \"cf53793d94d68b0aa5f964b9fc885320352dd00f165baebf9411706a020fd795\""
Sep 13 10:26:03.340465 containerd[1559]: time="2025-09-13T10:26:03.340427035Z" level=info msg="Container 8f144558c9dfba313f2fdbb7ffe38c20c4231f9813bac2f5b9c976c3a2512baa: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:26:03.341393 containerd[1559]: time="2025-09-13T10:26:03.341369783Z" level=info msg="Container 86ab4fb9b979c352710ce906d2b18bef4c9eb49911de8736d4750fee714131d4: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:26:03.341574 containerd[1559]: time="2025-09-13T10:26:03.341477746Z" level=info msg="connecting to shim cf53793d94d68b0aa5f964b9fc885320352dd00f165baebf9411706a020fd795" address="unix:///run/containerd/s/a184ff323e8584cd1ae00b0d76ca7d77bc642c5cd5e9d11b73bcbbee52251232" protocol=ttrpc version=3
Sep 13 10:26:03.351075 containerd[1559]: time="2025-09-13T10:26:03.351041284Z" level=info msg="CreateContainer within sandbox \"9a968e49750f6cf7e30858bfc56c0fbf68a65ebf321817bbdad4a1eda22f4938\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"86ab4fb9b979c352710ce906d2b18bef4c9eb49911de8736d4750fee714131d4\""
Sep 13 10:26:03.351541 containerd[1559]: time="2025-09-13T10:26:03.351494564Z" level=info msg="StartContainer for \"86ab4fb9b979c352710ce906d2b18bef4c9eb49911de8736d4750fee714131d4\""
Sep 13 10:26:03.353640 containerd[1559]: time="2025-09-13T10:26:03.353609271Z" level=info msg="connecting to shim 86ab4fb9b979c352710ce906d2b18bef4c9eb49911de8736d4750fee714131d4" address="unix:///run/containerd/s/eb35ef65d04d55dbd673d690b49f0ef1dbe5dcee6f5f01fd27ac0e0a3bad275f" protocol=ttrpc version=3
Sep 13 10:26:03.355502 containerd[1559]: time="2025-09-13T10:26:03.355478267Z" level=info msg="CreateContainer within sandbox \"b63f812ab2869ec090fc2dee9d027c2ed0a4c52c06004698147ff349eecf75a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f144558c9dfba313f2fdbb7ffe38c20c4231f9813bac2f5b9c976c3a2512baa\""
Sep 13 10:26:03.358288 containerd[1559]: time="2025-09-13T10:26:03.357490130Z" level=info msg="StartContainer for \"8f144558c9dfba313f2fdbb7ffe38c20c4231f9813bac2f5b9c976c3a2512baa\""
Sep 13 10:26:03.359290 containerd[1559]: time="2025-09-13T10:26:03.359245242Z" level=info msg="connecting to shim 8f144558c9dfba313f2fdbb7ffe38c20c4231f9813bac2f5b9c976c3a2512baa" address="unix:///run/containerd/s/e8ffe87ce196544c89a7fc552a34a740f886eb6390adb4f89a0b8217649ab6a6" protocol=ttrpc version=3
Sep 13 10:26:03.365393 systemd[1]: Started cri-containerd-cf53793d94d68b0aa5f964b9fc885320352dd00f165baebf9411706a020fd795.scope - libcontainer container cf53793d94d68b0aa5f964b9fc885320352dd00f165baebf9411706a020fd795.
Sep 13 10:26:03.384432 systemd[1]: Started cri-containerd-86ab4fb9b979c352710ce906d2b18bef4c9eb49911de8736d4750fee714131d4.scope - libcontainer container 86ab4fb9b979c352710ce906d2b18bef4c9eb49911de8736d4750fee714131d4.
Sep 13 10:26:03.387767 systemd[1]: Started cri-containerd-8f144558c9dfba313f2fdbb7ffe38c20c4231f9813bac2f5b9c976c3a2512baa.scope - libcontainer container 8f144558c9dfba313f2fdbb7ffe38c20c4231f9813bac2f5b9c976c3a2512baa.
Sep 13 10:26:03.410656 kubelet[2343]: W0913 10:26:03.410532    2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Sep 13 10:26:03.410656 kubelet[2343]: E0913 10:26:03.410614    2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 13 10:26:03.431938 kubelet[2343]: E0913 10:26:03.431878    2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="1.6s"
Sep 13 10:26:03.444183 containerd[1559]: time="2025-09-13T10:26:03.444148066Z" level=info msg="StartContainer for \"86ab4fb9b979c352710ce906d2b18bef4c9eb49911de8736d4750fee714131d4\" returns successfully"
Sep 13 10:26:03.455546 kubelet[2343]: I0913 10:26:03.455518    2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 10:26:03.455994 kubelet[2343]: E0913 10:26:03.455960    2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost"
Sep 13 10:26:03.456838 containerd[1559]: time="2025-09-13T10:26:03.456800618Z" level=info msg="StartContainer for \"cf53793d94d68b0aa5f964b9fc885320352dd00f165baebf9411706a020fd795\" returns successfully"
Sep 13 10:26:03.466289 containerd[1559]: time="2025-09-13T10:26:03.466205219Z" level=info msg="StartContainer for
\"8f144558c9dfba313f2fdbb7ffe38c20c4231f9813bac2f5b9c976c3a2512baa\" returns successfully" Sep 13 10:26:03.498024 kubelet[2343]: W0913 10:26:03.497842 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Sep 13 10:26:03.498194 kubelet[2343]: E0913 10:26:03.498018 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:26:04.257713 kubelet[2343]: I0913 10:26:04.257641 2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 10:26:05.130588 kubelet[2343]: E0913 10:26:05.130523 2343 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 10:26:05.171010 kubelet[2343]: E0913 10:26:05.170897 2343 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864d0a51506593f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 10:26:02.022254911 +0000 UTC m=+0.420862349,LastTimestamp:2025-09-13 10:26:02.022254911 +0000 UTC m=+0.420862349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 10:26:05.309625 kubelet[2343]: E0913 10:26:05.309458 
2343 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864d0a51575879e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 10:26:02.029541278 +0000 UTC m=+0.428148716,LastTimestamp:2025-09-13 10:26:02.029541278 +0000 UTC m=+0.428148716,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 10:26:05.314758 kubelet[2343]: I0913 10:26:05.314593 2343 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 10:26:05.314758 kubelet[2343]: E0913 10:26:05.314632 2343 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 10:26:05.327196 kubelet[2343]: E0913 10:26:05.327162 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:26:05.909079 kubelet[2343]: I0913 10:26:05.909030 2343 apiserver.go:52] "Watching apiserver" Sep 13 10:26:05.928207 kubelet[2343]: I0913 10:26:05.928153 2343 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 10:26:07.075418 systemd[1]: Reload requested from client PID 2621 ('systemctl') (unit session-7.scope)... Sep 13 10:26:07.075435 systemd[1]: Reloading... Sep 13 10:26:07.227310 zram_generator::config[2664]: No configuration found. Sep 13 10:26:07.484938 systemd[1]: Reloading finished in 409 ms. Sep 13 10:26:07.520421 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 13 10:26:07.545250 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 10:26:07.545706 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:26:07.545788 systemd[1]: kubelet.service: Consumed 1.052s CPU time, 132.6M memory peak.
Sep 13 10:26:07.548755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 10:26:07.789072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:26:07.798806 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 10:26:07.897687 kubelet[2709]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 10:26:07.897687 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 10:26:07.897687 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 10:26:07.898108 kubelet[2709]: I0913 10:26:07.897772 2709 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 10:26:07.904020 kubelet[2709]: I0913 10:26:07.903986 2709 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 10:26:07.904020 kubelet[2709]: I0913 10:26:07.904008 2709 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 10:26:07.904220 kubelet[2709]: I0913 10:26:07.904196 2709 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 10:26:07.905415 kubelet[2709]: I0913 10:26:07.905388 2709 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 10:26:07.907511 kubelet[2709]: I0913 10:26:07.907481 2709 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 10:26:07.911915 kubelet[2709]: I0913 10:26:07.911883 2709 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 13 10:26:07.916445 kubelet[2709]: I0913 10:26:07.916413 2709 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 10:26:07.916581 kubelet[2709]: I0913 10:26:07.916563 2709 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 10:26:07.916752 kubelet[2709]: I0913 10:26:07.916711 2709 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 10:26:07.916915 kubelet[2709]: I0913 10:26:07.916743 2709 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 10:26:07.917011 kubelet[2709]: I0913 10:26:07.916927 2709 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 10:26:07.917011 kubelet[2709]: I0913 10:26:07.916937 2709 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 10:26:07.917011 kubelet[2709]: I0913 10:26:07.916965 2709 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 10:26:07.917110 kubelet[2709]: I0913 10:26:07.917094 2709 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 10:26:07.917110 kubelet[2709]: I0913 10:26:07.917110 2709 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 10:26:07.917163 kubelet[2709]: I0913 10:26:07.917141 2709 kubelet.go:314] "Adding apiserver pod source"
Sep 13 10:26:07.917163 kubelet[2709]: I0913 10:26:07.917151 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 10:26:07.918300 kubelet[2709]: I0913 10:26:07.918195 2709 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 13 10:26:07.919827 kubelet[2709]: I0913 10:26:07.918665 2709 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 10:26:07.919827 kubelet[2709]: I0913 10:26:07.919092 2709 server.go:1274] "Started kubelet"
Sep 13 10:26:07.919827 kubelet[2709]: I0913 10:26:07.919241 2709 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 10:26:07.919827 kubelet[2709]: I0913 10:26:07.919390 2709 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 10:26:07.919827 kubelet[2709]: I0913 10:26:07.919638 2709 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 10:26:07.920681 kubelet[2709]: I0913 10:26:07.920644 2709 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 10:26:07.924832 kubelet[2709]: I0913 10:26:07.923919 2709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 10:26:07.924832 kubelet[2709]: I0913 10:26:07.924042 2709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 10:26:07.925890 kubelet[2709]: E0913 10:26:07.925842 2709 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 10:26:07.925944 kubelet[2709]: I0913 10:26:07.925897 2709 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 10:26:07.927520 kubelet[2709]: I0913 10:26:07.927494 2709 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 10:26:07.927779 kubelet[2709]: I0913 10:26:07.927754 2709 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 10:26:07.931422 kubelet[2709]: E0913 10:26:07.931392 2709 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 10:26:07.931963 kubelet[2709]: I0913 10:26:07.931936 2709 factory.go:221] Registration of the containerd container factory successfully
Sep 13 10:26:07.931963 kubelet[2709]: I0913 10:26:07.931956 2709 factory.go:221] Registration of the systemd container factory successfully
Sep 13 10:26:07.932087 kubelet[2709]: I0913 10:26:07.932058 2709 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 10:26:07.945754 kubelet[2709]: I0913 10:26:07.945696 2709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 10:26:07.947118 kubelet[2709]: I0913 10:26:07.947091 2709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 10:26:07.947118 kubelet[2709]: I0913 10:26:07.947119 2709 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 10:26:07.947201 kubelet[2709]: I0913 10:26:07.947139 2709 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 10:26:07.947232 kubelet[2709]: E0913 10:26:07.947183 2709 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 10:26:07.967990 kubelet[2709]: I0913 10:26:07.967958 2709 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 10:26:07.967990 kubelet[2709]: I0913 10:26:07.967976 2709 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 10:26:07.967990 kubelet[2709]: I0913 10:26:07.967995 2709 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 10:26:07.968172 kubelet[2709]: I0913 10:26:07.968129 2709 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 10:26:07.968172 kubelet[2709]: I0913 10:26:07.968139 2709 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 10:26:07.968172 kubelet[2709]: I0913 10:26:07.968158 2709 policy_none.go:49] "None policy: Start"
Sep 13 10:26:07.969303 kubelet[2709]: I0913 10:26:07.968697 2709 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 10:26:07.969303 kubelet[2709]: I0913 10:26:07.968723 2709 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 10:26:07.969303 kubelet[2709]: I0913 10:26:07.968878 2709 state_mem.go:75] "Updated machine memory state"
Sep 13 10:26:07.973202 kubelet[2709]: I0913 10:26:07.973169 2709 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 10:26:07.973202 kubelet[2709]: I0913 10:26:07.973390 2709 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 10:26:07.973202 kubelet[2709]: I0913 10:26:07.973406 2709 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 10:26:07.973639 kubelet[2709]: I0913 10:26:07.973620 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 10:26:08.077154 kubelet[2709]: I0913 10:26:08.077113 2709 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 10:26:08.111812 kubelet[2709]: E0913 10:26:08.111780 2709 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 13 10:26:08.116299 kubelet[2709]: I0913 10:26:08.114812 2709 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 13 10:26:08.116299 kubelet[2709]: I0913 10:26:08.114892 2709 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 13 10:26:08.128724 kubelet[2709]: I0913 10:26:08.128559 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 10:26:08.128724 kubelet[2709]: I0913 10:26:08.128598 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:26:08.128724 kubelet[2709]: I0913 10:26:08.128625 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:26:08.128724 kubelet[2709]: I0913 10:26:08.128644 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:26:08.128724 kubelet[2709]: I0913 10:26:08.128668 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7d9d93eacaeddb526cf6f93412a289b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f7d9d93eacaeddb526cf6f93412a289b\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:26:08.129158 kubelet[2709]: I0913 10:26:08.128682 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7d9d93eacaeddb526cf6f93412a289b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f7d9d93eacaeddb526cf6f93412a289b\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:26:08.129158 kubelet[2709]: I0913 10:26:08.128697 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7d9d93eacaeddb526cf6f93412a289b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f7d9d93eacaeddb526cf6f93412a289b\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:26:08.129158 kubelet[2709]: I0913 10:26:08.128710 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:26:08.129158 kubelet[2709]: I0913 10:26:08.128725 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:26:08.917485 kubelet[2709]: I0913 10:26:08.917440 2709 apiserver.go:52] "Watching apiserver"
Sep 13 10:26:08.927805 kubelet[2709]: I0913 10:26:08.927771 2709 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 10:26:09.203664 kubelet[2709]: E0913 10:26:09.203522 2709 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 13 10:26:09.221718 kubelet[2709]: I0913 10:26:09.221591 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.221568968 podStartE2EDuration="1.221568968s" podCreationTimestamp="2025-09-13 10:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:26:09.221508302 +0000 UTC m=+1.364036403" watchObservedRunningTime="2025-09-13 10:26:09.221568968 +0000 UTC m=+1.364097069"
Sep 13 10:26:09.239791 kubelet[2709]: I0913 10:26:09.239642 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.239602848 podStartE2EDuration="3.239602848s" podCreationTimestamp="2025-09-13 10:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:26:09.229304341 +0000 UTC m=+1.371832452" watchObservedRunningTime="2025-09-13 10:26:09.239602848 +0000 UTC m=+1.382130949"
Sep 13 10:26:09.341911 kubelet[2709]: I0913 10:26:09.341818 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.341790213 podStartE2EDuration="1.341790213s" podCreationTimestamp="2025-09-13 10:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:26:09.239594242 +0000 UTC m=+1.382122363" watchObservedRunningTime="2025-09-13 10:26:09.341790213 +0000 UTC m=+1.484318314"
Sep 13 10:26:13.894087 kubelet[2709]: I0913 10:26:13.894039 2709 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 10:26:13.894624 kubelet[2709]: I0913 10:26:13.894555 2709 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 10:26:13.894665 containerd[1559]: time="2025-09-13T10:26:13.894409467Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 10:26:14.624950 kubelet[2709]: I0913 10:26:14.624884 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a2e67542-c944-4982-9960-7850a97158cd-kube-proxy\") pod \"kube-proxy-7f7wn\" (UID: \"a2e67542-c944-4982-9960-7850a97158cd\") " pod="kube-system/kube-proxy-7f7wn"
Sep 13 10:26:14.624950 kubelet[2709]: I0913 10:26:14.624927 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hmz\" (UniqueName: \"kubernetes.io/projected/a2e67542-c944-4982-9960-7850a97158cd-kube-api-access-94hmz\") pod \"kube-proxy-7f7wn\" (UID: \"a2e67542-c944-4982-9960-7850a97158cd\") " pod="kube-system/kube-proxy-7f7wn"
Sep 13 10:26:14.624950 kubelet[2709]: I0913 10:26:14.624956 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2e67542-c944-4982-9960-7850a97158cd-xtables-lock\") pod \"kube-proxy-7f7wn\" (UID: \"a2e67542-c944-4982-9960-7850a97158cd\") " pod="kube-system/kube-proxy-7f7wn"
Sep 13 10:26:14.625151 kubelet[2709]: I0913 10:26:14.624970 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2e67542-c944-4982-9960-7850a97158cd-lib-modules\") pod \"kube-proxy-7f7wn\" (UID: \"a2e67542-c944-4982-9960-7850a97158cd\") " pod="kube-system/kube-proxy-7f7wn"
Sep 13 10:26:14.631418 systemd[1]: Created slice kubepods-besteffort-poda2e67542_c944_4982_9960_7850a97158cd.slice - libcontainer container kubepods-besteffort-poda2e67542_c944_4982_9960_7850a97158cd.slice.
Sep 13 10:26:14.945512 containerd[1559]: time="2025-09-13T10:26:14.944973493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7f7wn,Uid:a2e67542-c944-4982-9960-7850a97158cd,Namespace:kube-system,Attempt:0,}"
Sep 13 10:26:14.949097 systemd[1]: Created slice kubepods-besteffort-pod3f0bc30b_7908_4022_a4c0_786da86b6d18.slice - libcontainer container kubepods-besteffort-pod3f0bc30b_7908_4022_a4c0_786da86b6d18.slice.
Sep 13 10:26:14.971120 containerd[1559]: time="2025-09-13T10:26:14.971052670Z" level=info msg="connecting to shim 7c10db2f2ceccc14c3a166a7ff99b48fa426df81580afcffefb6db728db3aaf0" address="unix:///run/containerd/s/5ee63c145b9f8362355068255c31b6d8f80c67e52e554d71a7ff9083728d5b58" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:26:15.026533 systemd[1]: Started cri-containerd-7c10db2f2ceccc14c3a166a7ff99b48fa426df81580afcffefb6db728db3aaf0.scope - libcontainer container 7c10db2f2ceccc14c3a166a7ff99b48fa426df81580afcffefb6db728db3aaf0.
Sep 13 10:26:15.053936 containerd[1559]: time="2025-09-13T10:26:15.053887171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7f7wn,Uid:a2e67542-c944-4982-9960-7850a97158cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c10db2f2ceccc14c3a166a7ff99b48fa426df81580afcffefb6db728db3aaf0\""
Sep 13 10:26:15.056792 containerd[1559]: time="2025-09-13T10:26:15.056721377Z" level=info msg="CreateContainer within sandbox \"7c10db2f2ceccc14c3a166a7ff99b48fa426df81580afcffefb6db728db3aaf0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 10:26:15.067196 containerd[1559]: time="2025-09-13T10:26:15.067144053Z" level=info msg="Container 4b1501902815e0790678ffe232b3987bcc35316090123b959ff8945942c9492f: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:26:15.070690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount797687843.mount: Deactivated successfully.
Sep 13 10:26:15.076040 containerd[1559]: time="2025-09-13T10:26:15.076008252Z" level=info msg="CreateContainer within sandbox \"7c10db2f2ceccc14c3a166a7ff99b48fa426df81580afcffefb6db728db3aaf0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b1501902815e0790678ffe232b3987bcc35316090123b959ff8945942c9492f\""
Sep 13 10:26:15.076621 containerd[1559]: time="2025-09-13T10:26:15.076584358Z" level=info msg="StartContainer for \"4b1501902815e0790678ffe232b3987bcc35316090123b959ff8945942c9492f\""
Sep 13 10:26:15.078347 containerd[1559]: time="2025-09-13T10:26:15.078312048Z" level=info msg="connecting to shim 4b1501902815e0790678ffe232b3987bcc35316090123b959ff8945942c9492f" address="unix:///run/containerd/s/5ee63c145b9f8362355068255c31b6d8f80c67e52e554d71a7ff9083728d5b58" protocol=ttrpc version=3
Sep 13 10:26:15.102869 systemd[1]: Started cri-containerd-4b1501902815e0790678ffe232b3987bcc35316090123b959ff8945942c9492f.scope - libcontainer container 4b1501902815e0790678ffe232b3987bcc35316090123b959ff8945942c9492f.
Sep 13 10:26:15.128490 kubelet[2709]: I0913 10:26:15.128442 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xknjm\" (UniqueName: \"kubernetes.io/projected/3f0bc30b-7908-4022-a4c0-786da86b6d18-kube-api-access-xknjm\") pod \"tigera-operator-58fc44c59b-nddl7\" (UID: \"3f0bc30b-7908-4022-a4c0-786da86b6d18\") " pod="tigera-operator/tigera-operator-58fc44c59b-nddl7"
Sep 13 10:26:15.128490 kubelet[2709]: I0913 10:26:15.128482 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f0bc30b-7908-4022-a4c0-786da86b6d18-var-lib-calico\") pod \"tigera-operator-58fc44c59b-nddl7\" (UID: \"3f0bc30b-7908-4022-a4c0-786da86b6d18\") " pod="tigera-operator/tigera-operator-58fc44c59b-nddl7"
Sep 13 10:26:15.147688 containerd[1559]: time="2025-09-13T10:26:15.147644513Z" level=info msg="StartContainer for \"4b1501902815e0790678ffe232b3987bcc35316090123b959ff8945942c9492f\" returns successfully"
Sep 13 10:26:15.552394 containerd[1559]: time="2025-09-13T10:26:15.552310160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-nddl7,Uid:3f0bc30b-7908-4022-a4c0-786da86b6d18,Namespace:tigera-operator,Attempt:0,}"
Sep 13 10:26:15.575938 containerd[1559]: time="2025-09-13T10:26:15.575861224Z" level=info msg="connecting to shim c90ced283907e1f6ccc8f994e00af549628ee0289d3bd03c81c7a4e39f2379a6" address="unix:///run/containerd/s/9bdf7380e57766651720f90bdf5acaae5c72c879cd89fd16632cac6031d88630" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:26:15.600461 systemd[1]: Started cri-containerd-c90ced283907e1f6ccc8f994e00af549628ee0289d3bd03c81c7a4e39f2379a6.scope - libcontainer container c90ced283907e1f6ccc8f994e00af549628ee0289d3bd03c81c7a4e39f2379a6.
Sep 13 10:26:15.772389 containerd[1559]: time="2025-09-13T10:26:15.772330198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-nddl7,Uid:3f0bc30b-7908-4022-a4c0-786da86b6d18,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c90ced283907e1f6ccc8f994e00af549628ee0289d3bd03c81c7a4e39f2379a6\""
Sep 13 10:26:15.774853 containerd[1559]: time="2025-09-13T10:26:15.773773506Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 10:26:17.421409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182111762.mount: Deactivated successfully.
Sep 13 10:26:17.444736 kubelet[2709]: I0913 10:26:17.444660 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7f7wn" podStartSLOduration=3.444637876 podStartE2EDuration="3.444637876s" podCreationTimestamp="2025-09-13 10:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:26:16.409658315 +0000 UTC m=+8.552186416" watchObservedRunningTime="2025-09-13 10:26:17.444637876 +0000 UTC m=+9.587165977"
Sep 13 10:26:18.041395 containerd[1559]: time="2025-09-13T10:26:18.041315727Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:26:18.042099 containerd[1559]: time="2025-09-13T10:26:18.042030715Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 13 10:26:18.043470 containerd[1559]: time="2025-09-13T10:26:18.043428580Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:26:18.045908 containerd[1559]: time="2025-09-13T10:26:18.045859096Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:26:18.046561 containerd[1559]: time="2025-09-13T10:26:18.046518097Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.272714133s"
Sep 13 10:26:18.046610 containerd[1559]: time="2025-09-13T10:26:18.046566699Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 10:26:18.049083 containerd[1559]: time="2025-09-13T10:26:18.049041679Z" level=info msg="CreateContainer within sandbox \"c90ced283907e1f6ccc8f994e00af549628ee0289d3bd03c81c7a4e39f2379a6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 10:26:18.059928 containerd[1559]: time="2025-09-13T10:26:18.059855716Z" level=info msg="Container d664e2edd712d8c159f2029131ef105ab92261d1c9e7d5c704a21a090f3977f9: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:26:18.067097 containerd[1559]: time="2025-09-13T10:26:18.067040611Z" level=info msg="CreateContainer within sandbox \"c90ced283907e1f6ccc8f994e00af549628ee0289d3bd03c81c7a4e39f2379a6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d664e2edd712d8c159f2029131ef105ab92261d1c9e7d5c704a21a090f3977f9\""
Sep 13 10:26:18.067777 containerd[1559]: time="2025-09-13T10:26:18.067722216Z" level=info msg="StartContainer for \"d664e2edd712d8c159f2029131ef105ab92261d1c9e7d5c704a21a090f3977f9\""
Sep 13 10:26:18.068779 containerd[1559]: time="2025-09-13T10:26:18.068751861Z" level=info msg="connecting to shim d664e2edd712d8c159f2029131ef105ab92261d1c9e7d5c704a21a090f3977f9" address="unix:///run/containerd/s/9bdf7380e57766651720f90bdf5acaae5c72c879cd89fd16632cac6031d88630" protocol=ttrpc version=3
Sep 13 10:26:18.125409 systemd[1]: Started cri-containerd-d664e2edd712d8c159f2029131ef105ab92261d1c9e7d5c704a21a090f3977f9.scope - libcontainer container d664e2edd712d8c159f2029131ef105ab92261d1c9e7d5c704a21a090f3977f9.
Sep 13 10:26:18.160497 containerd[1559]: time="2025-09-13T10:26:18.160441457Z" level=info msg="StartContainer for \"d664e2edd712d8c159f2029131ef105ab92261d1c9e7d5c704a21a090f3977f9\" returns successfully"
Sep 13 10:26:19.104501 kubelet[2709]: I0913 10:26:19.104363 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-nddl7" podStartSLOduration=2.830088141 podStartE2EDuration="5.104341979s" podCreationTimestamp="2025-09-13 10:26:14 +0000 UTC" firstStartedPulling="2025-09-13 10:26:15.773387441 +0000 UTC m=+7.915915543" lastFinishedPulling="2025-09-13 10:26:18.04764128 +0000 UTC m=+10.190169381" observedRunningTime="2025-09-13 10:26:19.104238192 +0000 UTC m=+11.246766403" watchObservedRunningTime="2025-09-13 10:26:19.104341979 +0000 UTC m=+11.246870080"
Sep 13 10:26:21.236566 update_engine[1549]: I20250913 10:26:21.236438 1549 update_attempter.cc:509] Updating boot flags...
Sep 13 10:26:24.494782 sudo[1784]: pam_unix(sudo:session): session closed for user root
Sep 13 10:26:24.496631 sshd[1783]: Connection closed by 10.0.0.1 port 41008
Sep 13 10:26:24.497536 sshd-session[1780]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:24.505403 systemd-logind[1544]: Session 7 logged out. Waiting for processes to exit.
Sep 13 10:26:24.506724 systemd[1]: sshd@6-10.0.0.126:22-10.0.0.1:41008.service: Deactivated successfully.
Sep 13 10:26:24.512808 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 10:26:24.513082 systemd[1]: session-7.scope: Consumed 4.805s CPU time, 224.8M memory peak. Sep 13 10:26:24.518326 systemd-logind[1544]: Removed session 7. Sep 13 10:26:26.838529 systemd[1]: Created slice kubepods-besteffort-pod6dd66db0_511a_49fc_96c8_fd3e041a5e37.slice - libcontainer container kubepods-besteffort-pod6dd66db0_511a_49fc_96c8_fd3e041a5e37.slice. Sep 13 10:26:26.902901 kubelet[2709]: I0913 10:26:26.902781 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dd66db0-511a-49fc-96c8-fd3e041a5e37-tigera-ca-bundle\") pod \"calico-typha-865d7d99fc-zqb2x\" (UID: \"6dd66db0-511a-49fc-96c8-fd3e041a5e37\") " pod="calico-system/calico-typha-865d7d99fc-zqb2x" Sep 13 10:26:26.902901 kubelet[2709]: I0913 10:26:26.902830 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6dd66db0-511a-49fc-96c8-fd3e041a5e37-typha-certs\") pod \"calico-typha-865d7d99fc-zqb2x\" (UID: \"6dd66db0-511a-49fc-96c8-fd3e041a5e37\") " pod="calico-system/calico-typha-865d7d99fc-zqb2x" Sep 13 10:26:26.902901 kubelet[2709]: I0913 10:26:26.902853 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88ngg\" (UniqueName: \"kubernetes.io/projected/6dd66db0-511a-49fc-96c8-fd3e041a5e37-kube-api-access-88ngg\") pod \"calico-typha-865d7d99fc-zqb2x\" (UID: \"6dd66db0-511a-49fc-96c8-fd3e041a5e37\") " pod="calico-system/calico-typha-865d7d99fc-zqb2x" Sep 13 10:26:27.145306 containerd[1559]: time="2025-09-13T10:26:27.145146138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-865d7d99fc-zqb2x,Uid:6dd66db0-511a-49fc-96c8-fd3e041a5e37,Namespace:calico-system,Attempt:0,}" Sep 13 10:26:27.265782 systemd[1]: Created slice kubepods-besteffort-pod7e61a537_b022_4e63_a3da_7878cb87c611.slice - libcontainer container 
kubepods-besteffort-pod7e61a537_b022_4e63_a3da_7878cb87c611.slice. Sep 13 10:26:27.273774 containerd[1559]: time="2025-09-13T10:26:27.273524138Z" level=info msg="connecting to shim d3c210e642025d1941ee542de4831d7275118d7406f25e79fa2c7b17d440b984" address="unix:///run/containerd/s/7fa4d1fef6ce51e77b7e2efa7c87cda1073ba103593eb74c61c68e4dd8300da3" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:27.304239 kubelet[2709]: I0913 10:26:27.304176 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7e61a537-b022-4e63-a3da-7878cb87c611-policysync\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.304589 kubelet[2709]: I0913 10:26:27.304396 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7e61a537-b022-4e63-a3da-7878cb87c611-node-certs\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.304589 kubelet[2709]: I0913 10:26:27.304422 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7e61a537-b022-4e63-a3da-7878cb87c611-var-run-calico\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.304790 kubelet[2709]: I0913 10:26:27.304716 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7e61a537-b022-4e63-a3da-7878cb87c611-cni-bin-dir\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.304790 kubelet[2709]: I0913 10:26:27.304743 2709 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7e61a537-b022-4e63-a3da-7878cb87c611-cni-net-dir\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.304966 kubelet[2709]: I0913 10:26:27.304883 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e61a537-b022-4e63-a3da-7878cb87c611-xtables-lock\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.304966 kubelet[2709]: I0913 10:26:27.304903 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7e61a537-b022-4e63-a3da-7878cb87c611-flexvol-driver-host\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.305228 kubelet[2709]: I0913 10:26:27.305138 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7e61a537-b022-4e63-a3da-7878cb87c611-cni-log-dir\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.305228 kubelet[2709]: I0913 10:26:27.305180 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e61a537-b022-4e63-a3da-7878cb87c611-lib-modules\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.305391 kubelet[2709]: I0913 10:26:27.305198 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-tvzcd\" (UniqueName: \"kubernetes.io/projected/7e61a537-b022-4e63-a3da-7878cb87c611-kube-api-access-tvzcd\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.305528 kubelet[2709]: I0913 10:26:27.305499 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e61a537-b022-4e63-a3da-7878cb87c611-tigera-ca-bundle\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.305672 kubelet[2709]: I0913 10:26:27.305602 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e61a537-b022-4e63-a3da-7878cb87c611-var-lib-calico\") pod \"calico-node-kng5h\" (UID: \"7e61a537-b022-4e63-a3da-7878cb87c611\") " pod="calico-system/calico-node-kng5h" Sep 13 10:26:27.306433 systemd[1]: Started cri-containerd-d3c210e642025d1941ee542de4831d7275118d7406f25e79fa2c7b17d440b984.scope - libcontainer container d3c210e642025d1941ee542de4831d7275118d7406f25e79fa2c7b17d440b984. 
Sep 13 10:26:27.369025 containerd[1559]: time="2025-09-13T10:26:27.368966388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-865d7d99fc-zqb2x,Uid:6dd66db0-511a-49fc-96c8-fd3e041a5e37,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3c210e642025d1941ee542de4831d7275118d7406f25e79fa2c7b17d440b984\"" 
Sep 13 10:26:27.370580 containerd[1559]: time="2025-09-13T10:26:27.370535863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" 
Sep 13 10:26:27.413310 kubelet[2709]: E0913 10:26:27.413208 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Sep 13 10:26:27.413310 kubelet[2709]: W0913 10:26:27.413230 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Sep 13 10:26:27.413310 kubelet[2709]: E0913 10:26:27.413254 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Sep 13 10:26:27.464932 kubelet[2709]: E0913 10:26:27.464241 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rg54z" podUID="c833b9e8-250f-4f26-9d9e-7e0037032f7c" 
Sep 13 10:26:27.515511 kubelet[2709]: I0913 10:26:27.515394 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c833b9e8-250f-4f26-9d9e-7e0037032f7c-socket-dir\") pod \"csi-node-driver-rg54z\" (UID: \"c833b9e8-250f-4f26-9d9e-7e0037032f7c\") " pod="calico-system/csi-node-driver-rg54z" 
Sep 13 10:26:27.516162 kubelet[2709]: I0913 10:26:27.516061 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mjp\" (UniqueName: \"kubernetes.io/projected/c833b9e8-250f-4f26-9d9e-7e0037032f7c-kube-api-access-h7mjp\") pod \"csi-node-driver-rg54z\" (UID: \"c833b9e8-250f-4f26-9d9e-7e0037032f7c\") " pod="calico-system/csi-node-driver-rg54z" 
Sep 13 10:26:27.518369 kubelet[2709]: I0913 10:26:27.518346 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c833b9e8-250f-4f26-9d9e-7e0037032f7c-registration-dir\") pod \"csi-node-driver-rg54z\" (UID: \"c833b9e8-250f-4f26-9d9e-7e0037032f7c\") " pod="calico-system/csi-node-driver-rg54z" 
Sep 13 10:26:27.519622 kubelet[2709]: I0913 10:26:27.519501 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c833b9e8-250f-4f26-9d9e-7e0037032f7c-kubelet-dir\") pod \"csi-node-driver-rg54z\" (UID: \"c833b9e8-250f-4f26-9d9e-7e0037032f7c\") " pod="calico-system/csi-node-driver-rg54z" 
Error: unexpected end of JSON input" Sep 13 10:26:27.519926 kubelet[2709]: I0913 10:26:27.519888 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c833b9e8-250f-4f26-9d9e-7e0037032f7c-varrun\") pod \"csi-node-driver-rg54z\" (UID: \"c833b9e8-250f-4f26-9d9e-7e0037032f7c\") " pod="calico-system/csi-node-driver-rg54z" Sep 13 10:26:27.520247 kubelet[2709]: E0913 10:26:27.520225 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.520247 kubelet[2709]: W0913 10:26:27.520241 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.520328 kubelet[2709]: E0913 10:26:27.520263 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.520598 kubelet[2709]: E0913 10:26:27.520582 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.520598 kubelet[2709]: W0913 10:26:27.520594 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.520700 kubelet[2709]: E0913 10:26:27.520685 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.520904 kubelet[2709]: E0913 10:26:27.520891 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.520904 kubelet[2709]: W0913 10:26:27.520902 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.520970 kubelet[2709]: E0913 10:26:27.520943 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.521150 kubelet[2709]: E0913 10:26:27.521118 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.521150 kubelet[2709]: W0913 10:26:27.521130 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.521150 kubelet[2709]: E0913 10:26:27.521149 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.521368 kubelet[2709]: E0913 10:26:27.521354 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.521368 kubelet[2709]: W0913 10:26:27.521364 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.521429 kubelet[2709]: E0913 10:26:27.521373 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.521619 kubelet[2709]: E0913 10:26:27.521588 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.521619 kubelet[2709]: W0913 10:26:27.521608 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.521619 kubelet[2709]: E0913 10:26:27.521623 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.573537 containerd[1559]: time="2025-09-13T10:26:27.573472318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kng5h,Uid:7e61a537-b022-4e63-a3da-7878cb87c611,Namespace:calico-system,Attempt:0,}" Sep 13 10:26:27.599514 containerd[1559]: time="2025-09-13T10:26:27.599415820Z" level=info msg="connecting to shim abf5c96e2a4574972eb5bca93a8de7d23c84d23462b9700808c30d70a240a6e0" address="unix:///run/containerd/s/5d765ac66281223a48d0c16502c922b6ed8914c392ec86e91c81f6aef09f99bf" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:27.620715 kubelet[2709]: E0913 10:26:27.620638 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.620715 kubelet[2709]: W0913 10:26:27.620686 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.620715 kubelet[2709]: E0913 10:26:27.620720 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.621704 kubelet[2709]: E0913 10:26:27.621036 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.621704 kubelet[2709]: W0913 10:26:27.621050 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.621704 kubelet[2709]: E0913 10:26:27.621064 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.621704 kubelet[2709]: E0913 10:26:27.621386 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.621704 kubelet[2709]: W0913 10:26:27.621398 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.621704 kubelet[2709]: E0913 10:26:27.621410 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.621932 kubelet[2709]: E0913 10:26:27.621856 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.621932 kubelet[2709]: W0913 10:26:27.621869 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.622332 kubelet[2709]: E0913 10:26:27.622073 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.622420 kubelet[2709]: E0913 10:26:27.622402 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.622420 kubelet[2709]: W0913 10:26:27.622415 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.622507 kubelet[2709]: E0913 10:26:27.622433 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.622868 kubelet[2709]: E0913 10:26:27.622788 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.622868 kubelet[2709]: W0913 10:26:27.622804 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.622947 kubelet[2709]: E0913 10:26:27.622874 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.623262 kubelet[2709]: E0913 10:26:27.623197 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.623262 kubelet[2709]: W0913 10:26:27.623212 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.623438 kubelet[2709]: E0913 10:26:27.623418 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.623686 kubelet[2709]: E0913 10:26:27.623671 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.623824 kubelet[2709]: W0913 10:26:27.623767 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.623912 kubelet[2709]: E0913 10:26:27.623896 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.624199 kubelet[2709]: E0913 10:26:27.624142 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.624199 kubelet[2709]: W0913 10:26:27.624155 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.624945 kubelet[2709]: E0913 10:26:27.624346 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.626000 kubelet[2709]: E0913 10:26:27.625399 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.626000 kubelet[2709]: W0913 10:26:27.625415 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.626249 kubelet[2709]: E0913 10:26:27.626217 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.627479 kubelet[2709]: E0913 10:26:27.626934 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.627479 kubelet[2709]: W0913 10:26:27.627419 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.627630 kubelet[2709]: E0913 10:26:27.627592 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.628868 kubelet[2709]: E0913 10:26:27.628563 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.628868 kubelet[2709]: W0913 10:26:27.628798 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.628868 kubelet[2709]: E0913 10:26:27.628848 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.630297 kubelet[2709]: E0913 10:26:27.629878 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.630431 kubelet[2709]: W0913 10:26:27.630410 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.631023 kubelet[2709]: E0913 10:26:27.630803 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.631335 kubelet[2709]: E0913 10:26:27.631320 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.632386 kubelet[2709]: W0913 10:26:27.632296 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.632530 kubelet[2709]: E0913 10:26:27.632472 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.632711 kubelet[2709]: E0913 10:26:27.632698 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.633454 kubelet[2709]: W0913 10:26:27.633301 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.633579 kubelet[2709]: E0913 10:26:27.633550 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.634387 kubelet[2709]: E0913 10:26:27.634369 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.634764 kubelet[2709]: W0913 10:26:27.634575 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.634764 kubelet[2709]: E0913 10:26:27.634653 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.634933 kubelet[2709]: E0913 10:26:27.634916 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.635049 kubelet[2709]: W0913 10:26:27.635034 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.635179 kubelet[2709]: E0913 10:26:27.635144 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.635662 kubelet[2709]: E0913 10:26:27.635533 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.635662 kubelet[2709]: W0913 10:26:27.635548 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.635662 kubelet[2709]: E0913 10:26:27.635590 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.635826 systemd[1]: Started cri-containerd-abf5c96e2a4574972eb5bca93a8de7d23c84d23462b9700808c30d70a240a6e0.scope - libcontainer container abf5c96e2a4574972eb5bca93a8de7d23c84d23462b9700808c30d70a240a6e0. 
Sep 13 10:26:27.636517 kubelet[2709]: E0913 10:26:27.636213 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.636517 kubelet[2709]: W0913 10:26:27.636226 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.636517 kubelet[2709]: E0913 10:26:27.636283 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.638358 kubelet[2709]: E0913 10:26:27.637501 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.638358 kubelet[2709]: W0913 10:26:27.637535 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.638358 kubelet[2709]: E0913 10:26:27.637573 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.639031 kubelet[2709]: E0913 10:26:27.639001 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.639031 kubelet[2709]: W0913 10:26:27.639020 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.639125 kubelet[2709]: E0913 10:26:27.639079 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.639605 kubelet[2709]: E0913 10:26:27.639454 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.639605 kubelet[2709]: W0913 10:26:27.639469 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.640019 kubelet[2709]: E0913 10:26:27.639753 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.640019 kubelet[2709]: E0913 10:26:27.639857 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.640019 kubelet[2709]: W0913 10:26:27.639866 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.640019 kubelet[2709]: E0913 10:26:27.639911 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.640587 kubelet[2709]: E0913 10:26:27.640182 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.640587 kubelet[2709]: W0913 10:26:27.640195 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.640587 kubelet[2709]: E0913 10:26:27.640205 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.640773 kubelet[2709]: E0913 10:26:27.640713 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.640773 kubelet[2709]: W0913 10:26:27.640723 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.640773 kubelet[2709]: E0913 10:26:27.640733 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:27.649771 kubelet[2709]: E0913 10:26:27.649655 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:27.649771 kubelet[2709]: W0913 10:26:27.649682 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:27.649771 kubelet[2709]: E0913 10:26:27.649708 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:27.726021 containerd[1559]: time="2025-09-13T10:26:27.725884663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kng5h,Uid:7e61a537-b022-4e63-a3da-7878cb87c611,Namespace:calico-system,Attempt:0,} returns sandbox id \"abf5c96e2a4574972eb5bca93a8de7d23c84d23462b9700808c30d70a240a6e0\"" Sep 13 10:26:28.634676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3415401127.mount: Deactivated successfully. 
Sep 13 10:26:28.948479 kubelet[2709]: E0913 10:26:28.948344 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rg54z" podUID="c833b9e8-250f-4f26-9d9e-7e0037032f7c" Sep 13 10:26:29.232825 containerd[1559]: time="2025-09-13T10:26:29.232683107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:29.233712 containerd[1559]: time="2025-09-13T10:26:29.233689746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 10:26:29.235257 containerd[1559]: time="2025-09-13T10:26:29.235214203Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:29.238302 containerd[1559]: time="2025-09-13T10:26:29.238165643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:29.239333 containerd[1559]: time="2025-09-13T10:26:29.238954812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.868377611s" Sep 13 10:26:29.239333 containerd[1559]: time="2025-09-13T10:26:29.239015738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 10:26:29.240686 containerd[1559]: time="2025-09-13T10:26:29.240639181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 10:26:29.250823 containerd[1559]: time="2025-09-13T10:26:29.250779368Z" level=info msg="CreateContainer within sandbox \"d3c210e642025d1941ee542de4831d7275118d7406f25e79fa2c7b17d440b984\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 10:26:29.261894 containerd[1559]: time="2025-09-13T10:26:29.261806518Z" level=info msg="Container dcd05c5c074f807e197bc4b43919b3e92c7f905b357fd91c844672fbe6f5ac8a: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:29.272342 containerd[1559]: time="2025-09-13T10:26:29.272289061Z" level=info msg="CreateContainer within sandbox \"d3c210e642025d1941ee542de4831d7275118d7406f25e79fa2c7b17d440b984\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dcd05c5c074f807e197bc4b43919b3e92c7f905b357fd91c844672fbe6f5ac8a\"" Sep 13 10:26:29.273191 containerd[1559]: time="2025-09-13T10:26:29.273149094Z" level=info msg="StartContainer for \"dcd05c5c074f807e197bc4b43919b3e92c7f905b357fd91c844672fbe6f5ac8a\"" Sep 13 10:26:29.274373 containerd[1559]: time="2025-09-13T10:26:29.274343469Z" level=info msg="connecting to shim dcd05c5c074f807e197bc4b43919b3e92c7f905b357fd91c844672fbe6f5ac8a" address="unix:///run/containerd/s/7fa4d1fef6ce51e77b7e2efa7c87cda1073ba103593eb74c61c68e4dd8300da3" protocol=ttrpc version=3 Sep 13 10:26:29.302454 systemd[1]: Started cri-containerd-dcd05c5c074f807e197bc4b43919b3e92c7f905b357fd91c844672fbe6f5ac8a.scope - libcontainer container dcd05c5c074f807e197bc4b43919b3e92c7f905b357fd91c844672fbe6f5ac8a. 
Sep 13 10:26:29.361885 containerd[1559]: time="2025-09-13T10:26:29.361832528Z" level=info msg="StartContainer for \"dcd05c5c074f807e197bc4b43919b3e92c7f905b357fd91c844672fbe6f5ac8a\" returns successfully" Sep 13 10:26:30.033579 kubelet[2709]: E0913 10:26:30.033538 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:30.033579 kubelet[2709]: W0913 10:26:30.033564 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:30.033579 kubelet[2709]: E0913 10:26:30.033587 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:26:30.034166 kubelet[2709]: E0913 10:26:30.033934 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:26:30.034166 kubelet[2709]: W0913 10:26:30.033945 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:26:30.034166 kubelet[2709]: E0913 10:26:30.033957 2709 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:26:30.628133 containerd[1559]: time="2025-09-13T10:26:30.628065141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:30.628807 containerd[1559]: time="2025-09-13T10:26:30.628765884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 10:26:30.629798 containerd[1559]: time="2025-09-13T10:26:30.629766270Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:30.631604 containerd[1559]: time="2025-09-13T10:26:30.631560045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:30.632118 containerd[1559]: time="2025-09-13T10:26:30.632081949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.391413852s" Sep 13 10:26:30.632118 containerd[1559]: time="2025-09-13T10:26:30.632115302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 10:26:30.634911 containerd[1559]: time="2025-09-13T10:26:30.634883084Z" level=info msg="CreateContainer within sandbox \"abf5c96e2a4574972eb5bca93a8de7d23c84d23462b9700808c30d70a240a6e0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 10:26:30.643993 containerd[1559]: time="2025-09-13T10:26:30.643945309Z" level=info msg="Container b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:30.653125 containerd[1559]: time="2025-09-13T10:26:30.653086944Z" level=info msg="CreateContainer within sandbox \"abf5c96e2a4574972eb5bca93a8de7d23c84d23462b9700808c30d70a240a6e0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b\"" Sep 13 10:26:30.653625 containerd[1559]: time="2025-09-13T10:26:30.653589482Z" level=info msg="StartContainer for \"b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b\"" Sep 13 10:26:30.654962 containerd[1559]: time="2025-09-13T10:26:30.654935572Z" level=info msg="connecting to shim b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b" address="unix:///run/containerd/s/5d765ac66281223a48d0c16502c922b6ed8914c392ec86e91c81f6aef09f99bf" protocol=ttrpc version=3 Sep 13 10:26:30.685469 systemd[1]: Started cri-containerd-b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b.scope - libcontainer container 
b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b. Sep 13 10:26:30.736283 containerd[1559]: time="2025-09-13T10:26:30.736224493Z" level=info msg="StartContainer for \"b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b\" returns successfully" Sep 13 10:26:30.747426 systemd[1]: cri-containerd-b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b.scope: Deactivated successfully. Sep 13 10:26:30.749460 containerd[1559]: time="2025-09-13T10:26:30.749417461Z" level=info msg="received exit event container_id:\"b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b\" id:\"b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b\" pid:3408 exited_at:{seconds:1757759190 nanos:748883854}" Sep 13 10:26:30.749706 containerd[1559]: time="2025-09-13T10:26:30.749535313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b\" id:\"b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b\" pid:3408 exited_at:{seconds:1757759190 nanos:748883854}" Sep 13 10:26:30.780341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b73df414d2eaed1b9702c1a4135d7947cff5f0c9968c3e310491cc41769d5e3b-rootfs.mount: Deactivated successfully. 
Sep 13 10:26:31.408945 kubelet[2709]: E0913 10:26:30.948430 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rg54z" podUID="c833b9e8-250f-4f26-9d9e-7e0037032f7c" Sep 13 10:26:31.408945 kubelet[2709]: I0913 10:26:31.005069 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 10:26:31.408945 kubelet[2709]: I0913 10:26:31.406917 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-865d7d99fc-zqb2x" podStartSLOduration=3.5366477720000002 podStartE2EDuration="5.406896034s" podCreationTimestamp="2025-09-13 10:26:26 +0000 UTC" firstStartedPulling="2025-09-13 10:26:27.370238732 +0000 UTC m=+19.512766833" lastFinishedPulling="2025-09-13 10:26:29.240486994 +0000 UTC m=+21.383015095" observedRunningTime="2025-09-13 10:26:30.011976066 +0000 UTC m=+22.154504167" watchObservedRunningTime="2025-09-13 10:26:31.406896034 +0000 UTC m=+23.549424125" Sep 13 10:26:32.009896 containerd[1559]: time="2025-09-13T10:26:32.009820536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 10:26:32.947415 kubelet[2709]: E0913 10:26:32.947369 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rg54z" podUID="c833b9e8-250f-4f26-9d9e-7e0037032f7c" Sep 13 10:26:34.948156 kubelet[2709]: E0913 10:26:34.948091 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-rg54z" podUID="c833b9e8-250f-4f26-9d9e-7e0037032f7c" Sep 13 10:26:36.947530 kubelet[2709]: E0913 10:26:36.947456 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rg54z" podUID="c833b9e8-250f-4f26-9d9e-7e0037032f7c" Sep 13 10:26:38.529652 containerd[1559]: time="2025-09-13T10:26:38.529593835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:38.530684 containerd[1559]: time="2025-09-13T10:26:38.530645224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 10:26:38.532114 containerd[1559]: time="2025-09-13T10:26:38.532068023Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:38.534519 containerd[1559]: time="2025-09-13T10:26:38.534469833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:38.535302 containerd[1559]: time="2025-09-13T10:26:38.535234923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 6.525357941s" Sep 13 10:26:38.535302 containerd[1559]: time="2025-09-13T10:26:38.535292561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 10:26:38.537616 containerd[1559]: time="2025-09-13T10:26:38.537577071Z" level=info msg="CreateContainer within sandbox \"abf5c96e2a4574972eb5bca93a8de7d23c84d23462b9700808c30d70a240a6e0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 10:26:38.548552 containerd[1559]: time="2025-09-13T10:26:38.548485282Z" level=info msg="Container 78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:38.560321 containerd[1559]: time="2025-09-13T10:26:38.560249153Z" level=info msg="CreateContainer within sandbox \"abf5c96e2a4574972eb5bca93a8de7d23c84d23462b9700808c30d70a240a6e0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4\"" Sep 13 10:26:38.560859 containerd[1559]: time="2025-09-13T10:26:38.560828945Z" level=info msg="StartContainer for \"78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4\"" Sep 13 10:26:38.562333 containerd[1559]: time="2025-09-13T10:26:38.562300063Z" level=info msg="connecting to shim 78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4" address="unix:///run/containerd/s/5d765ac66281223a48d0c16502c922b6ed8914c392ec86e91c81f6aef09f99bf" protocol=ttrpc version=3 Sep 13 10:26:38.586494 systemd[1]: Started cri-containerd-78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4.scope - libcontainer container 78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4. 
Sep 13 10:26:38.947603 kubelet[2709]: E0913 10:26:38.947516 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rg54z" podUID="c833b9e8-250f-4f26-9d9e-7e0037032f7c" Sep 13 10:26:39.645075 containerd[1559]: time="2025-09-13T10:26:39.645022305Z" level=info msg="StartContainer for \"78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4\" returns successfully" Sep 13 10:26:40.649658 containerd[1559]: time="2025-09-13T10:26:40.649593416Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 10:26:40.653175 systemd[1]: cri-containerd-78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4.scope: Deactivated successfully. Sep 13 10:26:40.653585 systemd[1]: cri-containerd-78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4.scope: Consumed 609ms CPU time, 178.5M memory peak, 2.8M read from disk, 171.3M written to disk. 
Sep 13 10:26:40.655611 containerd[1559]: time="2025-09-13T10:26:40.655568047Z" level=info msg="received exit event container_id:\"78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4\" id:\"78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4\" pid:3473 exited_at:{seconds:1757759200 nanos:655304893}" Sep 13 10:26:40.655924 containerd[1559]: time="2025-09-13T10:26:40.655867381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4\" id:\"78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4\" pid:3473 exited_at:{seconds:1757759200 nanos:655304893}" Sep 13 10:26:40.681284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78b9fc11fc6a038b600088be61d6b4d3abcbf8c0eeb55a7d59f33e2861538cf4-rootfs.mount: Deactivated successfully. Sep 13 10:26:40.696485 kubelet[2709]: I0913 10:26:40.696447 2709 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 10:26:40.954688 systemd[1]: Created slice kubepods-besteffort-podc833b9e8_250f_4f26_9d9e_7e0037032f7c.slice - libcontainer container kubepods-besteffort-podc833b9e8_250f_4f26_9d9e_7e0037032f7c.slice. Sep 13 10:26:40.977240 containerd[1559]: time="2025-09-13T10:26:40.977134551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rg54z,Uid:c833b9e8-250f-4f26-9d9e-7e0037032f7c,Namespace:calico-system,Attempt:0,}" Sep 13 10:26:41.007429 systemd[1]: Created slice kubepods-besteffort-pod581d69a4_f057_48d2_8168_3614bebbafa4.slice - libcontainer container kubepods-besteffort-pod581d69a4_f057_48d2_8168_3614bebbafa4.slice. Sep 13 10:26:41.039118 systemd[1]: Created slice kubepods-besteffort-poded2381f7_f3c1_4802_8345_09791252d48b.slice - libcontainer container kubepods-besteffort-poded2381f7_f3c1_4802_8345_09791252d48b.slice. 
Sep 13 10:26:41.045362 systemd[1]: Created slice kubepods-burstable-podcd777e1a_a140_42bb_a95c_8804813542ec.slice - libcontainer container kubepods-burstable-podcd777e1a_a140_42bb_a95c_8804813542ec.slice.
Sep 13 10:26:41.051343 systemd[1]: Created slice kubepods-burstable-podc48e1894_4eef_4a72_a4e5_8eb63a8f6562.slice - libcontainer container kubepods-burstable-podc48e1894_4eef_4a72_a4e5_8eb63a8f6562.slice.
Sep 13 10:26:41.055377 systemd[1]: Created slice kubepods-besteffort-pod3ec9c671_359f_49ee_83bf_45173542d96c.slice - libcontainer container kubepods-besteffort-pod3ec9c671_359f_49ee_83bf_45173542d96c.slice.
Sep 13 10:26:41.061743 systemd[1]: Created slice kubepods-besteffort-podf9821b4d_af50_4742_9887_0f52d95f4926.slice - libcontainer container kubepods-besteffort-podf9821b4d_af50_4742_9887_0f52d95f4926.slice.
Sep 13 10:26:41.065451 systemd[1]: Created slice kubepods-besteffort-podd70b8fb4_2c48_46ff_884d_b94402a77633.slice - libcontainer container kubepods-besteffort-podd70b8fb4_2c48_46ff_884d_b94402a77633.slice.
Sep 13 10:26:41.174435 kubelet[2709]: I0913 10:26:41.174394 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed2381f7-f3c1-4802-8345-09791252d48b-whisker-backend-key-pair\") pod \"whisker-f8bf97d76-jl59h\" (UID: \"ed2381f7-f3c1-4802-8345-09791252d48b\") " pod="calico-system/whisker-f8bf97d76-jl59h"
Sep 13 10:26:41.174594 kubelet[2709]: I0913 10:26:41.174467 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f9821b4d-af50-4742-9887-0f52d95f4926-goldmane-key-pair\") pod \"goldmane-7988f88666-4pmfw\" (UID: \"f9821b4d-af50-4742-9887-0f52d95f4926\") " pod="calico-system/goldmane-7988f88666-4pmfw"
Sep 13 10:26:41.174594 kubelet[2709]: I0913 10:26:41.174488 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2w8w\" (UniqueName: \"kubernetes.io/projected/f9821b4d-af50-4742-9887-0f52d95f4926-kube-api-access-t2w8w\") pod \"goldmane-7988f88666-4pmfw\" (UID: \"f9821b4d-af50-4742-9887-0f52d95f4926\") " pod="calico-system/goldmane-7988f88666-4pmfw"
Sep 13 10:26:41.174594 kubelet[2709]: I0913 10:26:41.174503 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw2h8\" (UniqueName: \"kubernetes.io/projected/cd777e1a-a140-42bb-a95c-8804813542ec-kube-api-access-cw2h8\") pod \"coredns-7c65d6cfc9-f55jf\" (UID: \"cd777e1a-a140-42bb-a95c-8804813542ec\") " pod="kube-system/coredns-7c65d6cfc9-f55jf"
Sep 13 10:26:41.174594 kubelet[2709]: I0913 10:26:41.174518 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s9pq\" (UniqueName: \"kubernetes.io/projected/581d69a4-f057-48d2-8168-3614bebbafa4-kube-api-access-7s9pq\") pod \"calico-apiserver-6fcc8644f7-ql8zw\" (UID: \"581d69a4-f057-48d2-8168-3614bebbafa4\") " pod="calico-apiserver/calico-apiserver-6fcc8644f7-ql8zw"
Sep 13 10:26:41.174594 kubelet[2709]: I0913 10:26:41.174536 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqbbc\" (UniqueName: \"kubernetes.io/projected/d70b8fb4-2c48-46ff-884d-b94402a77633-kube-api-access-vqbbc\") pod \"calico-kube-controllers-7d6c9cc7cf-2xxx4\" (UID: \"d70b8fb4-2c48-46ff-884d-b94402a77633\") " pod="calico-system/calico-kube-controllers-7d6c9cc7cf-2xxx4"
Sep 13 10:26:41.174738 kubelet[2709]: I0913 10:26:41.174554 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c57c8\" (UniqueName: \"kubernetes.io/projected/ed2381f7-f3c1-4802-8345-09791252d48b-kube-api-access-c57c8\") pod \"whisker-f8bf97d76-jl59h\" (UID: \"ed2381f7-f3c1-4802-8345-09791252d48b\") " pod="calico-system/whisker-f8bf97d76-jl59h"
Sep 13 10:26:41.174738 kubelet[2709]: I0913 10:26:41.174618 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd777e1a-a140-42bb-a95c-8804813542ec-config-volume\") pod \"coredns-7c65d6cfc9-f55jf\" (UID: \"cd777e1a-a140-42bb-a95c-8804813542ec\") " pod="kube-system/coredns-7c65d6cfc9-f55jf"
Sep 13 10:26:41.174738 kubelet[2709]: I0913 10:26:41.174692 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/581d69a4-f057-48d2-8168-3614bebbafa4-calico-apiserver-certs\") pod \"calico-apiserver-6fcc8644f7-ql8zw\" (UID: \"581d69a4-f057-48d2-8168-3614bebbafa4\") " pod="calico-apiserver/calico-apiserver-6fcc8644f7-ql8zw"
Sep 13 10:26:41.174825 kubelet[2709]: I0913 10:26:41.174747 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9821b4d-af50-4742-9887-0f52d95f4926-config\") pod \"goldmane-7988f88666-4pmfw\" (UID: \"f9821b4d-af50-4742-9887-0f52d95f4926\") " pod="calico-system/goldmane-7988f88666-4pmfw"
Sep 13 10:26:41.174825 kubelet[2709]: I0913 10:26:41.174775 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d70b8fb4-2c48-46ff-884d-b94402a77633-tigera-ca-bundle\") pod \"calico-kube-controllers-7d6c9cc7cf-2xxx4\" (UID: \"d70b8fb4-2c48-46ff-884d-b94402a77633\") " pod="calico-system/calico-kube-controllers-7d6c9cc7cf-2xxx4"
Sep 13 10:26:41.174825 kubelet[2709]: I0913 10:26:41.174793 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9821b4d-af50-4742-9887-0f52d95f4926-goldmane-ca-bundle\") pod \"goldmane-7988f88666-4pmfw\" (UID: \"f9821b4d-af50-4742-9887-0f52d95f4926\") " pod="calico-system/goldmane-7988f88666-4pmfw"
Sep 13 10:26:41.174825 kubelet[2709]: I0913 10:26:41.174811 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c48e1894-4eef-4a72-a4e5-8eb63a8f6562-config-volume\") pod \"coredns-7c65d6cfc9-tx4kf\" (UID: \"c48e1894-4eef-4a72-a4e5-8eb63a8f6562\") " pod="kube-system/coredns-7c65d6cfc9-tx4kf"
Sep 13 10:26:41.174942 kubelet[2709]: I0913 10:26:41.174919 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3ec9c671-359f-49ee-83bf-45173542d96c-calico-apiserver-certs\") pod \"calico-apiserver-6fcc8644f7-7bnjp\" (UID: \"3ec9c671-359f-49ee-83bf-45173542d96c\") " pod="calico-apiserver/calico-apiserver-6fcc8644f7-7bnjp"
Sep 13 10:26:41.174971 kubelet[2709]: I0913 10:26:41.174944 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv97l\" (UniqueName: \"kubernetes.io/projected/c48e1894-4eef-4a72-a4e5-8eb63a8f6562-kube-api-access-gv97l\") pod \"coredns-7c65d6cfc9-tx4kf\" (UID: \"c48e1894-4eef-4a72-a4e5-8eb63a8f6562\") " pod="kube-system/coredns-7c65d6cfc9-tx4kf"
Sep 13 10:26:41.174971 kubelet[2709]: I0913 10:26:41.174962 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed2381f7-f3c1-4802-8345-09791252d48b-whisker-ca-bundle\") pod \"whisker-f8bf97d76-jl59h\" (UID: \"ed2381f7-f3c1-4802-8345-09791252d48b\") " pod="calico-system/whisker-f8bf97d76-jl59h"
Sep 13 10:26:41.175035 kubelet[2709]: I0913 10:26:41.174982 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzfzr\" (UniqueName: \"kubernetes.io/projected/3ec9c671-359f-49ee-83bf-45173542d96c-kube-api-access-zzfzr\") pod \"calico-apiserver-6fcc8644f7-7bnjp\" (UID: \"3ec9c671-359f-49ee-83bf-45173542d96c\") " pod="calico-apiserver/calico-apiserver-6fcc8644f7-7bnjp"
Sep 13 10:26:41.342144 containerd[1559]: time="2025-09-13T10:26:41.342088262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f8bf97d76-jl59h,Uid:ed2381f7-f3c1-4802-8345-09791252d48b,Namespace:calico-system,Attempt:0,}"
Sep 13 10:26:41.351589 containerd[1559]: time="2025-09-13T10:26:41.351543819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f55jf,Uid:cd777e1a-a140-42bb-a95c-8804813542ec,Namespace:kube-system,Attempt:0,}"
Sep 13 10:26:41.362897 containerd[1559]: time="2025-09-13T10:26:41.362554079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc8644f7-7bnjp,Uid:3ec9c671-359f-49ee-83bf-45173542d96c,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 10:26:41.362897 containerd[1559]: time="2025-09-13T10:26:41.362554119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tx4kf,Uid:c48e1894-4eef-4a72-a4e5-8eb63a8f6562,Namespace:kube-system,Attempt:0,}"
Sep 13 10:26:41.365924 containerd[1559]: time="2025-09-13T10:26:41.365782351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4pmfw,Uid:f9821b4d-af50-4742-9887-0f52d95f4926,Namespace:calico-system,Attempt:0,}"
Sep 13 10:26:41.370640 containerd[1559]: time="2025-09-13T10:26:41.370582631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6c9cc7cf-2xxx4,Uid:d70b8fb4-2c48-46ff-884d-b94402a77633,Namespace:calico-system,Attempt:0,}"
Sep 13 10:26:41.385205 containerd[1559]: time="2025-09-13T10:26:41.385141466Z" level=error msg="Failed to destroy network for sandbox \"a075685225f55fe4c69fb40ff221efce4d8b46bf3ee56dd41dd1c3a0421522e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.393214 containerd[1559]: time="2025-09-13T10:26:41.393156401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rg54z,Uid:c833b9e8-250f-4f26-9d9e-7e0037032f7c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a075685225f55fe4c69fb40ff221efce4d8b46bf3ee56dd41dd1c3a0421522e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.396940 kubelet[2709]: E0913 10:26:41.396438 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a075685225f55fe4c69fb40ff221efce4d8b46bf3ee56dd41dd1c3a0421522e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.396940 kubelet[2709]: E0913 10:26:41.396593 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a075685225f55fe4c69fb40ff221efce4d8b46bf3ee56dd41dd1c3a0421522e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rg54z"
Sep 13 10:26:41.396940 kubelet[2709]: E0913 10:26:41.396616 2709 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a075685225f55fe4c69fb40ff221efce4d8b46bf3ee56dd41dd1c3a0421522e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rg54z"
Sep 13 10:26:41.397098 kubelet[2709]: E0913 10:26:41.396667 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rg54z_calico-system(c833b9e8-250f-4f26-9d9e-7e0037032f7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rg54z_calico-system(c833b9e8-250f-4f26-9d9e-7e0037032f7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a075685225f55fe4c69fb40ff221efce4d8b46bf3ee56dd41dd1c3a0421522e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rg54z" podUID="c833b9e8-250f-4f26-9d9e-7e0037032f7c"
Sep 13 10:26:41.450363 containerd[1559]: time="2025-09-13T10:26:41.450288989Z" level=error msg="Failed to destroy network for sandbox \"9e684f770de8c7c110af9f575ba4e7dc5324f0eb99cd68d1c1c517dbe2036bb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.452956 containerd[1559]: time="2025-09-13T10:26:41.452884402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f8bf97d76-jl59h,Uid:ed2381f7-f3c1-4802-8345-09791252d48b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e684f770de8c7c110af9f575ba4e7dc5324f0eb99cd68d1c1c517dbe2036bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.453246 kubelet[2709]: E0913 10:26:41.453188 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e684f770de8c7c110af9f575ba4e7dc5324f0eb99cd68d1c1c517dbe2036bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.453359 kubelet[2709]: E0913 10:26:41.453327 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e684f770de8c7c110af9f575ba4e7dc5324f0eb99cd68d1c1c517dbe2036bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f8bf97d76-jl59h"
Sep 13 10:26:41.453406 kubelet[2709]: E0913 10:26:41.453365 2709 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e684f770de8c7c110af9f575ba4e7dc5324f0eb99cd68d1c1c517dbe2036bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f8bf97d76-jl59h"
Sep 13 10:26:41.453475 kubelet[2709]: E0913 10:26:41.453435 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f8bf97d76-jl59h_calico-system(ed2381f7-f3c1-4802-8345-09791252d48b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f8bf97d76-jl59h_calico-system(ed2381f7-f3c1-4802-8345-09791252d48b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e684f770de8c7c110af9f575ba4e7dc5324f0eb99cd68d1c1c517dbe2036bb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f8bf97d76-jl59h" podUID="ed2381f7-f3c1-4802-8345-09791252d48b"
Sep 13 10:26:41.465577 containerd[1559]: time="2025-09-13T10:26:41.465485654Z" level=error msg="Failed to destroy network for sandbox \"5da97b5ecd23cc27f0233255f19e4fe371714bf86a40a2a2ec1c8cb6b2988281\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.468498 containerd[1559]: time="2025-09-13T10:26:41.468445221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f55jf,Uid:cd777e1a-a140-42bb-a95c-8804813542ec,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da97b5ecd23cc27f0233255f19e4fe371714bf86a40a2a2ec1c8cb6b2988281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.468814 kubelet[2709]: E0913 10:26:41.468760 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da97b5ecd23cc27f0233255f19e4fe371714bf86a40a2a2ec1c8cb6b2988281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.468863 kubelet[2709]: E0913 10:26:41.468847 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da97b5ecd23cc27f0233255f19e4fe371714bf86a40a2a2ec1c8cb6b2988281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f55jf"
Sep 13 10:26:41.468898 kubelet[2709]: E0913 10:26:41.468872 2709 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da97b5ecd23cc27f0233255f19e4fe371714bf86a40a2a2ec1c8cb6b2988281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f55jf"
Sep 13 10:26:41.468966 kubelet[2709]: E0913 10:26:41.468933 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-f55jf_kube-system(cd777e1a-a140-42bb-a95c-8804813542ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-f55jf_kube-system(cd777e1a-a140-42bb-a95c-8804813542ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5da97b5ecd23cc27f0233255f19e4fe371714bf86a40a2a2ec1c8cb6b2988281\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f55jf" podUID="cd777e1a-a140-42bb-a95c-8804813542ec"
Sep 13 10:26:41.470803 containerd[1559]: time="2025-09-13T10:26:41.470641392Z" level=error msg="Failed to destroy network for sandbox \"492043e3ca7243711240982649e2b4205eb28a7dc2887cbefedbed2b18dad0ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.474290 containerd[1559]: time="2025-09-13T10:26:41.474090009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc8644f7-7bnjp,Uid:3ec9c671-359f-49ee-83bf-45173542d96c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"492043e3ca7243711240982649e2b4205eb28a7dc2887cbefedbed2b18dad0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.474842 kubelet[2709]: E0913 10:26:41.474800 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"492043e3ca7243711240982649e2b4205eb28a7dc2887cbefedbed2b18dad0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.474920 kubelet[2709]: E0913 10:26:41.474887 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"492043e3ca7243711240982649e2b4205eb28a7dc2887cbefedbed2b18dad0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcc8644f7-7bnjp"
Sep 13 10:26:41.474958 kubelet[2709]: E0913 10:26:41.474922 2709 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"492043e3ca7243711240982649e2b4205eb28a7dc2887cbefedbed2b18dad0ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcc8644f7-7bnjp"
Sep 13 10:26:41.475017 kubelet[2709]: E0913 10:26:41.474983 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fcc8644f7-7bnjp_calico-apiserver(3ec9c671-359f-49ee-83bf-45173542d96c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fcc8644f7-7bnjp_calico-apiserver(3ec9c671-359f-49ee-83bf-45173542d96c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"492043e3ca7243711240982649e2b4205eb28a7dc2887cbefedbed2b18dad0ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcc8644f7-7bnjp" podUID="3ec9c671-359f-49ee-83bf-45173542d96c"
Sep 13 10:26:41.476555 containerd[1559]: time="2025-09-13T10:26:41.476506835Z" level=error msg="Failed to destroy network for sandbox \"011e929d6cc91a3f979b6d1dc8be3016fa4ba743000a327003f6694e63b5130a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.478351 containerd[1559]: time="2025-09-13T10:26:41.478209568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tx4kf,Uid:c48e1894-4eef-4a72-a4e5-8eb63a8f6562,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"011e929d6cc91a3f979b6d1dc8be3016fa4ba743000a327003f6694e63b5130a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.478591 kubelet[2709]: E0913 10:26:41.478546 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"011e929d6cc91a3f979b6d1dc8be3016fa4ba743000a327003f6694e63b5130a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.478680 kubelet[2709]: E0913 10:26:41.478633 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"011e929d6cc91a3f979b6d1dc8be3016fa4ba743000a327003f6694e63b5130a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tx4kf"
Sep 13 10:26:41.478680 kubelet[2709]: E0913 10:26:41.478666 2709 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"011e929d6cc91a3f979b6d1dc8be3016fa4ba743000a327003f6694e63b5130a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tx4kf"
Sep 13 10:26:41.478965 kubelet[2709]: E0913 10:26:41.478834 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tx4kf_kube-system(c48e1894-4eef-4a72-a4e5-8eb63a8f6562)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tx4kf_kube-system(c48e1894-4eef-4a72-a4e5-8eb63a8f6562)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"011e929d6cc91a3f979b6d1dc8be3016fa4ba743000a327003f6694e63b5130a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tx4kf" podUID="c48e1894-4eef-4a72-a4e5-8eb63a8f6562"
Sep 13 10:26:41.479063 containerd[1559]: time="2025-09-13T10:26:41.478803846Z" level=error msg="Failed to destroy network for sandbox \"333b3896a19809013f37472536fa07ebf1f4521af1d11401689909298fe024fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.480091 containerd[1559]: time="2025-09-13T10:26:41.480038448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6c9cc7cf-2xxx4,Uid:d70b8fb4-2c48-46ff-884d-b94402a77633,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"333b3896a19809013f37472536fa07ebf1f4521af1d11401689909298fe024fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.480283 kubelet[2709]: E0913 10:26:41.480240 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"333b3896a19809013f37472536fa07ebf1f4521af1d11401689909298fe024fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.480404 kubelet[2709]: E0913 10:26:41.480367 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"333b3896a19809013f37472536fa07ebf1f4521af1d11401689909298fe024fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6c9cc7cf-2xxx4"
Sep 13 10:26:41.480404 kubelet[2709]: E0913 10:26:41.480388 2709 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"333b3896a19809013f37472536fa07ebf1f4521af1d11401689909298fe024fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6c9cc7cf-2xxx4"
Sep 13 10:26:41.480481 kubelet[2709]: E0913 10:26:41.480433 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d6c9cc7cf-2xxx4_calico-system(d70b8fb4-2c48-46ff-884d-b94402a77633)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d6c9cc7cf-2xxx4_calico-system(d70b8fb4-2c48-46ff-884d-b94402a77633)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"333b3896a19809013f37472536fa07ebf1f4521af1d11401689909298fe024fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d6c9cc7cf-2xxx4" podUID="d70b8fb4-2c48-46ff-884d-b94402a77633"
Sep 13 10:26:41.488962 containerd[1559]: time="2025-09-13T10:26:41.488905467Z" level=error msg="Failed to destroy network for sandbox \"f51f91283fc6a169cd6d2ee3749bf1fe6043638f17cf800f9d4f4cd776358bc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.527645 containerd[1559]: time="2025-09-13T10:26:41.527564916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4pmfw,Uid:f9821b4d-af50-4742-9887-0f52d95f4926,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f51f91283fc6a169cd6d2ee3749bf1fe6043638f17cf800f9d4f4cd776358bc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.527901 kubelet[2709]: E0913 10:26:41.527820 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f51f91283fc6a169cd6d2ee3749bf1fe6043638f17cf800f9d4f4cd776358bc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:26:41.527901 kubelet[2709]: E0913 10:26:41.527885 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f51f91283fc6a169cd6d2ee3749bf1fe6043638f17cf800f9d4f4cd776358bc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-4pmfw"
Sep 13 10:26:41.527989 kubelet[2709]: E0913 10:26:41.527906 2709 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f51f91283fc6a169cd6d2ee3749bf1fe6043638f17cf800f9d4f4cd776358bc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-4pmfw"
Sep 13 10:26:41.527989 kubelet[2709]: E0913 10:26:41.527955 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-4pmfw_calico-system(f9821b4d-af50-4742-9887-0f52d95f4926)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-4pmfw_calico-system(f9821b4d-af50-4742-9887-0f52d95f4926)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f51f91283fc6a169cd6d2ee3749bf1fe6043638f17cf800f9d4f4cd776358bc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-4pmfw" podUID="f9821b4d-af50-4742-9887-0f52d95f4926"
Sep 13 10:26:41.645010 containerd[1559]: time="2025-09-13T10:26:41.644854159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc8644f7-ql8zw,Uid:581d69a4-f057-48d2-8168-3614bebbafa4,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 10:26:41.655704 containerd[1559]: time="2025-09-13T10:26:41.655565336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 13 10:26:41.710566 systemd[1]: run-netns-cni\x2d2458a46c\x2d1390\x2d3a2c\x2d2d1d\x2d08a9b6968240.mount: Deactivated successfully.
Sep 13 10:26:41.776791 containerd[1559]: time="2025-09-13T10:26:41.776718077Z" level=error msg="Failed to destroy network for sandbox \"e6c5192251d282ddc58f9fb2453fe25432bc2f65d2e48d0c251ce6f873a6e8e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:26:41.779237 systemd[1]: run-netns-cni\x2dde29a93b\x2d2078\x2d02da\x2de8d4\x2d6640e83a5ccf.mount: Deactivated successfully. Sep 13 10:26:41.787732 containerd[1559]: time="2025-09-13T10:26:41.787649078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc8644f7-ql8zw,Uid:581d69a4-f057-48d2-8168-3614bebbafa4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c5192251d282ddc58f9fb2453fe25432bc2f65d2e48d0c251ce6f873a6e8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:26:41.788026 kubelet[2709]: E0913 10:26:41.787968 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c5192251d282ddc58f9fb2453fe25432bc2f65d2e48d0c251ce6f873a6e8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:26:41.788564 kubelet[2709]: E0913 10:26:41.788047 2709 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c5192251d282ddc58f9fb2453fe25432bc2f65d2e48d0c251ce6f873a6e8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcc8644f7-ql8zw" Sep 13 10:26:41.788564 kubelet[2709]: E0913 10:26:41.788071 2709 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c5192251d282ddc58f9fb2453fe25432bc2f65d2e48d0c251ce6f873a6e8e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fcc8644f7-ql8zw" Sep 13 10:26:41.788564 kubelet[2709]: E0913 10:26:41.788122 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fcc8644f7-ql8zw_calico-apiserver(581d69a4-f057-48d2-8168-3614bebbafa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fcc8644f7-ql8zw_calico-apiserver(581d69a4-f057-48d2-8168-3614bebbafa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6c5192251d282ddc58f9fb2453fe25432bc2f65d2e48d0c251ce6f873a6e8e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fcc8644f7-ql8zw" podUID="581d69a4-f057-48d2-8168-3614bebbafa4" Sep 13 10:26:45.494240 kubelet[2709]: I0913 10:26:45.494176 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 10:26:47.998221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount235939863.mount: Deactivated successfully. 
Sep 13 10:26:49.704806 containerd[1559]: time="2025-09-13T10:26:49.704711348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:49.734857 containerd[1559]: time="2025-09-13T10:26:49.706365327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 10:26:49.735098 containerd[1559]: time="2025-09-13T10:26:49.709309358Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:49.735262 containerd[1559]: time="2025-09-13T10:26:49.712181244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.056569309s" Sep 13 10:26:49.735262 containerd[1559]: time="2025-09-13T10:26:49.735240249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 10:26:49.736145 containerd[1559]: time="2025-09-13T10:26:49.736096989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:49.765135 containerd[1559]: time="2025-09-13T10:26:49.764617585Z" level=info msg="CreateContainer within sandbox \"abf5c96e2a4574972eb5bca93a8de7d23c84d23462b9700808c30d70a240a6e0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 10:26:50.020542 containerd[1559]: time="2025-09-13T10:26:50.020378185Z" level=info msg="Container 
194ce53ea006e4daecaebbdb25c00c0c6f13cd834a7ef357318a82dd89ecdbc6: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:50.091800 containerd[1559]: time="2025-09-13T10:26:50.091717087Z" level=info msg="CreateContainer within sandbox \"abf5c96e2a4574972eb5bca93a8de7d23c84d23462b9700808c30d70a240a6e0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"194ce53ea006e4daecaebbdb25c00c0c6f13cd834a7ef357318a82dd89ecdbc6\"" Sep 13 10:26:50.092396 containerd[1559]: time="2025-09-13T10:26:50.092360436Z" level=info msg="StartContainer for \"194ce53ea006e4daecaebbdb25c00c0c6f13cd834a7ef357318a82dd89ecdbc6\"" Sep 13 10:26:50.094079 containerd[1559]: time="2025-09-13T10:26:50.094038208Z" level=info msg="connecting to shim 194ce53ea006e4daecaebbdb25c00c0c6f13cd834a7ef357318a82dd89ecdbc6" address="unix:///run/containerd/s/5d765ac66281223a48d0c16502c922b6ed8914c392ec86e91c81f6aef09f99bf" protocol=ttrpc version=3 Sep 13 10:26:50.116429 systemd[1]: Started cri-containerd-194ce53ea006e4daecaebbdb25c00c0c6f13cd834a7ef357318a82dd89ecdbc6.scope - libcontainer container 194ce53ea006e4daecaebbdb25c00c0c6f13cd834a7ef357318a82dd89ecdbc6. Sep 13 10:26:50.164402 containerd[1559]: time="2025-09-13T10:26:50.164351224Z" level=info msg="StartContainer for \"194ce53ea006e4daecaebbdb25c00c0c6f13cd834a7ef357318a82dd89ecdbc6\" returns successfully" Sep 13 10:26:50.238339 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 10:26:50.238468 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 10:26:50.831841 containerd[1559]: time="2025-09-13T10:26:50.831780168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"194ce53ea006e4daecaebbdb25c00c0c6f13cd834a7ef357318a82dd89ecdbc6\" id:\"dbdeaf09634fc2bdbfc94529cac658172373beac0b91e20b9d56fdbaec336149\" pid:3839 exit_status:1 exited_at:{seconds:1757759210 nanos:831346914}" Sep 13 10:26:51.537960 kubelet[2709]: I0913 10:26:51.537865 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kng5h" podStartSLOduration=2.5121839489999997 podStartE2EDuration="24.537838426s" podCreationTimestamp="2025-09-13 10:26:27 +0000 UTC" firstStartedPulling="2025-09-13 10:26:27.728099607 +0000 UTC m=+19.870627708" lastFinishedPulling="2025-09-13 10:26:49.753754084 +0000 UTC m=+41.896282185" observedRunningTime="2025-09-13 10:26:50.931642827 +0000 UTC m=+43.074170948" watchObservedRunningTime="2025-09-13 10:26:51.537838426 +0000 UTC m=+43.680366527" Sep 13 10:26:51.742919 kubelet[2709]: I0913 10:26:51.742863 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed2381f7-f3c1-4802-8345-09791252d48b-whisker-backend-key-pair\") pod \"ed2381f7-f3c1-4802-8345-09791252d48b\" (UID: \"ed2381f7-f3c1-4802-8345-09791252d48b\") " Sep 13 10:26:51.743991 kubelet[2709]: I0913 10:26:51.743710 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c57c8\" (UniqueName: \"kubernetes.io/projected/ed2381f7-f3c1-4802-8345-09791252d48b-kube-api-access-c57c8\") pod \"ed2381f7-f3c1-4802-8345-09791252d48b\" (UID: \"ed2381f7-f3c1-4802-8345-09791252d48b\") " Sep 13 10:26:51.743991 kubelet[2709]: I0913 10:26:51.743754 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed2381f7-f3c1-4802-8345-09791252d48b-whisker-ca-bundle\") pod 
\"ed2381f7-f3c1-4802-8345-09791252d48b\" (UID: \"ed2381f7-f3c1-4802-8345-09791252d48b\") " Sep 13 10:26:51.751010 systemd[1]: var-lib-kubelet-pods-ed2381f7\x2df3c1\x2d4802\x2d8345\x2d09791252d48b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc57c8.mount: Deactivated successfully. Sep 13 10:26:51.753377 kubelet[2709]: I0913 10:26:51.752396 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed2381f7-f3c1-4802-8345-09791252d48b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ed2381f7-f3c1-4802-8345-09791252d48b" (UID: "ed2381f7-f3c1-4802-8345-09791252d48b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 10:26:51.753539 kubelet[2709]: I0913 10:26:51.753479 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed2381f7-f3c1-4802-8345-09791252d48b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ed2381f7-f3c1-4802-8345-09791252d48b" (UID: "ed2381f7-f3c1-4802-8345-09791252d48b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 10:26:51.756910 systemd[1]: var-lib-kubelet-pods-ed2381f7\x2df3c1\x2d4802\x2d8345\x2d09791252d48b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 10:26:51.758120 kubelet[2709]: I0913 10:26:51.757227 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2381f7-f3c1-4802-8345-09791252d48b-kube-api-access-c57c8" (OuterVolumeSpecName: "kube-api-access-c57c8") pod "ed2381f7-f3c1-4802-8345-09791252d48b" (UID: "ed2381f7-f3c1-4802-8345-09791252d48b"). InnerVolumeSpecName "kube-api-access-c57c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 10:26:51.845106 kubelet[2709]: I0913 10:26:51.844422 2709 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c57c8\" (UniqueName: \"kubernetes.io/projected/ed2381f7-f3c1-4802-8345-09791252d48b-kube-api-access-c57c8\") on node \"localhost\" DevicePath \"\"" Sep 13 10:26:51.845106 kubelet[2709]: I0913 10:26:51.844470 2709 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed2381f7-f3c1-4802-8345-09791252d48b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 10:26:51.845106 kubelet[2709]: I0913 10:26:51.844481 2709 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed2381f7-f3c1-4802-8345-09791252d48b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 10:26:51.846530 containerd[1559]: time="2025-09-13T10:26:51.846488863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"194ce53ea006e4daecaebbdb25c00c0c6f13cd834a7ef357318a82dd89ecdbc6\" id:\"161eabf8ed97bea6af09ecee7ef2e450b2c63d7021962590a25f652ec1da3f7e\" pid:3873 exit_status:1 exited_at:{seconds:1757759211 nanos:846101315}" Sep 13 10:26:51.967732 systemd[1]: Removed slice kubepods-besteffort-poded2381f7_f3c1_4802_8345_09791252d48b.slice - libcontainer container kubepods-besteffort-poded2381f7_f3c1_4802_8345_09791252d48b.slice. Sep 13 10:26:52.038462 systemd[1]: Created slice kubepods-besteffort-pod3b98839c_4da6_4a8c_adc7_08996d77c37e.slice - libcontainer container kubepods-besteffort-pod3b98839c_4da6_4a8c_adc7_08996d77c37e.slice. Sep 13 10:26:52.095755 systemd[1]: Started sshd@7-10.0.0.126:22-10.0.0.1:49918.service - OpenSSH per-connection server daemon (10.0.0.1:49918). 
Sep 13 10:26:52.145886 kubelet[2709]: I0913 10:26:52.145830 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szgt5\" (UniqueName: \"kubernetes.io/projected/3b98839c-4da6-4a8c-adc7-08996d77c37e-kube-api-access-szgt5\") pod \"whisker-769bc54c55-k8dnk\" (UID: \"3b98839c-4da6-4a8c-adc7-08996d77c37e\") " pod="calico-system/whisker-769bc54c55-k8dnk" Sep 13 10:26:52.145886 kubelet[2709]: I0913 10:26:52.145884 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b98839c-4da6-4a8c-adc7-08996d77c37e-whisker-ca-bundle\") pod \"whisker-769bc54c55-k8dnk\" (UID: \"3b98839c-4da6-4a8c-adc7-08996d77c37e\") " pod="calico-system/whisker-769bc54c55-k8dnk" Sep 13 10:26:52.146069 kubelet[2709]: I0913 10:26:52.145904 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b98839c-4da6-4a8c-adc7-08996d77c37e-whisker-backend-key-pair\") pod \"whisker-769bc54c55-k8dnk\" (UID: \"3b98839c-4da6-4a8c-adc7-08996d77c37e\") " pod="calico-system/whisker-769bc54c55-k8dnk" Sep 13 10:26:52.167945 sshd[3900]: Accepted publickey for core from 10.0.0.1 port 49918 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:26:52.169443 sshd-session[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:26:52.174182 systemd-logind[1544]: New session 8 of user core. Sep 13 10:26:52.181447 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 10:26:52.313486 sshd[3903]: Connection closed by 10.0.0.1 port 49918 Sep 13 10:26:52.313793 sshd-session[3900]: pam_unix(sshd:session): session closed for user core Sep 13 10:26:52.316728 systemd[1]: sshd@7-10.0.0.126:22-10.0.0.1:49918.service: Deactivated successfully. 
Sep 13 10:26:52.318928 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 10:26:52.320579 systemd-logind[1544]: Session 8 logged out. Waiting for processes to exit. Sep 13 10:26:52.321921 systemd-logind[1544]: Removed session 8. Sep 13 10:26:52.343501 containerd[1559]: time="2025-09-13T10:26:52.343445283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-769bc54c55-k8dnk,Uid:3b98839c-4da6-4a8c-adc7-08996d77c37e,Namespace:calico-system,Attempt:0,}" Sep 13 10:26:52.488446 systemd-networkd[1479]: calib60e1f7d296: Link UP Sep 13 10:26:52.488763 systemd-networkd[1479]: calib60e1f7d296: Gained carrier Sep 13 10:26:52.504464 containerd[1559]: 2025-09-13 10:26:52.364 [INFO][3929] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 10:26:52.504464 containerd[1559]: 2025-09-13 10:26:52.380 [INFO][3929] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--769bc54c55--k8dnk-eth0 whisker-769bc54c55- calico-system 3b98839c-4da6-4a8c-adc7-08996d77c37e 938 0 2025-09-13 10:26:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:769bc54c55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-769bc54c55-k8dnk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib60e1f7d296 [] [] }} ContainerID="4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" Namespace="calico-system" Pod="whisker-769bc54c55-k8dnk" WorkloadEndpoint="localhost-k8s-whisker--769bc54c55--k8dnk-" Sep 13 10:26:52.504464 containerd[1559]: 2025-09-13 10:26:52.380 [INFO][3929] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" Namespace="calico-system" Pod="whisker-769bc54c55-k8dnk" WorkloadEndpoint="localhost-k8s-whisker--769bc54c55--k8dnk-eth0" Sep 13 
10:26:52.504464 containerd[1559]: 2025-09-13 10:26:52.445 [INFO][3944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" HandleID="k8s-pod-network.4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" Workload="localhost-k8s-whisker--769bc54c55--k8dnk-eth0" Sep 13 10:26:52.504715 containerd[1559]: 2025-09-13 10:26:52.446 [INFO][3944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" HandleID="k8s-pod-network.4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" Workload="localhost-k8s-whisker--769bc54c55--k8dnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b0300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-769bc54c55-k8dnk", "timestamp":"2025-09-13 10:26:52.445382253 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:52.504715 containerd[1559]: 2025-09-13 10:26:52.446 [INFO][3944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:26:52.504715 containerd[1559]: 2025-09-13 10:26:52.446 [INFO][3944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 10:26:52.504715 containerd[1559]: 2025-09-13 10:26:52.446 [INFO][3944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:52.504715 containerd[1559]: 2025-09-13 10:26:52.454 [INFO][3944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" host="localhost" Sep 13 10:26:52.504715 containerd[1559]: 2025-09-13 10:26:52.459 [INFO][3944] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:52.504715 containerd[1559]: 2025-09-13 10:26:52.463 [INFO][3944] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:52.504715 containerd[1559]: 2025-09-13 10:26:52.465 [INFO][3944] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:52.504715 containerd[1559]: 2025-09-13 10:26:52.467 [INFO][3944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:52.504715 containerd[1559]: 2025-09-13 10:26:52.467 [INFO][3944] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" host="localhost" Sep 13 10:26:52.505049 containerd[1559]: 2025-09-13 10:26:52.468 [INFO][3944] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01 Sep 13 10:26:52.505049 containerd[1559]: 2025-09-13 10:26:52.473 [INFO][3944] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" host="localhost" Sep 13 10:26:52.505049 containerd[1559]: 2025-09-13 10:26:52.477 [INFO][3944] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" host="localhost" Sep 13 10:26:52.505049 containerd[1559]: 2025-09-13 10:26:52.478 [INFO][3944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" host="localhost" Sep 13 10:26:52.505049 containerd[1559]: 2025-09-13 10:26:52.478 [INFO][3944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:52.505049 containerd[1559]: 2025-09-13 10:26:52.478 [INFO][3944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" HandleID="k8s-pod-network.4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" Workload="localhost-k8s-whisker--769bc54c55--k8dnk-eth0" Sep 13 10:26:52.505241 containerd[1559]: 2025-09-13 10:26:52.481 [INFO][3929] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" Namespace="calico-system" Pod="whisker-769bc54c55-k8dnk" WorkloadEndpoint="localhost-k8s-whisker--769bc54c55--k8dnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--769bc54c55--k8dnk-eth0", GenerateName:"whisker-769bc54c55-", Namespace:"calico-system", SelfLink:"", UID:"3b98839c-4da6-4a8c-adc7-08996d77c37e", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"769bc54c55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-769bc54c55-k8dnk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib60e1f7d296", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:52.505241 containerd[1559]: 2025-09-13 10:26:52.481 [INFO][3929] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" Namespace="calico-system" Pod="whisker-769bc54c55-k8dnk" WorkloadEndpoint="localhost-k8s-whisker--769bc54c55--k8dnk-eth0" Sep 13 10:26:52.505411 containerd[1559]: 2025-09-13 10:26:52.481 [INFO][3929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib60e1f7d296 ContainerID="4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" Namespace="calico-system" Pod="whisker-769bc54c55-k8dnk" WorkloadEndpoint="localhost-k8s-whisker--769bc54c55--k8dnk-eth0" Sep 13 10:26:52.505411 containerd[1559]: 2025-09-13 10:26:52.488 [INFO][3929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" Namespace="calico-system" Pod="whisker-769bc54c55-k8dnk" WorkloadEndpoint="localhost-k8s-whisker--769bc54c55--k8dnk-eth0" Sep 13 10:26:52.505469 containerd[1559]: 2025-09-13 10:26:52.489 [INFO][3929] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" Namespace="calico-system" Pod="whisker-769bc54c55-k8dnk" 
WorkloadEndpoint="localhost-k8s-whisker--769bc54c55--k8dnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--769bc54c55--k8dnk-eth0", GenerateName:"whisker-769bc54c55-", Namespace:"calico-system", SelfLink:"", UID:"3b98839c-4da6-4a8c-adc7-08996d77c37e", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"769bc54c55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01", Pod:"whisker-769bc54c55-k8dnk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib60e1f7d296", MAC:"f6:dc:6b:16:5e:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:52.505544 containerd[1559]: 2025-09-13 10:26:52.500 [INFO][3929] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" Namespace="calico-system" Pod="whisker-769bc54c55-k8dnk" WorkloadEndpoint="localhost-k8s-whisker--769bc54c55--k8dnk-eth0" Sep 13 10:26:52.605904 containerd[1559]: time="2025-09-13T10:26:52.605840805Z" level=info msg="connecting to shim 
4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01" address="unix:///run/containerd/s/490d23e927636ec9caf5f64121e31947763839e74257e2e720b6c2c9c810fdb3" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:52.635420 systemd[1]: Started cri-containerd-4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01.scope - libcontainer container 4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01. Sep 13 10:26:52.649369 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:52.901095 containerd[1559]: time="2025-09-13T10:26:52.901037869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-769bc54c55-k8dnk,Uid:3b98839c-4da6-4a8c-adc7-08996d77c37e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01\"" Sep 13 10:26:52.905323 containerd[1559]: time="2025-09-13T10:26:52.904168159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 10:26:52.949322 containerd[1559]: time="2025-09-13T10:26:52.949218939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rg54z,Uid:c833b9e8-250f-4f26-9d9e-7e0037032f7c,Namespace:calico-system,Attempt:0,}" Sep 13 10:26:52.950366 containerd[1559]: time="2025-09-13T10:26:52.950341297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6c9cc7cf-2xxx4,Uid:d70b8fb4-2c48-46ff-884d-b94402a77633,Namespace:calico-system,Attempt:0,}" Sep 13 10:26:53.491459 systemd-networkd[1479]: vxlan.calico: Link UP Sep 13 10:26:53.491477 systemd-networkd[1479]: vxlan.calico: Gained carrier Sep 13 10:26:53.516359 systemd-networkd[1479]: cali798550e0b2e: Link UP Sep 13 10:26:53.516603 systemd-networkd[1479]: cali798550e0b2e: Gained carrier Sep 13 10:26:53.534253 containerd[1559]: 2025-09-13 10:26:53.415 [INFO][4132] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rg54z-eth0 csi-node-driver- calico-system c833b9e8-250f-4f26-9d9e-7e0037032f7c 718 0 2025-09-13 10:26:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rg54z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali798550e0b2e [] [] }} ContainerID="31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" Namespace="calico-system" Pod="csi-node-driver-rg54z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rg54z-" Sep 13 10:26:53.534253 containerd[1559]: 2025-09-13 10:26:53.415 [INFO][4132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" Namespace="calico-system" Pod="csi-node-driver-rg54z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rg54z-eth0" Sep 13 10:26:53.534253 containerd[1559]: 2025-09-13 10:26:53.457 [INFO][4174] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" HandleID="k8s-pod-network.31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" Workload="localhost-k8s-csi--node--driver--rg54z-eth0" Sep 13 10:26:53.534518 containerd[1559]: 2025-09-13 10:26:53.461 [INFO][4174] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" HandleID="k8s-pod-network.31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" Workload="localhost-k8s-csi--node--driver--rg54z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a6680), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rg54z", "timestamp":"2025-09-13 10:26:53.457153297 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:53.534518 containerd[1559]: 2025-09-13 10:26:53.461 [INFO][4174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:26:53.534518 containerd[1559]: 2025-09-13 10:26:53.461 [INFO][4174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:26:53.534518 containerd[1559]: 2025-09-13 10:26:53.461 [INFO][4174] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:53.534518 containerd[1559]: 2025-09-13 10:26:53.468 [INFO][4174] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" host="localhost" Sep 13 10:26:53.534518 containerd[1559]: 2025-09-13 10:26:53.471 [INFO][4174] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:53.534518 containerd[1559]: 2025-09-13 10:26:53.476 [INFO][4174] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:53.534518 containerd[1559]: 2025-09-13 10:26:53.480 [INFO][4174] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:53.534518 containerd[1559]: 2025-09-13 10:26:53.482 [INFO][4174] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:53.534518 containerd[1559]: 2025-09-13 10:26:53.482 [INFO][4174] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" host="localhost" Sep 13 10:26:53.534733 
containerd[1559]: 2025-09-13 10:26:53.484 [INFO][4174] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92 Sep 13 10:26:53.534733 containerd[1559]: 2025-09-13 10:26:53.489 [INFO][4174] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" host="localhost" Sep 13 10:26:53.534733 containerd[1559]: 2025-09-13 10:26:53.501 [INFO][4174] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" host="localhost" Sep 13 10:26:53.534733 containerd[1559]: 2025-09-13 10:26:53.502 [INFO][4174] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" host="localhost" Sep 13 10:26:53.534733 containerd[1559]: 2025-09-13 10:26:53.502 [INFO][4174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 10:26:53.534733 containerd[1559]: 2025-09-13 10:26:53.502 [INFO][4174] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" HandleID="k8s-pod-network.31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" Workload="localhost-k8s-csi--node--driver--rg54z-eth0" Sep 13 10:26:53.534851 containerd[1559]: 2025-09-13 10:26:53.506 [INFO][4132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" Namespace="calico-system" Pod="csi-node-driver-rg54z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rg54z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rg54z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c833b9e8-250f-4f26-9d9e-7e0037032f7c", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rg54z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali798550e0b2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:53.534907 containerd[1559]: 2025-09-13 10:26:53.506 [INFO][4132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" Namespace="calico-system" Pod="csi-node-driver-rg54z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rg54z-eth0" Sep 13 10:26:53.534907 containerd[1559]: 2025-09-13 10:26:53.506 [INFO][4132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali798550e0b2e ContainerID="31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" Namespace="calico-system" Pod="csi-node-driver-rg54z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rg54z-eth0" Sep 13 10:26:53.534907 containerd[1559]: 2025-09-13 10:26:53.515 [INFO][4132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" Namespace="calico-system" Pod="csi-node-driver-rg54z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rg54z-eth0" Sep 13 10:26:53.534967 containerd[1559]: 2025-09-13 10:26:53.517 [INFO][4132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" Namespace="calico-system" Pod="csi-node-driver-rg54z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rg54z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rg54z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c833b9e8-250f-4f26-9d9e-7e0037032f7c", ResourceVersion:"718", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92", Pod:"csi-node-driver-rg54z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali798550e0b2e", MAC:"b2:67:87:ca:f4:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:53.535020 containerd[1559]: 2025-09-13 10:26:53.527 [INFO][4132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" Namespace="calico-system" Pod="csi-node-driver-rg54z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rg54z-eth0" Sep 13 10:26:53.586213 containerd[1559]: time="2025-09-13T10:26:53.585755613Z" level=info msg="connecting to shim 31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92" address="unix:///run/containerd/s/4a802b29814d494d65feffb400376b0d77369a711be1f23d42b31fff75fc73c5" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:53.609638 systemd-networkd[1479]: cali700c2a2bb14: Link UP Sep 13 10:26:53.614030 systemd-networkd[1479]: 
cali700c2a2bb14: Gained carrier Sep 13 10:26:53.637641 containerd[1559]: 2025-09-13 10:26:53.445 [INFO][4146] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0 calico-kube-controllers-7d6c9cc7cf- calico-system d70b8fb4-2c48-46ff-884d-b94402a77633 835 0 2025-09-13 10:26:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d6c9cc7cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7d6c9cc7cf-2xxx4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali700c2a2bb14 [] [] }} ContainerID="93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" Namespace="calico-system" Pod="calico-kube-controllers-7d6c9cc7cf-2xxx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-" Sep 13 10:26:53.637641 containerd[1559]: 2025-09-13 10:26:53.446 [INFO][4146] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" Namespace="calico-system" Pod="calico-kube-controllers-7d6c9cc7cf-2xxx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0" Sep 13 10:26:53.637641 containerd[1559]: 2025-09-13 10:26:53.481 [INFO][4184] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" HandleID="k8s-pod-network.93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" Workload="localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0" Sep 13 10:26:53.637854 containerd[1559]: 2025-09-13 10:26:53.481 [INFO][4184] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" HandleID="k8s-pod-network.93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" Workload="localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000130e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7d6c9cc7cf-2xxx4", "timestamp":"2025-09-13 10:26:53.481220028 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:53.637854 containerd[1559]: 2025-09-13 10:26:53.481 [INFO][4184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:26:53.637854 containerd[1559]: 2025-09-13 10:26:53.502 [INFO][4184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 10:26:53.637854 containerd[1559]: 2025-09-13 10:26:53.502 [INFO][4184] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:53.637854 containerd[1559]: 2025-09-13 10:26:53.569 [INFO][4184] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" host="localhost" Sep 13 10:26:53.637854 containerd[1559]: 2025-09-13 10:26:53.577 [INFO][4184] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:53.637854 containerd[1559]: 2025-09-13 10:26:53.582 [INFO][4184] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:53.637854 containerd[1559]: 2025-09-13 10:26:53.584 [INFO][4184] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:53.637854 containerd[1559]: 2025-09-13 10:26:53.587 [INFO][4184] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:53.637854 containerd[1559]: 2025-09-13 10:26:53.588 [INFO][4184] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" host="localhost" Sep 13 10:26:53.638097 containerd[1559]: 2025-09-13 10:26:53.589 [INFO][4184] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c Sep 13 10:26:53.638097 containerd[1559]: 2025-09-13 10:26:53.593 [INFO][4184] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" host="localhost" Sep 13 10:26:53.638097 containerd[1559]: 2025-09-13 10:26:53.600 [INFO][4184] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" host="localhost" Sep 13 10:26:53.638097 containerd[1559]: 2025-09-13 10:26:53.600 [INFO][4184] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" host="localhost" Sep 13 10:26:53.638097 containerd[1559]: 2025-09-13 10:26:53.601 [INFO][4184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:53.638097 containerd[1559]: 2025-09-13 10:26:53.601 [INFO][4184] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" HandleID="k8s-pod-network.93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" Workload="localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0" Sep 13 10:26:53.638371 containerd[1559]: 2025-09-13 10:26:53.606 [INFO][4146] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" Namespace="calico-system" Pod="calico-kube-controllers-7d6c9cc7cf-2xxx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0", GenerateName:"calico-kube-controllers-7d6c9cc7cf-", Namespace:"calico-system", SelfLink:"", UID:"d70b8fb4-2c48-46ff-884d-b94402a77633", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d6c9cc7cf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7d6c9cc7cf-2xxx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali700c2a2bb14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:53.638430 containerd[1559]: 2025-09-13 10:26:53.606 [INFO][4146] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" Namespace="calico-system" Pod="calico-kube-controllers-7d6c9cc7cf-2xxx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0" Sep 13 10:26:53.638430 containerd[1559]: 2025-09-13 10:26:53.606 [INFO][4146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali700c2a2bb14 ContainerID="93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" Namespace="calico-system" Pod="calico-kube-controllers-7d6c9cc7cf-2xxx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0" Sep 13 10:26:53.638430 containerd[1559]: 2025-09-13 10:26:53.613 [INFO][4146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" Namespace="calico-system" Pod="calico-kube-controllers-7d6c9cc7cf-2xxx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0" Sep 13 10:26:53.638495 containerd[1559]: 
2025-09-13 10:26:53.619 [INFO][4146] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" Namespace="calico-system" Pod="calico-kube-controllers-7d6c9cc7cf-2xxx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0", GenerateName:"calico-kube-controllers-7d6c9cc7cf-", Namespace:"calico-system", SelfLink:"", UID:"d70b8fb4-2c48-46ff-884d-b94402a77633", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d6c9cc7cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c", Pod:"calico-kube-controllers-7d6c9cc7cf-2xxx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali700c2a2bb14", MAC:"de:b3:ac:eb:4c:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:53.638556 
containerd[1559]: 2025-09-13 10:26:53.631 [INFO][4146] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" Namespace="calico-system" Pod="calico-kube-controllers-7d6c9cc7cf-2xxx4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6c9cc7cf--2xxx4-eth0" Sep 13 10:26:53.644472 systemd[1]: Started cri-containerd-31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92.scope - libcontainer container 31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92. Sep 13 10:26:53.659409 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:53.668377 containerd[1559]: time="2025-09-13T10:26:53.668240372Z" level=info msg="connecting to shim 93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c" address="unix:///run/containerd/s/7d109fe9b085f5c5684f41fe3de08ad856c35f60f366ca2d3952d1c2b35fefe2" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:53.682541 containerd[1559]: time="2025-09-13T10:26:53.682362155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rg54z,Uid:c833b9e8-250f-4f26-9d9e-7e0037032f7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92\"" Sep 13 10:26:53.706413 systemd[1]: Started cri-containerd-93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c.scope - libcontainer container 93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c. 
Sep 13 10:26:53.722329 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:53.761080 containerd[1559]: time="2025-09-13T10:26:53.760941179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6c9cc7cf-2xxx4,Uid:d70b8fb4-2c48-46ff-884d-b94402a77633,Namespace:calico-system,Attempt:0,} returns sandbox id \"93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c\"" Sep 13 10:26:53.948301 containerd[1559]: time="2025-09-13T10:26:53.948224767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f55jf,Uid:cd777e1a-a140-42bb-a95c-8804813542ec,Namespace:kube-system,Attempt:0,}" Sep 13 10:26:53.948707 containerd[1559]: time="2025-09-13T10:26:53.948224937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc8644f7-7bnjp,Uid:3ec9c671-359f-49ee-83bf-45173542d96c,Namespace:calico-apiserver,Attempt:0,}" Sep 13 10:26:53.951840 kubelet[2709]: I0913 10:26:53.951784 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed2381f7-f3c1-4802-8345-09791252d48b" path="/var/lib/kubelet/pods/ed2381f7-f3c1-4802-8345-09791252d48b/volumes" Sep 13 10:26:54.051764 systemd-networkd[1479]: cali2fe01e5565a: Link UP Sep 13 10:26:54.052137 systemd-networkd[1479]: cali2fe01e5565a: Gained carrier Sep 13 10:26:54.068499 containerd[1559]: 2025-09-13 10:26:53.986 [INFO][4363] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0 coredns-7c65d6cfc9- kube-system cd777e1a-a140-42bb-a95c-8804813542ec 834 0 2025-09-13 10:26:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-f55jf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
cali2fe01e5565a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f55jf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f55jf-" Sep 13 10:26:54.068499 containerd[1559]: 2025-09-13 10:26:53.987 [INFO][4363] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f55jf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0" Sep 13 10:26:54.068499 containerd[1559]: 2025-09-13 10:26:54.016 [INFO][4394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" HandleID="k8s-pod-network.6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" Workload="localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0" Sep 13 10:26:54.068771 containerd[1559]: 2025-09-13 10:26:54.016 [INFO][4394] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" HandleID="k8s-pod-network.6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" Workload="localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001393a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-f55jf", "timestamp":"2025-09-13 10:26:54.016016868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:54.068771 containerd[1559]: 2025-09-13 10:26:54.016 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 10:26:54.068771 containerd[1559]: 2025-09-13 10:26:54.016 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:26:54.068771 containerd[1559]: 2025-09-13 10:26:54.016 [INFO][4394] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:54.068771 containerd[1559]: 2025-09-13 10:26:54.022 [INFO][4394] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" host="localhost" Sep 13 10:26:54.068771 containerd[1559]: 2025-09-13 10:26:54.028 [INFO][4394] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:54.068771 containerd[1559]: 2025-09-13 10:26:54.032 [INFO][4394] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:54.068771 containerd[1559]: 2025-09-13 10:26:54.033 [INFO][4394] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:54.068771 containerd[1559]: 2025-09-13 10:26:54.035 [INFO][4394] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:54.068771 containerd[1559]: 2025-09-13 10:26:54.035 [INFO][4394] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" host="localhost" Sep 13 10:26:54.069003 containerd[1559]: 2025-09-13 10:26:54.037 [INFO][4394] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa Sep 13 10:26:54.069003 containerd[1559]: 2025-09-13 10:26:54.041 [INFO][4394] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" host="localhost" Sep 13 10:26:54.069003 containerd[1559]: 2025-09-13 10:26:54.045 [INFO][4394] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" host="localhost" Sep 13 10:26:54.069003 containerd[1559]: 2025-09-13 10:26:54.045 [INFO][4394] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" host="localhost" Sep 13 10:26:54.069003 containerd[1559]: 2025-09-13 10:26:54.045 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:54.069003 containerd[1559]: 2025-09-13 10:26:54.045 [INFO][4394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" HandleID="k8s-pod-network.6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" Workload="localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0" Sep 13 10:26:54.069143 containerd[1559]: 2025-09-13 10:26:54.048 [INFO][4363] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f55jf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cd777e1a-a140-42bb-a95c-8804813542ec", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-f55jf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fe01e5565a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:54.069222 containerd[1559]: 2025-09-13 10:26:54.048 [INFO][4363] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f55jf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0" Sep 13 10:26:54.069222 containerd[1559]: 2025-09-13 10:26:54.048 [INFO][4363] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fe01e5565a ContainerID="6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f55jf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0" Sep 13 10:26:54.069222 containerd[1559]: 2025-09-13 10:26:54.052 [INFO][4363] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f55jf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0" Sep 13 10:26:54.069334 containerd[1559]: 2025-09-13 10:26:54.052 [INFO][4363] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f55jf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cd777e1a-a140-42bb-a95c-8804813542ec", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa", Pod:"coredns-7c65d6cfc9-f55jf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fe01e5565a", MAC:"66:21:a8:fb:fe:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:54.069334 containerd[1559]: 2025-09-13 10:26:54.064 [INFO][4363] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f55jf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f55jf-eth0" Sep 13 10:26:54.155149 containerd[1559]: time="2025-09-13T10:26:54.155077114Z" level=info msg="connecting to shim 6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa" address="unix:///run/containerd/s/fb040e53edb526fe76732684a2db48adbb3ba8811c7a55b3867f8269b63769d0" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:54.157199 systemd-networkd[1479]: cali55ebe318f67: Link UP Sep 13 10:26:54.159461 systemd-networkd[1479]: cali55ebe318f67: Gained carrier Sep 13 10:26:54.175539 systemd-networkd[1479]: calib60e1f7d296: Gained IPv6LL Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:53.994 [INFO][4371] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0 calico-apiserver-6fcc8644f7- calico-apiserver 3ec9c671-359f-49ee-83bf-45173542d96c 832 0 2025-09-13 10:26:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fcc8644f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fcc8644f7-7bnjp eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali55ebe318f67 [] [] }} ContainerID="c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-7bnjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:53.994 [INFO][4371] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-7bnjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.020 [INFO][4400] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" HandleID="k8s-pod-network.c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" Workload="localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.020 [INFO][4400] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" HandleID="k8s-pod-network.c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" Workload="localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fcc8644f7-7bnjp", "timestamp":"2025-09-13 10:26:54.02051682 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.021 [INFO][4400] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.045 [INFO][4400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.045 [INFO][4400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.123 [INFO][4400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" host="localhost" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.128 [INFO][4400] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.132 [INFO][4400] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.134 [INFO][4400] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.135 [INFO][4400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.136 [INFO][4400] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" host="localhost" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.137 [INFO][4400] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775 Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.141 [INFO][4400] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" host="localhost" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 
10:26:54.148 [INFO][4400] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" host="localhost" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.148 [INFO][4400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" host="localhost" Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.148 [INFO][4400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:54.185632 containerd[1559]: 2025-09-13 10:26:54.148 [INFO][4400] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" HandleID="k8s-pod-network.c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" Workload="localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0" Sep 13 10:26:54.186298 containerd[1559]: 2025-09-13 10:26:54.153 [INFO][4371] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-7bnjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0", GenerateName:"calico-apiserver-6fcc8644f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ec9c671-359f-49ee-83bf-45173542d96c", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"6fcc8644f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fcc8644f7-7bnjp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55ebe318f67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:54.186298 containerd[1559]: 2025-09-13 10:26:54.153 [INFO][4371] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-7bnjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0" Sep 13 10:26:54.186298 containerd[1559]: 2025-09-13 10:26:54.153 [INFO][4371] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55ebe318f67 ContainerID="c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-7bnjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0" Sep 13 10:26:54.186298 containerd[1559]: 2025-09-13 10:26:54.161 [INFO][4371] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-7bnjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0" Sep 13 10:26:54.186298 
containerd[1559]: 2025-09-13 10:26:54.162 [INFO][4371] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-7bnjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0", GenerateName:"calico-apiserver-6fcc8644f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ec9c671-359f-49ee-83bf-45173542d96c", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcc8644f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775", Pod:"calico-apiserver-6fcc8644f7-7bnjp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55ebe318f67", MAC:"ba:fd:87:c9:f0:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:54.186298 containerd[1559]: 2025-09-13 
10:26:54.178 [INFO][4371] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-7bnjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--7bnjp-eth0" Sep 13 10:26:54.192427 systemd[1]: Started cri-containerd-6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa.scope - libcontainer container 6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa. Sep 13 10:26:54.208234 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:54.222735 containerd[1559]: time="2025-09-13T10:26:54.222677578Z" level=info msg="connecting to shim c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775" address="unix:///run/containerd/s/734b19c0986d36273bd95208cae0c41160faa19cc1c4bdde333b8458e571b1d0" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:54.242986 containerd[1559]: time="2025-09-13T10:26:54.242932066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f55jf,Uid:cd777e1a-a140-42bb-a95c-8804813542ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa\"" Sep 13 10:26:54.246582 containerd[1559]: time="2025-09-13T10:26:54.246531315Z" level=info msg="CreateContainer within sandbox \"6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 10:26:54.251405 systemd[1]: Started cri-containerd-c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775.scope - libcontainer container c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775. 
Sep 13 10:26:54.260306 containerd[1559]: time="2025-09-13T10:26:54.260235783Z" level=info msg="Container 66cbfaf8fd828609ec510327bf9025f9f34b91b647652e7204b7896eecb91db0: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:54.266925 containerd[1559]: time="2025-09-13T10:26:54.266695303Z" level=info msg="CreateContainer within sandbox \"6740b0c7d794ba776926993474f1d904c6955b0ba04b6b46b5b1fca1ed523cfa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66cbfaf8fd828609ec510327bf9025f9f34b91b647652e7204b7896eecb91db0\"" Sep 13 10:26:54.268065 containerd[1559]: time="2025-09-13T10:26:54.268030632Z" level=info msg="StartContainer for \"66cbfaf8fd828609ec510327bf9025f9f34b91b647652e7204b7896eecb91db0\"" Sep 13 10:26:54.268145 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:54.270179 containerd[1559]: time="2025-09-13T10:26:54.270136556Z" level=info msg="connecting to shim 66cbfaf8fd828609ec510327bf9025f9f34b91b647652e7204b7896eecb91db0" address="unix:///run/containerd/s/fb040e53edb526fe76732684a2db48adbb3ba8811c7a55b3867f8269b63769d0" protocol=ttrpc version=3 Sep 13 10:26:54.290565 systemd[1]: Started cri-containerd-66cbfaf8fd828609ec510327bf9025f9f34b91b647652e7204b7896eecb91db0.scope - libcontainer container 66cbfaf8fd828609ec510327bf9025f9f34b91b647652e7204b7896eecb91db0. 
Sep 13 10:26:54.300560 containerd[1559]: time="2025-09-13T10:26:54.300509517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc8644f7-7bnjp,Uid:3ec9c671-359f-49ee-83bf-45173542d96c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775\"" Sep 13 10:26:54.332340 containerd[1559]: time="2025-09-13T10:26:54.332127897Z" level=info msg="StartContainer for \"66cbfaf8fd828609ec510327bf9025f9f34b91b647652e7204b7896eecb91db0\" returns successfully" Sep 13 10:26:54.520076 containerd[1559]: time="2025-09-13T10:26:54.520013066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:54.520806 containerd[1559]: time="2025-09-13T10:26:54.520749468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 10:26:54.521928 containerd[1559]: time="2025-09-13T10:26:54.521898426Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:54.524518 containerd[1559]: time="2025-09-13T10:26:54.524460598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:54.524974 containerd[1559]: time="2025-09-13T10:26:54.524929168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.620726905s" Sep 13 10:26:54.524974 containerd[1559]: 
time="2025-09-13T10:26:54.524958964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 10:26:54.526124 containerd[1559]: time="2025-09-13T10:26:54.526002063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 10:26:54.527002 containerd[1559]: time="2025-09-13T10:26:54.526965603Z" level=info msg="CreateContainer within sandbox \"4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 10:26:54.545865 containerd[1559]: time="2025-09-13T10:26:54.545812908Z" level=info msg="Container 174148b8595960820735dfaac091cdec9fa37423854b0c6ddd22f841ecf877af: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:54.552972 containerd[1559]: time="2025-09-13T10:26:54.552914334Z" level=info msg="CreateContainer within sandbox \"4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"174148b8595960820735dfaac091cdec9fa37423854b0c6ddd22f841ecf877af\"" Sep 13 10:26:54.553739 containerd[1559]: time="2025-09-13T10:26:54.553416858Z" level=info msg="StartContainer for \"174148b8595960820735dfaac091cdec9fa37423854b0c6ddd22f841ecf877af\"" Sep 13 10:26:54.554567 containerd[1559]: time="2025-09-13T10:26:54.554520120Z" level=info msg="connecting to shim 174148b8595960820735dfaac091cdec9fa37423854b0c6ddd22f841ecf877af" address="unix:///run/containerd/s/490d23e927636ec9caf5f64121e31947763839e74257e2e720b6c2c9c810fdb3" protocol=ttrpc version=3 Sep 13 10:26:54.587414 systemd[1]: Started cri-containerd-174148b8595960820735dfaac091cdec9fa37423854b0c6ddd22f841ecf877af.scope - libcontainer container 174148b8595960820735dfaac091cdec9fa37423854b0c6ddd22f841ecf877af. 
Sep 13 10:26:54.633674 containerd[1559]: time="2025-09-13T10:26:54.633632022Z" level=info msg="StartContainer for \"174148b8595960820735dfaac091cdec9fa37423854b0c6ddd22f841ecf877af\" returns successfully" Sep 13 10:26:54.948544 containerd[1559]: time="2025-09-13T10:26:54.948415978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc8644f7-ql8zw,Uid:581d69a4-f057-48d2-8168-3614bebbafa4,Namespace:calico-apiserver,Attempt:0,}" Sep 13 10:26:54.948894 containerd[1559]: time="2025-09-13T10:26:54.948653124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4pmfw,Uid:f9821b4d-af50-4742-9887-0f52d95f4926,Namespace:calico-system,Attempt:0,}" Sep 13 10:26:54.984076 kubelet[2709]: I0913 10:26:54.983220 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f55jf" podStartSLOduration=40.983195214 podStartE2EDuration="40.983195214s" podCreationTimestamp="2025-09-13 10:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:26:54.968200794 +0000 UTC m=+47.110728895" watchObservedRunningTime="2025-09-13 10:26:54.983195214 +0000 UTC m=+47.125723315" Sep 13 10:26:55.096171 systemd-networkd[1479]: calib1830ae65a9: Link UP Sep 13 10:26:55.098330 systemd-networkd[1479]: calib1830ae65a9: Gained carrier Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.029 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0 calico-apiserver-6fcc8644f7- calico-apiserver 581d69a4-f057-48d2-8168-3614bebbafa4 824 0 2025-09-13 10:26:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fcc8644f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fcc8644f7-ql8zw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib1830ae65a9 [] [] }} ContainerID="33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-ql8zw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.030 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-ql8zw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.059 [INFO][4633] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" HandleID="k8s-pod-network.33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" Workload="localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.059 [INFO][4633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" HandleID="k8s-pod-network.33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" Workload="localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139410), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fcc8644f7-ql8zw", "timestamp":"2025-09-13 10:26:55.059597099 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.060 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.060 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.060 [INFO][4633] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.066 [INFO][4633] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" host="localhost" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.070 [INFO][4633] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.073 [INFO][4633] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.075 [INFO][4633] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.077 [INFO][4633] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.077 [INFO][4633] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" host="localhost" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.078 [INFO][4633] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.082 [INFO][4633] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" host="localhost" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.090 [INFO][4633] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" host="localhost" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.090 [INFO][4633] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" host="localhost" Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.090 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:55.110960 containerd[1559]: 2025-09-13 10:26:55.090 [INFO][4633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" HandleID="k8s-pod-network.33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" Workload="localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0" Sep 13 10:26:55.111562 containerd[1559]: 2025-09-13 10:26:55.092 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-ql8zw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0", GenerateName:"calico-apiserver-6fcc8644f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"581d69a4-f057-48d2-8168-3614bebbafa4", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 24, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcc8644f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fcc8644f7-ql8zw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1830ae65a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:55.111562 containerd[1559]: 2025-09-13 10:26:55.092 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-ql8zw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0" Sep 13 10:26:55.111562 containerd[1559]: 2025-09-13 10:26:55.092 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1830ae65a9 ContainerID="33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-ql8zw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0" Sep 13 10:26:55.111562 containerd[1559]: 2025-09-13 10:26:55.098 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-ql8zw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0" Sep 13 10:26:55.111562 containerd[1559]: 2025-09-13 10:26:55.099 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-ql8zw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0", GenerateName:"calico-apiserver-6fcc8644f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"581d69a4-f057-48d2-8168-3614bebbafa4", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fcc8644f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a", Pod:"calico-apiserver-6fcc8644f7-ql8zw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1830ae65a9", MAC:"32:55:d7:46:f1:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:55.111562 containerd[1559]: 2025-09-13 10:26:55.107 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" Namespace="calico-apiserver" Pod="calico-apiserver-6fcc8644f7-ql8zw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fcc8644f7--ql8zw-eth0" Sep 13 10:26:55.135303 containerd[1559]: time="2025-09-13T10:26:55.134940874Z" level=info msg="connecting to shim 33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a" address="unix:///run/containerd/s/7e963cda879165ab521f63443783d8764175e87d39446c25dd3158c3d8454315" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:55.164465 systemd[1]: Started cri-containerd-33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a.scope - libcontainer container 33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a. 
Sep 13 10:26:55.179817 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:55.198427 systemd-networkd[1479]: cali700c2a2bb14: Gained IPv6LL Sep 13 10:26:55.216679 systemd-networkd[1479]: calibc344f968ba: Link UP Sep 13 10:26:55.216893 systemd-networkd[1479]: calibc344f968ba: Gained carrier Sep 13 10:26:55.230183 containerd[1559]: time="2025-09-13T10:26:55.230138501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fcc8644f7-ql8zw,Uid:581d69a4-f057-48d2-8168-3614bebbafa4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a\"" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.033 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--4pmfw-eth0 goldmane-7988f88666- calico-system f9821b4d-af50-4742-9887-0f52d95f4926 827 0 2025-09-13 10:26:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-4pmfw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibc344f968ba [] [] }} ContainerID="2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" Namespace="calico-system" Pod="goldmane-7988f88666-4pmfw" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4pmfw-" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.033 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" Namespace="calico-system" Pod="goldmane-7988f88666-4pmfw" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4pmfw-eth0" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 
10:26:55.060 [INFO][4635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" HandleID="k8s-pod-network.2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" Workload="localhost-k8s-goldmane--7988f88666--4pmfw-eth0" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.060 [INFO][4635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" HandleID="k8s-pod-network.2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" Workload="localhost-k8s-goldmane--7988f88666--4pmfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b2e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-4pmfw", "timestamp":"2025-09-13 10:26:55.060686134 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.060 [INFO][4635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.090 [INFO][4635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.090 [INFO][4635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.167 [INFO][4635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" host="localhost" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.173 [INFO][4635] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.181 [INFO][4635] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.184 [INFO][4635] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.186 [INFO][4635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.186 [INFO][4635] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" host="localhost" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.188 [INFO][4635] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.192 [INFO][4635] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" host="localhost" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.200 [INFO][4635] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" host="localhost" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.200 [INFO][4635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" host="localhost" Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.202 [INFO][4635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:55.240822 containerd[1559]: 2025-09-13 10:26:55.202 [INFO][4635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" HandleID="k8s-pod-network.2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" Workload="localhost-k8s-goldmane--7988f88666--4pmfw-eth0" Sep 13 10:26:55.244468 containerd[1559]: 2025-09-13 10:26:55.214 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" Namespace="calico-system" Pod="goldmane-7988f88666-4pmfw" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4pmfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--4pmfw-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f9821b4d-af50-4742-9887-0f52d95f4926", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-4pmfw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibc344f968ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:55.244468 containerd[1559]: 2025-09-13 10:26:55.214 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" Namespace="calico-system" Pod="goldmane-7988f88666-4pmfw" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4pmfw-eth0" Sep 13 10:26:55.244468 containerd[1559]: 2025-09-13 10:26:55.214 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc344f968ba ContainerID="2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" Namespace="calico-system" Pod="goldmane-7988f88666-4pmfw" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4pmfw-eth0" Sep 13 10:26:55.244468 containerd[1559]: 2025-09-13 10:26:55.217 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" Namespace="calico-system" Pod="goldmane-7988f88666-4pmfw" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4pmfw-eth0" Sep 13 10:26:55.244468 containerd[1559]: 2025-09-13 10:26:55.217 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" Namespace="calico-system" 
Pod="goldmane-7988f88666-4pmfw" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4pmfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--4pmfw-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f9821b4d-af50-4742-9887-0f52d95f4926", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa", Pod:"goldmane-7988f88666-4pmfw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibc344f968ba", MAC:"56:c1:c1:a7:7b:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:55.244468 containerd[1559]: 2025-09-13 10:26:55.232 [INFO][4613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" Namespace="calico-system" Pod="goldmane-7988f88666-4pmfw" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4pmfw-eth0" Sep 13 10:26:55.263424 systemd-networkd[1479]: vxlan.calico: Gained IPv6LL Sep 13 
10:26:55.268716 containerd[1559]: time="2025-09-13T10:26:55.268641755Z" level=info msg="connecting to shim 2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa" address="unix:///run/containerd/s/640a32f71bcb6eb33aec01aaedf94f1001efcb5a1ffcddfd91452f9539cc156b" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:55.300437 systemd[1]: Started cri-containerd-2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa.scope - libcontainer container 2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa. Sep 13 10:26:55.322771 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:55.359817 containerd[1559]: time="2025-09-13T10:26:55.359771895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4pmfw,Uid:f9821b4d-af50-4742-9887-0f52d95f4926,Namespace:calico-system,Attempt:0,} returns sandbox id \"2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa\"" Sep 13 10:26:55.391252 systemd-networkd[1479]: cali798550e0b2e: Gained IPv6LL Sep 13 10:26:55.775831 systemd-networkd[1479]: cali55ebe318f67: Gained IPv6LL Sep 13 10:26:56.030483 systemd-networkd[1479]: cali2fe01e5565a: Gained IPv6LL Sep 13 10:26:56.144361 containerd[1559]: time="2025-09-13T10:26:56.144293758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:56.145192 containerd[1559]: time="2025-09-13T10:26:56.145152340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 10:26:56.146554 containerd[1559]: time="2025-09-13T10:26:56.146512704Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:56.152902 containerd[1559]: time="2025-09-13T10:26:56.152856076Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:56.153575 containerd[1559]: time="2025-09-13T10:26:56.153530852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.627481701s" Sep 13 10:26:56.153575 containerd[1559]: time="2025-09-13T10:26:56.153571940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 10:26:56.154875 containerd[1559]: time="2025-09-13T10:26:56.154837967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 10:26:56.156150 containerd[1559]: time="2025-09-13T10:26:56.155606892Z" level=info msg="CreateContainer within sandbox \"31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 10:26:56.170575 containerd[1559]: time="2025-09-13T10:26:56.170520846Z" level=info msg="Container d227c90535feab08418538c9cce027e7d5f5694484183fd5698dc66a800ef300: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:56.217224 containerd[1559]: time="2025-09-13T10:26:56.217175060Z" level=info msg="CreateContainer within sandbox \"31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d227c90535feab08418538c9cce027e7d5f5694484183fd5698dc66a800ef300\"" Sep 13 10:26:56.218981 containerd[1559]: time="2025-09-13T10:26:56.218034823Z" level=info msg="StartContainer for 
\"d227c90535feab08418538c9cce027e7d5f5694484183fd5698dc66a800ef300\"" Sep 13 10:26:56.219932 containerd[1559]: time="2025-09-13T10:26:56.219903342Z" level=info msg="connecting to shim d227c90535feab08418538c9cce027e7d5f5694484183fd5698dc66a800ef300" address="unix:///run/containerd/s/4a802b29814d494d65feffb400376b0d77369a711be1f23d42b31fff75fc73c5" protocol=ttrpc version=3 Sep 13 10:26:56.244462 systemd[1]: Started cri-containerd-d227c90535feab08418538c9cce027e7d5f5694484183fd5698dc66a800ef300.scope - libcontainer container d227c90535feab08418538c9cce027e7d5f5694484183fd5698dc66a800ef300. Sep 13 10:26:56.365347 containerd[1559]: time="2025-09-13T10:26:56.365289868Z" level=info msg="StartContainer for \"d227c90535feab08418538c9cce027e7d5f5694484183fd5698dc66a800ef300\" returns successfully" Sep 13 10:26:56.478446 systemd-networkd[1479]: calib1830ae65a9: Gained IPv6LL Sep 13 10:26:56.606537 systemd-networkd[1479]: calibc344f968ba: Gained IPv6LL Sep 13 10:26:56.948796 containerd[1559]: time="2025-09-13T10:26:56.948741891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tx4kf,Uid:c48e1894-4eef-4a72-a4e5-8eb63a8f6562,Namespace:kube-system,Attempt:0,}" Sep 13 10:26:57.045750 systemd-networkd[1479]: cali56ed3fb8865: Link UP Sep 13 10:26:57.046028 systemd-networkd[1479]: cali56ed3fb8865: Gained carrier Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:56.984 [INFO][4795] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0 coredns-7c65d6cfc9- kube-system c48e1894-4eef-4a72-a4e5-8eb63a8f6562 833 0 2025-09-13 10:26:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-tx4kf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali56ed3fb8865 [{dns UDP 53 0 } 
{dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tx4kf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tx4kf-" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:56.984 [INFO][4795] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tx4kf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.007 [INFO][4811] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" HandleID="k8s-pod-network.e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" Workload="localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.007 [INFO][4811] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" HandleID="k8s-pod-network.e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" Workload="localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-tx4kf", "timestamp":"2025-09-13 10:26:57.007651597 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.007 [INFO][4811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.007 [INFO][4811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.007 [INFO][4811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.014 [INFO][4811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" host="localhost" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.018 [INFO][4811] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.022 [INFO][4811] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.024 [INFO][4811] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.026 [INFO][4811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.026 [INFO][4811] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" host="localhost" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.027 [INFO][4811] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.031 [INFO][4811] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" host="localhost" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.037 [INFO][4811] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" host="localhost" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.037 [INFO][4811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" host="localhost" Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.037 [INFO][4811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:57.060751 containerd[1559]: 2025-09-13 10:26:57.037 [INFO][4811] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" HandleID="k8s-pod-network.e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" Workload="localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0" Sep 13 10:26:57.061300 containerd[1559]: 2025-09-13 10:26:57.041 [INFO][4795] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tx4kf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c48e1894-4eef-4a72-a4e5-8eb63a8f6562", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-tx4kf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56ed3fb8865", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:57.061300 containerd[1559]: 2025-09-13 10:26:57.041 [INFO][4795] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tx4kf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0" Sep 13 10:26:57.061300 containerd[1559]: 2025-09-13 10:26:57.041 [INFO][4795] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56ed3fb8865 ContainerID="e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tx4kf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0" Sep 13 10:26:57.061300 containerd[1559]: 2025-09-13 10:26:57.045 [INFO][4795] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tx4kf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0" Sep 13 10:26:57.061300 containerd[1559]: 2025-09-13 10:26:57.046 [INFO][4795] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tx4kf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c48e1894-4eef-4a72-a4e5-8eb63a8f6562", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f", Pod:"coredns-7c65d6cfc9-tx4kf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56ed3fb8865", MAC:"72:c7:0e:44:46:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:57.061300 containerd[1559]: 2025-09-13 10:26:57.057 [INFO][4795] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tx4kf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tx4kf-eth0" Sep 13 10:26:57.084112 containerd[1559]: time="2025-09-13T10:26:57.084023623Z" level=info msg="connecting to shim e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f" address="unix:///run/containerd/s/3c11122d5c5a17d904fd9ab6d4be0ff2bee510d00cb355e5130fea099691c3c2" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:57.121511 systemd[1]: Started cri-containerd-e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f.scope - libcontainer container e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f. 
Sep 13 10:26:57.135504 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:57.167689 containerd[1559]: time="2025-09-13T10:26:57.167626216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tx4kf,Uid:c48e1894-4eef-4a72-a4e5-8eb63a8f6562,Namespace:kube-system,Attempt:0,} returns sandbox id \"e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f\"" Sep 13 10:26:57.170353 containerd[1559]: time="2025-09-13T10:26:57.170318711Z" level=info msg="CreateContainer within sandbox \"e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 10:26:57.184528 containerd[1559]: time="2025-09-13T10:26:57.184484499Z" level=info msg="Container 9c5eb032b9f45b05d1ab17cb0c5b906673c09bc3e1786a3afc38f8ddc4af440c: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:57.192623 containerd[1559]: time="2025-09-13T10:26:57.192586631Z" level=info msg="CreateContainer within sandbox \"e026623ad1cdc51720c6e1a5ea7919b530b8fbf63825c2b66d58f73819adcd3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c5eb032b9f45b05d1ab17cb0c5b906673c09bc3e1786a3afc38f8ddc4af440c\"" Sep 13 10:26:57.193305 containerd[1559]: time="2025-09-13T10:26:57.193064308Z" level=info msg="StartContainer for \"9c5eb032b9f45b05d1ab17cb0c5b906673c09bc3e1786a3afc38f8ddc4af440c\"" Sep 13 10:26:57.193995 containerd[1559]: time="2025-09-13T10:26:57.193973885Z" level=info msg="connecting to shim 9c5eb032b9f45b05d1ab17cb0c5b906673c09bc3e1786a3afc38f8ddc4af440c" address="unix:///run/containerd/s/3c11122d5c5a17d904fd9ab6d4be0ff2bee510d00cb355e5130fea099691c3c2" protocol=ttrpc version=3 Sep 13 10:26:57.219404 systemd[1]: Started cri-containerd-9c5eb032b9f45b05d1ab17cb0c5b906673c09bc3e1786a3afc38f8ddc4af440c.scope - libcontainer container 9c5eb032b9f45b05d1ab17cb0c5b906673c09bc3e1786a3afc38f8ddc4af440c. 
Sep 13 10:26:57.250837 containerd[1559]: time="2025-09-13T10:26:57.250797929Z" level=info msg="StartContainer for \"9c5eb032b9f45b05d1ab17cb0c5b906673c09bc3e1786a3afc38f8ddc4af440c\" returns successfully" Sep 13 10:26:57.327450 systemd[1]: Started sshd@8-10.0.0.126:22-10.0.0.1:49926.service - OpenSSH per-connection server daemon (10.0.0.1:49926). Sep 13 10:26:57.382707 sshd[4909]: Accepted publickey for core from 10.0.0.1 port 49926 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:26:57.384606 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:26:57.389696 systemd-logind[1544]: New session 9 of user core. Sep 13 10:26:57.398420 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 10:26:57.542516 sshd[4912]: Connection closed by 10.0.0.1 port 49926 Sep 13 10:26:57.542935 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Sep 13 10:26:57.553447 systemd[1]: sshd@8-10.0.0.126:22-10.0.0.1:49926.service: Deactivated successfully. Sep 13 10:26:57.556130 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 10:26:57.557978 systemd-logind[1544]: Session 9 logged out. Waiting for processes to exit. Sep 13 10:26:57.560201 systemd-logind[1544]: Removed session 9. 
Sep 13 10:26:57.751750 kubelet[2709]: I0913 10:26:57.751665 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tx4kf" podStartSLOduration=43.75164047 podStartE2EDuration="43.75164047s" podCreationTimestamp="2025-09-13 10:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:26:57.74999444 +0000 UTC m=+49.892522541" watchObservedRunningTime="2025-09-13 10:26:57.75164047 +0000 UTC m=+49.894168572"
Sep 13 10:26:58.718947 systemd-networkd[1479]: cali56ed3fb8865: Gained IPv6LL
Sep 13 10:26:59.399692 containerd[1559]: time="2025-09-13T10:26:59.399633315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:26:59.403913 containerd[1559]: time="2025-09-13T10:26:59.403879336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 13 10:26:59.405317 containerd[1559]: time="2025-09-13T10:26:59.405254457Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:26:59.407950 containerd[1559]: time="2025-09-13T10:26:59.407897059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:26:59.408551 containerd[1559]: time="2025-09-13T10:26:59.408515780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.25364421s"
Sep 13 10:26:59.408609 containerd[1559]: time="2025-09-13T10:26:59.408550876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 13 10:26:59.409367 containerd[1559]: time="2025-09-13T10:26:59.409340849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 13 10:26:59.418083 containerd[1559]: time="2025-09-13T10:26:59.418018710Z" level=info msg="CreateContainer within sandbox \"93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 13 10:26:59.429295 containerd[1559]: time="2025-09-13T10:26:59.429238073Z" level=info msg="Container 9fc6542a150532963fcb141e5f5d1785a7a2465e1b44d9ead26db866934a0886: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:26:59.441084 containerd[1559]: time="2025-09-13T10:26:59.441028038Z" level=info msg="CreateContainer within sandbox \"93df3ce64df5d453d54f9cf38fed1a43dedcbed99bdb60e8f37c4612dabb786c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9fc6542a150532963fcb141e5f5d1785a7a2465e1b44d9ead26db866934a0886\""
Sep 13 10:26:59.441553 containerd[1559]: time="2025-09-13T10:26:59.441525120Z" level=info msg="StartContainer for \"9fc6542a150532963fcb141e5f5d1785a7a2465e1b44d9ead26db866934a0886\""
Sep 13 10:26:59.442624 containerd[1559]: time="2025-09-13T10:26:59.442598816Z" level=info msg="connecting to shim 9fc6542a150532963fcb141e5f5d1785a7a2465e1b44d9ead26db866934a0886" address="unix:///run/containerd/s/7d109fe9b085f5c5684f41fe3de08ad856c35f60f366ca2d3952d1c2b35fefe2" protocol=ttrpc version=3
Sep 13 10:26:59.463431 systemd[1]: Started cri-containerd-9fc6542a150532963fcb141e5f5d1785a7a2465e1b44d9ead26db866934a0886.scope - libcontainer container 9fc6542a150532963fcb141e5f5d1785a7a2465e1b44d9ead26db866934a0886.
Sep 13 10:26:59.515780 containerd[1559]: time="2025-09-13T10:26:59.515738137Z" level=info msg="StartContainer for \"9fc6542a150532963fcb141e5f5d1785a7a2465e1b44d9ead26db866934a0886\" returns successfully"
Sep 13 10:26:59.933929 kubelet[2709]: I0913 10:26:59.933847 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d6c9cc7cf-2xxx4" podStartSLOduration=27.299897082 podStartE2EDuration="32.933827777s" podCreationTimestamp="2025-09-13 10:26:27 +0000 UTC" firstStartedPulling="2025-09-13 10:26:53.775236949 +0000 UTC m=+45.917765050" lastFinishedPulling="2025-09-13 10:26:59.409167644 +0000 UTC m=+51.551695745" observedRunningTime="2025-09-13 10:26:59.933572597 +0000 UTC m=+52.076100698" watchObservedRunningTime="2025-09-13 10:26:59.933827777 +0000 UTC m=+52.076355878"
Sep 13 10:27:00.962260 containerd[1559]: time="2025-09-13T10:27:00.962209272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fc6542a150532963fcb141e5f5d1785a7a2465e1b44d9ead26db866934a0886\" id:\"9d8b196de007351fda61bce9f289ba56e07052fc6d54142c91cb027d1437b56b\" pid:5002 exited_at:{seconds:1757759220 nanos:961950707}"
Sep 13 10:27:02.558295 systemd[1]: Started sshd@9-10.0.0.126:22-10.0.0.1:33120.service - OpenSSH per-connection server daemon (10.0.0.1:33120).
Sep 13 10:27:02.809951 sshd[5015]: Accepted publickey for core from 10.0.0.1 port 33120 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:02.812065 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:02.820154 systemd-logind[1544]: New session 10 of user core.
Sep 13 10:27:02.825433 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 10:27:03.169283 sshd[5022]: Connection closed by 10.0.0.1 port 33120
Sep 13 10:27:03.169646 sshd-session[5015]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:03.174182 systemd[1]: sshd@9-10.0.0.126:22-10.0.0.1:33120.service: Deactivated successfully.
Sep 13 10:27:03.177039 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 10:27:03.178454 systemd-logind[1544]: Session 10 logged out. Waiting for processes to exit.
Sep 13 10:27:03.180987 systemd-logind[1544]: Removed session 10.
Sep 13 10:27:03.555374 containerd[1559]: time="2025-09-13T10:27:03.555322906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:03.556500 containerd[1559]: time="2025-09-13T10:27:03.556423332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 13 10:27:03.558156 containerd[1559]: time="2025-09-13T10:27:03.558100991Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:03.560916 containerd[1559]: time="2025-09-13T10:27:03.560889073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:03.561704 containerd[1559]: time="2025-09-13T10:27:03.561455006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.152081425s"
Sep 13 10:27:03.561704 containerd[1559]: time="2025-09-13T10:27:03.561501794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 13 10:27:03.565767 containerd[1559]: time="2025-09-13T10:27:03.565733377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 13 10:27:03.573326 containerd[1559]: time="2025-09-13T10:27:03.573253482Z" level=info msg="CreateContainer within sandbox \"c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 13 10:27:03.589659 containerd[1559]: time="2025-09-13T10:27:03.589602579Z" level=info msg="Container cbf3ce94ac9e506aed41c8c5feb73958140be53b98d8a746a75fcb79076e55d1: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:27:03.634575 containerd[1559]: time="2025-09-13T10:27:03.634405404Z" level=info msg="CreateContainer within sandbox \"c2a768e48d3d71eb4e34dec80d71989b5323189b918208980122b6c822cda775\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cbf3ce94ac9e506aed41c8c5feb73958140be53b98d8a746a75fcb79076e55d1\""
Sep 13 10:27:03.637003 containerd[1559]: time="2025-09-13T10:27:03.636961982Z" level=info msg="StartContainer for \"cbf3ce94ac9e506aed41c8c5feb73958140be53b98d8a746a75fcb79076e55d1\""
Sep 13 10:27:03.641445 containerd[1559]: time="2025-09-13T10:27:03.641396757Z" level=info msg="connecting to shim cbf3ce94ac9e506aed41c8c5feb73958140be53b98d8a746a75fcb79076e55d1" address="unix:///run/containerd/s/734b19c0986d36273bd95208cae0c41160faa19cc1c4bdde333b8458e571b1d0" protocol=ttrpc version=3
Sep 13 10:27:03.714545 systemd[1]: Started cri-containerd-cbf3ce94ac9e506aed41c8c5feb73958140be53b98d8a746a75fcb79076e55d1.scope - libcontainer container cbf3ce94ac9e506aed41c8c5feb73958140be53b98d8a746a75fcb79076e55d1.
Sep 13 10:27:03.771785 containerd[1559]: time="2025-09-13T10:27:03.771735381Z" level=info msg="StartContainer for \"cbf3ce94ac9e506aed41c8c5feb73958140be53b98d8a746a75fcb79076e55d1\" returns successfully"
Sep 13 10:27:04.168086 kubelet[2709]: I0913 10:27:04.167962 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fcc8644f7-7bnjp" podStartSLOduration=30.904801988 podStartE2EDuration="40.167943418s" podCreationTimestamp="2025-09-13 10:26:24 +0000 UTC" firstStartedPulling="2025-09-13 10:26:54.302285231 +0000 UTC m=+46.444813332" lastFinishedPulling="2025-09-13 10:27:03.565426661 +0000 UTC m=+55.707954762" observedRunningTime="2025-09-13 10:27:04.167778028 +0000 UTC m=+56.310306129" watchObservedRunningTime="2025-09-13 10:27:04.167943418 +0000 UTC m=+56.310471520"
Sep 13 10:27:05.009922 kubelet[2709]: I0913 10:27:05.009885 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 10:27:07.749125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121536676.mount: Deactivated successfully.
Sep 13 10:27:08.098144 containerd[1559]: time="2025-09-13T10:27:08.098064926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:08.099320 containerd[1559]: time="2025-09-13T10:27:08.099288232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 13 10:27:08.100897 containerd[1559]: time="2025-09-13T10:27:08.100839372Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:08.102869 containerd[1559]: time="2025-09-13T10:27:08.102835168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:08.103446 containerd[1559]: time="2025-09-13T10:27:08.103396001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.53763387s"
Sep 13 10:27:08.103446 containerd[1559]: time="2025-09-13T10:27:08.103442879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 13 10:27:08.105863 containerd[1559]: time="2025-09-13T10:27:08.105837723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 13 10:27:08.115789 containerd[1559]: time="2025-09-13T10:27:08.115735397Z" level=info msg="CreateContainer within sandbox \"4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 13 10:27:08.123598 containerd[1559]: time="2025-09-13T10:27:08.123556835Z" level=info msg="Container 07821badfd9ea9ffff8686cb80389694ac8e12858b5daf801c938aca0510e417: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:27:08.135303 containerd[1559]: time="2025-09-13T10:27:08.135251451Z" level=info msg="CreateContainer within sandbox \"4ec8e2d9f967811bff4c053783c0ac78de2cdfa68428bd0f4a3d957f1c51be01\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"07821badfd9ea9ffff8686cb80389694ac8e12858b5daf801c938aca0510e417\""
Sep 13 10:27:08.136251 containerd[1559]: time="2025-09-13T10:27:08.136178862Z" level=info msg="StartContainer for \"07821badfd9ea9ffff8686cb80389694ac8e12858b5daf801c938aca0510e417\""
Sep 13 10:27:08.138137 containerd[1559]: time="2025-09-13T10:27:08.138088516Z" level=info msg="connecting to shim 07821badfd9ea9ffff8686cb80389694ac8e12858b5daf801c938aca0510e417" address="unix:///run/containerd/s/490d23e927636ec9caf5f64121e31947763839e74257e2e720b6c2c9c810fdb3" protocol=ttrpc version=3
Sep 13 10:27:08.172580 systemd[1]: Started cri-containerd-07821badfd9ea9ffff8686cb80389694ac8e12858b5daf801c938aca0510e417.scope - libcontainer container 07821badfd9ea9ffff8686cb80389694ac8e12858b5daf801c938aca0510e417.
Sep 13 10:27:08.183941 systemd[1]: Started sshd@10-10.0.0.126:22-10.0.0.1:33136.service - OpenSSH per-connection server daemon (10.0.0.1:33136).
Sep 13 10:27:08.243855 containerd[1559]: time="2025-09-13T10:27:08.243758039Z" level=info msg="StartContainer for \"07821badfd9ea9ffff8686cb80389694ac8e12858b5daf801c938aca0510e417\" returns successfully"
Sep 13 10:27:08.262199 sshd[5119]: Accepted publickey for core from 10.0.0.1 port 33136 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:08.264253 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:08.271241 systemd-logind[1544]: New session 11 of user core.
Sep 13 10:27:08.275548 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 10:27:08.413766 sshd[5134]: Connection closed by 10.0.0.1 port 33136
Sep 13 10:27:08.414050 sshd-session[5119]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:08.423758 systemd[1]: sshd@10-10.0.0.126:22-10.0.0.1:33136.service: Deactivated successfully.
Sep 13 10:27:08.425998 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 10:27:08.426905 systemd-logind[1544]: Session 11 logged out. Waiting for processes to exit.
Sep 13 10:27:08.430414 systemd[1]: Started sshd@11-10.0.0.126:22-10.0.0.1:33148.service - OpenSSH per-connection server daemon (10.0.0.1:33148).
Sep 13 10:27:08.431018 systemd-logind[1544]: Removed session 11.
Sep 13 10:27:08.481882 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 33148 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:08.483598 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:08.488082 systemd-logind[1544]: New session 12 of user core.
Sep 13 10:27:08.502398 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 10:27:08.679978 containerd[1559]: time="2025-09-13T10:27:08.679825431Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:08.681174 containerd[1559]: time="2025-09-13T10:27:08.681128426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 13 10:27:08.683241 containerd[1559]: time="2025-09-13T10:27:08.683065552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 577.201019ms"
Sep 13 10:27:08.683241 containerd[1559]: time="2025-09-13T10:27:08.683101570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 13 10:27:08.684606 containerd[1559]: time="2025-09-13T10:27:08.684570487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 13 10:27:08.693380 containerd[1559]: time="2025-09-13T10:27:08.693171949Z" level=info msg="CreateContainer within sandbox \"33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 13 10:27:08.707304 containerd[1559]: time="2025-09-13T10:27:08.705781141Z" level=info msg="Container 41f640c050cace5ddcf87ca0e3f765a645601d587aeae94474bf61bfdfec0fba: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:27:08.716705 containerd[1559]: time="2025-09-13T10:27:08.716646882Z" level=info msg="CreateContainer within sandbox \"33cc86604b2e37af7e24356c0db8251a4afec135bc30f8819947dcc5e4efc40a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"41f640c050cace5ddcf87ca0e3f765a645601d587aeae94474bf61bfdfec0fba\""
Sep 13 10:27:08.717483 containerd[1559]: time="2025-09-13T10:27:08.717408211Z" level=info msg="StartContainer for \"41f640c050cace5ddcf87ca0e3f765a645601d587aeae94474bf61bfdfec0fba\""
Sep 13 10:27:08.720015 containerd[1559]: time="2025-09-13T10:27:08.719976360Z" level=info msg="connecting to shim 41f640c050cace5ddcf87ca0e3f765a645601d587aeae94474bf61bfdfec0fba" address="unix:///run/containerd/s/7e963cda879165ab521f63443783d8764175e87d39446c25dd3158c3d8454315" protocol=ttrpc version=3
Sep 13 10:27:08.744143 sshd[5152]: Connection closed by 10.0.0.1 port 33148
Sep 13 10:27:08.745352 sshd-session[5149]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:08.749828 systemd[1]: Started cri-containerd-41f640c050cace5ddcf87ca0e3f765a645601d587aeae94474bf61bfdfec0fba.scope - libcontainer container 41f640c050cace5ddcf87ca0e3f765a645601d587aeae94474bf61bfdfec0fba.
Sep 13 10:27:08.757765 systemd[1]: sshd@11-10.0.0.126:22-10.0.0.1:33148.service: Deactivated successfully.
Sep 13 10:27:08.761660 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 10:27:08.764521 systemd-logind[1544]: Session 12 logged out. Waiting for processes to exit.
Sep 13 10:27:08.769614 systemd[1]: Started sshd@12-10.0.0.126:22-10.0.0.1:33158.service - OpenSSH per-connection server daemon (10.0.0.1:33158).
Sep 13 10:27:08.772605 systemd-logind[1544]: Removed session 12.
Sep 13 10:27:08.826516 sshd[5181]: Accepted publickey for core from 10.0.0.1 port 33158 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:08.827892 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:08.828259 containerd[1559]: time="2025-09-13T10:27:08.828192093Z" level=info msg="StartContainer for \"41f640c050cace5ddcf87ca0e3f765a645601d587aeae94474bf61bfdfec0fba\" returns successfully"
Sep 13 10:27:08.833303 systemd-logind[1544]: New session 13 of user core.
Sep 13 10:27:08.844511 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 10:27:08.973586 sshd[5197]: Connection closed by 10.0.0.1 port 33158
Sep 13 10:27:08.975608 sshd-session[5181]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:08.980581 systemd[1]: sshd@12-10.0.0.126:22-10.0.0.1:33158.service: Deactivated successfully.
Sep 13 10:27:08.983296 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 10:27:08.984322 systemd-logind[1544]: Session 13 logged out. Waiting for processes to exit.
Sep 13 10:27:08.986181 systemd-logind[1544]: Removed session 13.
Sep 13 10:27:09.125302 kubelet[2709]: I0913 10:27:09.124957 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fcc8644f7-ql8zw" podStartSLOduration=31.674506128 podStartE2EDuration="45.124938723s" podCreationTimestamp="2025-09-13 10:26:24 +0000 UTC" firstStartedPulling="2025-09-13 10:26:55.233632052 +0000 UTC m=+47.376160163" lastFinishedPulling="2025-09-13 10:27:08.684064657 +0000 UTC m=+60.826592758" observedRunningTime="2025-09-13 10:27:09.124542629 +0000 UTC m=+61.267070740" watchObservedRunningTime="2025-09-13 10:27:09.124938723 +0000 UTC m=+61.267466824"
Sep 13 10:27:09.140178 kubelet[2709]: I0913 10:27:09.139849 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-769bc54c55-k8dnk" podStartSLOduration=1.93784428 podStartE2EDuration="17.139828585s" podCreationTimestamp="2025-09-13 10:26:52 +0000 UTC" firstStartedPulling="2025-09-13 10:26:52.903568623 +0000 UTC m=+45.046096724" lastFinishedPulling="2025-09-13 10:27:08.105552928 +0000 UTC m=+60.248081029" observedRunningTime="2025-09-13 10:27:09.139462958 +0000 UTC m=+61.281991059" watchObservedRunningTime="2025-09-13 10:27:09.139828585 +0000 UTC m=+61.282356686"
Sep 13 10:27:11.429535 containerd[1559]: time="2025-09-13T10:27:11.429448328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fc6542a150532963fcb141e5f5d1785a7a2465e1b44d9ead26db866934a0886\" id:\"924e5a9ddf60df819c918e1d087d45e0cec6b04efe095f391f8039256e00e64e\" pid:5233 exited_at:{seconds:1757759231 nanos:429157592}"
Sep 13 10:27:12.154022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount244278938.mount: Deactivated successfully.
Sep 13 10:27:12.828816 containerd[1559]: time="2025-09-13T10:27:12.828761886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:12.829725 containerd[1559]: time="2025-09-13T10:27:12.829691808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 13 10:27:12.830972 containerd[1559]: time="2025-09-13T10:27:12.830916464Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:12.833217 containerd[1559]: time="2025-09-13T10:27:12.833175237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:12.833816 containerd[1559]: time="2025-09-13T10:27:12.833779946Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.149176527s"
Sep 13 10:27:12.833816 containerd[1559]: time="2025-09-13T10:27:12.833811755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 13 10:27:12.836302 containerd[1559]: time="2025-09-13T10:27:12.835393444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 13 10:27:12.837304 containerd[1559]: time="2025-09-13T10:27:12.837211798Z" level=info msg="CreateContainer within sandbox \"2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 13 10:27:12.848058 containerd[1559]: time="2025-09-13T10:27:12.848013167Z" level=info msg="Container 202c04513b4115ded430edbc91799aaad883e4ecac7ebd4cf70b2ce2eaa0885c: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:27:12.855721 containerd[1559]: time="2025-09-13T10:27:12.855684443Z" level=info msg="CreateContainer within sandbox \"2139f6347a116c99c1353985b30169d132cb6902658c22586e643b9579a3f9aa\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"202c04513b4115ded430edbc91799aaad883e4ecac7ebd4cf70b2ce2eaa0885c\""
Sep 13 10:27:12.856227 containerd[1559]: time="2025-09-13T10:27:12.856192118Z" level=info msg="StartContainer for \"202c04513b4115ded430edbc91799aaad883e4ecac7ebd4cf70b2ce2eaa0885c\""
Sep 13 10:27:12.857233 containerd[1559]: time="2025-09-13T10:27:12.857204225Z" level=info msg="connecting to shim 202c04513b4115ded430edbc91799aaad883e4ecac7ebd4cf70b2ce2eaa0885c" address="unix:///run/containerd/s/640a32f71bcb6eb33aec01aaedf94f1001efcb5a1ffcddfd91452f9539cc156b" protocol=ttrpc version=3
Sep 13 10:27:12.879424 systemd[1]: Started cri-containerd-202c04513b4115ded430edbc91799aaad883e4ecac7ebd4cf70b2ce2eaa0885c.scope - libcontainer container 202c04513b4115ded430edbc91799aaad883e4ecac7ebd4cf70b2ce2eaa0885c.
Sep 13 10:27:12.928978 containerd[1559]: time="2025-09-13T10:27:12.928938543Z" level=info msg="StartContainer for \"202c04513b4115ded430edbc91799aaad883e4ecac7ebd4cf70b2ce2eaa0885c\" returns successfully"
Sep 13 10:27:13.131043 kubelet[2709]: I0913 10:27:13.130891 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-4pmfw" podStartSLOduration=29.657329769 podStartE2EDuration="47.130873093s" podCreationTimestamp="2025-09-13 10:26:26 +0000 UTC" firstStartedPulling="2025-09-13 10:26:55.361123703 +0000 UTC m=+47.503651804" lastFinishedPulling="2025-09-13 10:27:12.834667027 +0000 UTC m=+64.977195128" observedRunningTime="2025-09-13 10:27:13.12973504 +0000 UTC m=+65.272263141" watchObservedRunningTime="2025-09-13 10:27:13.130873093 +0000 UTC m=+65.273401194"
Sep 13 10:27:13.214664 containerd[1559]: time="2025-09-13T10:27:13.214611682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"202c04513b4115ded430edbc91799aaad883e4ecac7ebd4cf70b2ce2eaa0885c\" id:\"51bfb928b1d6cb5865882b01b675584e3fb00c7057a3a87e43e9cad65f2317ac\" pid:5307 exit_status:1 exited_at:{seconds:1757759233 nanos:214237087}"
Sep 13 10:27:13.989486 systemd[1]: Started sshd@13-10.0.0.126:22-10.0.0.1:45182.service - OpenSSH per-connection server daemon (10.0.0.1:45182).
Sep 13 10:27:14.061518 sshd[5328]: Accepted publickey for core from 10.0.0.1 port 45182 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:14.063315 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:14.068284 systemd-logind[1544]: New session 14 of user core.
Sep 13 10:27:14.079415 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 10:27:14.211114 containerd[1559]: time="2025-09-13T10:27:14.211060631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"202c04513b4115ded430edbc91799aaad883e4ecac7ebd4cf70b2ce2eaa0885c\" id:\"57ed4e13612c9eef66d11e3ffdb68ee3a223c3f21e6b12d3e85e96978b4746a6\" pid:5348 exit_status:1 exited_at:{seconds:1757759234 nanos:210718008}"
Sep 13 10:27:14.227184 sshd[5331]: Connection closed by 10.0.0.1 port 45182
Sep 13 10:27:14.228462 sshd-session[5328]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:14.233412 systemd[1]: sshd@13-10.0.0.126:22-10.0.0.1:45182.service: Deactivated successfully.
Sep 13 10:27:14.235902 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 10:27:14.236744 systemd-logind[1544]: Session 14 logged out. Waiting for processes to exit.
Sep 13 10:27:14.238035 systemd-logind[1544]: Removed session 14.
Sep 13 10:27:14.965755 containerd[1559]: time="2025-09-13T10:27:14.965674446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:14.966525 containerd[1559]: time="2025-09-13T10:27:14.966476858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 13 10:27:14.967716 containerd[1559]: time="2025-09-13T10:27:14.967677288Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:14.969750 containerd[1559]: time="2025-09-13T10:27:14.969710750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:27:14.970331 containerd[1559]: time="2025-09-13T10:27:14.970304187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.134874976s"
Sep 13 10:27:14.970394 containerd[1559]: time="2025-09-13T10:27:14.970335678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 13 10:27:14.973532 containerd[1559]: time="2025-09-13T10:27:14.973496520Z" level=info msg="CreateContainer within sandbox \"31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 10:27:14.983444 containerd[1559]: time="2025-09-13T10:27:14.983398747Z" level=info msg="Container f7807ca335fe97ed1c892827833d8b5162a8ea801d983b17b2ea458a52cce0bc: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:27:14.995068 containerd[1559]: time="2025-09-13T10:27:14.995020006Z" level=info msg="CreateContainer within sandbox \"31c49e8e6b84c215b015f7604d5e206b3c6864000e035f90b49a6dc39fb1de92\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f7807ca335fe97ed1c892827833d8b5162a8ea801d983b17b2ea458a52cce0bc\""
Sep 13 10:27:14.996044 containerd[1559]: time="2025-09-13T10:27:14.996016033Z" level=info msg="StartContainer for \"f7807ca335fe97ed1c892827833d8b5162a8ea801d983b17b2ea458a52cce0bc\""
Sep 13 10:27:14.998024 containerd[1559]: time="2025-09-13T10:27:14.997967335Z" level=info msg="connecting to shim f7807ca335fe97ed1c892827833d8b5162a8ea801d983b17b2ea458a52cce0bc" address="unix:///run/containerd/s/4a802b29814d494d65feffb400376b0d77369a711be1f23d42b31fff75fc73c5" protocol=ttrpc version=3
Sep 13 10:27:15.026334 systemd[1]: Started cri-containerd-f7807ca335fe97ed1c892827833d8b5162a8ea801d983b17b2ea458a52cce0bc.scope - libcontainer container f7807ca335fe97ed1c892827833d8b5162a8ea801d983b17b2ea458a52cce0bc.
Sep 13 10:27:15.068107 containerd[1559]: time="2025-09-13T10:27:15.068051037Z" level=info msg="TaskExit event in podsandbox handler container_id:\"194ce53ea006e4daecaebbdb25c00c0c6f13cd834a7ef357318a82dd89ecdbc6\" id:\"43d3bbe6d2237986691a8bd320351e67acad15afa96a6251335dc25c81a3e4d9\" pid:5386 exit_status:1 exited_at:{seconds:1757759235 nanos:67694088}"
Sep 13 10:27:15.178141 containerd[1559]: time="2025-09-13T10:27:15.178097328Z" level=info msg="StartContainer for \"f7807ca335fe97ed1c892827833d8b5162a8ea801d983b17b2ea458a52cce0bc\" returns successfully"
Sep 13 10:27:16.066875 kubelet[2709]: I0913 10:27:16.066818 2709 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 10:27:16.066875 kubelet[2709]: I0913 10:27:16.066871 2709 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 10:27:16.193895 kubelet[2709]: I0913 10:27:16.193818 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rg54z" podStartSLOduration=27.906492829 podStartE2EDuration="49.193796816s" podCreationTimestamp="2025-09-13 10:26:27 +0000 UTC" firstStartedPulling="2025-09-13 10:26:53.683797281 +0000 UTC m=+45.826325382" lastFinishedPulling="2025-09-13 10:27:14.971101268 +0000 UTC m=+67.113629369" observedRunningTime="2025-09-13 10:27:16.192346737 +0000 UTC m=+68.334874858" watchObservedRunningTime="2025-09-13 10:27:16.193796816 +0000 UTC m=+68.336324917"
Sep 13 10:27:19.240340 systemd[1]: Started sshd@14-10.0.0.126:22-10.0.0.1:45198.service - OpenSSH per-connection server daemon (10.0.0.1:45198).
Sep 13 10:27:19.292837 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 45198 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:19.294170 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:19.298602 systemd-logind[1544]: New session 15 of user core.
Sep 13 10:27:19.313431 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 10:27:19.421583 sshd[5443]: Connection closed by 10.0.0.1 port 45198
Sep 13 10:27:19.421915 sshd-session[5440]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:19.425920 systemd[1]: sshd@14-10.0.0.126:22-10.0.0.1:45198.service: Deactivated successfully.
Sep 13 10:27:19.428081 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 10:27:19.428919 systemd-logind[1544]: Session 15 logged out. Waiting for processes to exit.
Sep 13 10:27:19.430166 systemd-logind[1544]: Removed session 15.
Sep 13 10:27:24.438434 systemd[1]: Started sshd@15-10.0.0.126:22-10.0.0.1:39936.service - OpenSSH per-connection server daemon (10.0.0.1:39936).
Sep 13 10:27:24.493855 sshd[5456]: Accepted publickey for core from 10.0.0.1 port 39936 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:24.495700 sshd-session[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:24.500197 systemd-logind[1544]: New session 16 of user core.
Sep 13 10:27:24.513521 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 10:27:24.631672 sshd[5459]: Connection closed by 10.0.0.1 port 39936
Sep 13 10:27:24.632078 sshd-session[5456]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:24.636423 systemd[1]: sshd@15-10.0.0.126:22-10.0.0.1:39936.service: Deactivated successfully.
Sep 13 10:27:24.638766 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 10:27:24.639646 systemd-logind[1544]: Session 16 logged out. Waiting for processes to exit.
Sep 13 10:27:24.641001 systemd-logind[1544]: Removed session 16.
Sep 13 10:27:26.814949 kubelet[2709]: I0913 10:27:26.814886 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 10:27:29.654128 systemd[1]: Started sshd@16-10.0.0.126:22-10.0.0.1:39940.service - OpenSSH per-connection server daemon (10.0.0.1:39940).
Sep 13 10:27:29.723427 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 39940 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:29.726338 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:29.731768 systemd-logind[1544]: New session 17 of user core.
Sep 13 10:27:29.743653 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 10:27:29.913801 sshd[5478]: Connection closed by 10.0.0.1 port 39940
Sep 13 10:27:29.914105 sshd-session[5475]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:29.923089 systemd[1]: sshd@16-10.0.0.126:22-10.0.0.1:39940.service: Deactivated successfully.
Sep 13 10:27:29.925169 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 10:27:29.926085 systemd-logind[1544]: Session 17 logged out. Waiting for processes to exit.
Sep 13 10:27:29.929688 systemd[1]: Started sshd@17-10.0.0.126:22-10.0.0.1:60160.service - OpenSSH per-connection server daemon (10.0.0.1:60160).
Sep 13 10:27:29.930725 systemd-logind[1544]: Removed session 17.
Sep 13 10:27:29.984585 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 60160 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:29.986928 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:29.992377 systemd-logind[1544]: New session 18 of user core.
Sep 13 10:27:30.002483 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 10:27:30.316937 sshd[5495]: Connection closed by 10.0.0.1 port 60160
Sep 13 10:27:30.317416 sshd-session[5492]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:30.332203 systemd[1]: sshd@17-10.0.0.126:22-10.0.0.1:60160.service: Deactivated successfully.
Sep 13 10:27:30.334655 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 10:27:30.335618 systemd-logind[1544]: Session 18 logged out. Waiting for processes to exit.
Sep 13 10:27:30.340066 systemd[1]: Started sshd@18-10.0.0.126:22-10.0.0.1:60166.service - OpenSSH per-connection server daemon (10.0.0.1:60166).
Sep 13 10:27:30.340976 systemd-logind[1544]: Removed session 18.
Sep 13 10:27:30.405146 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 60166 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:30.406751 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:30.411316 systemd-logind[1544]: New session 19 of user core.
Sep 13 10:27:30.419411 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 10:27:32.071282 sshd[5510]: Connection closed by 10.0.0.1 port 60166
Sep 13 10:27:32.072044 sshd-session[5507]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:32.084187 systemd[1]: sshd@18-10.0.0.126:22-10.0.0.1:60166.service: Deactivated successfully.
Sep 13 10:27:32.087483 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 10:27:32.087736 systemd[1]: session-19.scope: Consumed 651ms CPU time, 77.6M memory peak.
Sep 13 10:27:32.090596 systemd-logind[1544]: Session 19 logged out. Waiting for processes to exit.
Sep 13 10:27:32.094661 systemd[1]: Started sshd@19-10.0.0.126:22-10.0.0.1:60172.service - OpenSSH per-connection server daemon (10.0.0.1:60172).
Sep 13 10:27:32.099353 systemd-logind[1544]: Removed session 19.
Sep 13 10:27:32.144939 sshd[5529]: Accepted publickey for core from 10.0.0.1 port 60172 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:32.146396 sshd-session[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:32.151232 systemd-logind[1544]: New session 20 of user core.
Sep 13 10:27:32.161488 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 10:27:32.575831 sshd[5533]: Connection closed by 10.0.0.1 port 60172
Sep 13 10:27:32.576244 sshd-session[5529]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:32.590877 systemd[1]: sshd@19-10.0.0.126:22-10.0.0.1:60172.service: Deactivated successfully.
Sep 13 10:27:32.593781 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 10:27:32.595296 systemd-logind[1544]: Session 20 logged out. Waiting for processes to exit.
Sep 13 10:27:32.598983 systemd[1]: Started sshd@20-10.0.0.126:22-10.0.0.1:60180.service - OpenSSH per-connection server daemon (10.0.0.1:60180).
Sep 13 10:27:32.600247 systemd-logind[1544]: Removed session 20.
Sep 13 10:27:32.649797 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 60180 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:32.651776 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:32.656760 systemd-logind[1544]: New session 21 of user core.
Sep 13 10:27:32.662432 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 10:27:32.780010 sshd[5547]: Connection closed by 10.0.0.1 port 60180
Sep 13 10:27:32.780391 sshd-session[5544]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:32.784571 systemd[1]: sshd@20-10.0.0.126:22-10.0.0.1:60180.service: Deactivated successfully.
Sep 13 10:27:32.787216 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 10:27:32.788157 systemd-logind[1544]: Session 21 logged out. Waiting for processes to exit.
Sep 13 10:27:32.789957 systemd-logind[1544]: Removed session 21.
Sep 13 10:27:37.798528 systemd[1]: Started sshd@21-10.0.0.126:22-10.0.0.1:60192.service - OpenSSH per-connection server daemon (10.0.0.1:60192).
Sep 13 10:27:37.849929 sshd[5569]: Accepted publickey for core from 10.0.0.1 port 60192 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:37.851512 sshd-session[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:37.856626 systemd-logind[1544]: New session 22 of user core.
Sep 13 10:27:37.873552 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 10:27:37.987560 sshd[5572]: Connection closed by 10.0.0.1 port 60192
Sep 13 10:27:37.987943 sshd-session[5569]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:37.992566 systemd-logind[1544]: Session 22 logged out. Waiting for processes to exit.
Sep 13 10:27:37.992996 systemd[1]: sshd@21-10.0.0.126:22-10.0.0.1:60192.service: Deactivated successfully.
Sep 13 10:27:37.995414 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 10:27:37.997630 systemd-logind[1544]: Removed session 22.
Sep 13 10:27:41.415768 containerd[1559]: time="2025-09-13T10:27:41.415700887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fc6542a150532963fcb141e5f5d1785a7a2465e1b44d9ead26db866934a0886\" id:\"b5ffd06afe8d91f9a2e56a0222ba86fa55e558f7fd895eef7c9a1ba56efc6b4b\" pid:5619 exited_at:{seconds:1757759261 nanos:413899768}"
Sep 13 10:27:41.449512 containerd[1559]: time="2025-09-13T10:27:41.449462661Z" level=info msg="TaskExit event in podsandbox handler container_id:\"202c04513b4115ded430edbc91799aaad883e4ecac7ebd4cf70b2ce2eaa0885c\" id:\"de5da7e5d07898b48dc5b330e2f9003bf39e940e03d5d028d7ae688386703395\" pid:5608 exited_at:{seconds:1757759261 nanos:449080653}"
Sep 13 10:27:43.006980 systemd[1]: Started sshd@22-10.0.0.126:22-10.0.0.1:40066.service - OpenSSH per-connection server daemon (10.0.0.1:40066).
Sep 13 10:27:43.076715 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 40066 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:43.078384 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:43.083678 systemd-logind[1544]: New session 23 of user core.
Sep 13 10:27:43.105464 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 10:27:43.231443 sshd[5641]: Connection closed by 10.0.0.1 port 40066
Sep 13 10:27:43.231785 sshd-session[5638]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:43.235226 systemd[1]: sshd@22-10.0.0.126:22-10.0.0.1:40066.service: Deactivated successfully.
Sep 13 10:27:43.237668 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 10:27:43.239558 systemd-logind[1544]: Session 23 logged out. Waiting for processes to exit.
Sep 13 10:27:43.241484 systemd-logind[1544]: Removed session 23.
Sep 13 10:27:44.858641 containerd[1559]: time="2025-09-13T10:27:44.858569976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"194ce53ea006e4daecaebbdb25c00c0c6f13cd834a7ef357318a82dd89ecdbc6\" id:\"8c069df1b6fcb98bd6db0a87f2c82f06dba88511b45b5c0efb74bdc7512ffe4f\" pid:5666 exited_at:{seconds:1757759264 nanos:858248435}"
Sep 13 10:27:48.249468 systemd[1]: Started sshd@23-10.0.0.126:22-10.0.0.1:40110.service - OpenSSH per-connection server daemon (10.0.0.1:40110).
Sep 13 10:27:48.315767 sshd[5681]: Accepted publickey for core from 10.0.0.1 port 40110 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:48.317247 sshd-session[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:48.321800 systemd-logind[1544]: New session 24 of user core.
Sep 13 10:27:48.331423 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 10:27:48.528514 sshd[5684]: Connection closed by 10.0.0.1 port 40110
Sep 13 10:27:48.528762 sshd-session[5681]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:48.532947 systemd[1]: sshd@23-10.0.0.126:22-10.0.0.1:40110.service: Deactivated successfully.
Sep 13 10:27:48.535585 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 10:27:48.536436 systemd-logind[1544]: Session 24 logged out. Waiting for processes to exit.
Sep 13 10:27:48.537968 systemd-logind[1544]: Removed session 24.
Sep 13 10:27:48.867672 containerd[1559]: time="2025-09-13T10:27:48.867610220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"202c04513b4115ded430edbc91799aaad883e4ecac7ebd4cf70b2ce2eaa0885c\" id:\"7fe7331dad4e94f100873af0840f64c397adb35c54fc34685c9962a9a9766543\" pid:5709 exited_at:{seconds:1757759268 nanos:867203357}"
Sep 13 10:27:53.541615 systemd[1]: Started sshd@24-10.0.0.126:22-10.0.0.1:33256.service - OpenSSH per-connection server daemon (10.0.0.1:33256).
Sep 13 10:27:53.618094 sshd[5722]: Accepted publickey for core from 10.0.0.1 port 33256 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:53.619924 sshd-session[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:53.624924 systemd-logind[1544]: New session 25 of user core.
Sep 13 10:27:53.631442 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 10:27:53.842862 sshd[5726]: Connection closed by 10.0.0.1 port 33256
Sep 13 10:27:53.844378 sshd-session[5722]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:53.849752 systemd-logind[1544]: Session 25 logged out. Waiting for processes to exit.
Sep 13 10:27:53.850829 systemd[1]: sshd@24-10.0.0.126:22-10.0.0.1:33256.service: Deactivated successfully.
Sep 13 10:27:53.853185 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 10:27:53.856379 systemd-logind[1544]: Removed session 25.