Sep 5 00:28:30.872987 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:12:48 -00 2025
Sep 5 00:28:30.873015 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 00:28:30.873024 kernel: BIOS-provided physical RAM map:
Sep 5 00:28:30.873031 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 5 00:28:30.873038 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 5 00:28:30.873044 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 5 00:28:30.873052 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 5 00:28:30.873059 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 5 00:28:30.873070 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 5 00:28:30.873077 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 5 00:28:30.873084 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 00:28:30.873090 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 5 00:28:30.873097 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 00:28:30.873104 kernel: NX (Execute Disable) protection: active
Sep 5 00:28:30.873115 kernel: APIC: Static calls initialized
Sep 5 00:28:30.873122 kernel: SMBIOS 2.8 present.
Sep 5 00:28:30.873132 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 5 00:28:30.873139 kernel: DMI: Memory slots populated: 1/1
Sep 5 00:28:30.873146 kernel: Hypervisor detected: KVM
Sep 5 00:28:30.873154 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 00:28:30.873161 kernel: kvm-clock: using sched offset of 5099256696 cycles
Sep 5 00:28:30.873169 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 00:28:30.873176 kernel: tsc: Detected 2794.748 MHz processor
Sep 5 00:28:30.873186 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 00:28:30.873194 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 00:28:30.873201 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 5 00:28:30.873209 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 5 00:28:30.873216 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 00:28:30.873223 kernel: Using GB pages for direct mapping
Sep 5 00:28:30.873231 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:28:30.873238 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 5 00:28:30.873245 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:28:30.873255 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:28:30.873305 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:28:30.873312 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 5 00:28:30.873320 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:28:30.873327 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:28:30.873334 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:28:30.873350 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:28:30.873357 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 5 00:28:30.873371 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 5 00:28:30.873380 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 5 00:28:30.873390 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 5 00:28:30.873399 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 5 00:28:30.873408 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 5 00:28:30.873417 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 5 00:28:30.873430 kernel: No NUMA configuration found
Sep 5 00:28:30.873439 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 5 00:28:30.873448 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 5 00:28:30.873458 kernel: Zone ranges:
Sep 5 00:28:30.873468 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 00:28:30.873482 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 5 00:28:30.873493 kernel: Normal empty
Sep 5 00:28:30.873519 kernel: Device empty
Sep 5 00:28:30.873531 kernel: Movable zone start for each node
Sep 5 00:28:30.873540 kernel: Early memory node ranges
Sep 5 00:28:30.873576 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 5 00:28:30.873584 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 5 00:28:30.873592 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 5 00:28:30.873600 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:28:30.873607 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 5 00:28:30.873632 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 5 00:28:30.873641 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 00:28:30.873652 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 00:28:30.873660 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 00:28:30.873672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 00:28:30.873679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 00:28:30.873689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 00:28:30.873697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 00:28:30.873704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 00:28:30.873712 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 00:28:30.873719 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 00:28:30.873727 kernel: TSC deadline timer available
Sep 5 00:28:30.873735 kernel: CPU topo: Max. logical packages: 1
Sep 5 00:28:30.873745 kernel: CPU topo: Max. logical dies: 1
Sep 5 00:28:30.873752 kernel: CPU topo: Max. dies per package: 1
Sep 5 00:28:30.873760 kernel: CPU topo: Max. threads per core: 1
Sep 5 00:28:30.873767 kernel: CPU topo: Num. cores per package: 4
Sep 5 00:28:30.873775 kernel: CPU topo: Num. threads per package: 4
Sep 5 00:28:30.873782 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 5 00:28:30.873790 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 00:28:30.873797 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 00:28:30.873805 kernel: kvm-guest: setup PV sched yield
Sep 5 00:28:30.873812 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 5 00:28:30.873822 kernel: Booting paravirtualized kernel on KVM
Sep 5 00:28:30.873830 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 00:28:30.873838 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 00:28:30.873845 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 5 00:28:30.873853 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 5 00:28:30.873861 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 00:28:30.873868 kernel: kvm-guest: PV spinlocks enabled
Sep 5 00:28:30.873875 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 00:28:30.873884 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 00:28:30.873895 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:28:30.873902 kernel: random: crng init done
Sep 5 00:28:30.873910 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:28:30.873917 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:28:30.873925 kernel: Fallback order for Node 0: 0
Sep 5 00:28:30.873932 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 5 00:28:30.873940 kernel: Policy zone: DMA32
Sep 5 00:28:30.873947 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:28:30.873957 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:28:30.873964 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 5 00:28:30.873972 kernel: ftrace: allocated 157 pages with 5 groups
Sep 5 00:28:30.873979 kernel: Dynamic Preempt: voluntary
Sep 5 00:28:30.873987 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:28:30.874006 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:28:30.874014 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:28:30.874021 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:28:30.874033 kernel: Rude variant of Tasks RCU enabled.
Sep 5 00:28:30.874044 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:28:30.874053 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:28:30.874063 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:28:30.874070 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:28:30.874078 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:28:30.874086 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:28:30.874093 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 00:28:30.874101 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:28:30.874118 kernel: Console: colour VGA+ 80x25
Sep 5 00:28:30.874126 kernel: printk: legacy console [ttyS0] enabled
Sep 5 00:28:30.874133 kernel: ACPI: Core revision 20240827
Sep 5 00:28:30.874141 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 00:28:30.874152 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 00:28:30.874159 kernel: x2apic enabled
Sep 5 00:28:30.874167 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 00:28:30.874177 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 00:28:30.874186 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 00:28:30.874195 kernel: kvm-guest: setup PV IPIs
Sep 5 00:28:30.874203 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 00:28:30.874211 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 00:28:30.874219 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 5 00:28:30.874227 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 00:28:30.874235 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 00:28:30.874243 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 00:28:30.874251 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 00:28:30.874274 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 00:28:30.874282 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 00:28:30.874290 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 00:28:30.874302 kernel: active return thunk: retbleed_return_thunk
Sep 5 00:28:30.874310 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 00:28:30.874318 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 00:28:30.874326 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 00:28:30.874333 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 00:28:30.874349 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 00:28:30.874360 kernel: active return thunk: srso_return_thunk
Sep 5 00:28:30.874368 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 00:28:30.874376 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 00:28:30.874384 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 00:28:30.874392 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 00:28:30.874400 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 00:28:30.874408 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 00:28:30.874416 kernel: Freeing SMP alternatives memory: 32K
Sep 5 00:28:30.874423 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:28:30.874433 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 5 00:28:30.874441 kernel: landlock: Up and running.
Sep 5 00:28:30.874449 kernel: SELinux: Initializing.
Sep 5 00:28:30.874459 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:28:30.874467 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:28:30.874475 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 00:28:30.874483 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 00:28:30.874491 kernel: ... version: 0
Sep 5 00:28:30.874499 kernel: ... bit width: 48
Sep 5 00:28:30.874509 kernel: ... generic registers: 6
Sep 5 00:28:30.874517 kernel: ... value mask: 0000ffffffffffff
Sep 5 00:28:30.874524 kernel: ... max period: 00007fffffffffff
Sep 5 00:28:30.874532 kernel: ... fixed-purpose events: 0
Sep 5 00:28:30.874540 kernel: ... event mask: 000000000000003f
Sep 5 00:28:30.874548 kernel: signal: max sigframe size: 1776
Sep 5 00:28:30.874556 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:28:30.874564 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:28:30.874571 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 5 00:28:30.874582 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:28:30.874589 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 00:28:30.874597 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 00:28:30.874605 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:28:30.874620 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 5 00:28:30.874635 kernel: Memory: 2428920K/2571752K available (14336K kernel code, 2428K rwdata, 9956K rodata, 54044K init, 2924K bss, 136904K reserved, 0K cma-reserved)
Sep 5 00:28:30.874650 kernel: devtmpfs: initialized
Sep 5 00:28:30.874666 kernel: x86/mm: Memory block size: 128MB
Sep 5 00:28:30.874677 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:28:30.874688 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:28:30.874695 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:28:30.874703 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:28:30.874711 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:28:30.874719 kernel: audit: type=2000 audit(1757032107.777:1): state=initialized audit_enabled=0 res=1
Sep 5 00:28:30.874727 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:28:30.874735 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 00:28:30.874742 kernel: cpuidle: using governor menu
Sep 5 00:28:30.874750 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:28:30.874760 kernel: dca service started, version 1.12.1
Sep 5 00:28:30.874768 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 5 00:28:30.874776 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 5 00:28:30.874784 kernel: PCI: Using configuration type 1 for base access
Sep 5 00:28:30.874792 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 00:28:30.874799 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:28:30.874807 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:28:30.874815 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:28:30.874823 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:28:30.874833 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:28:30.874840 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:28:30.874848 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:28:30.874856 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:28:30.874864 kernel: ACPI: Interpreter enabled
Sep 5 00:28:30.874872 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 00:28:30.874880 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 00:28:30.874888 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 00:28:30.874896 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 00:28:30.874906 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 00:28:30.874913 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:28:30.875204 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:28:30.875366 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 00:28:30.875491 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 00:28:30.875502 kernel: PCI host bridge to bus 0000:00
Sep 5 00:28:30.875641 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 00:28:30.875760 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 00:28:30.875871 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 00:28:30.875981 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 5 00:28:30.876091 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 5 00:28:30.876315 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 5 00:28:30.876445 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:28:30.876607 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 5 00:28:30.876752 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 5 00:28:30.876876 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 5 00:28:30.876997 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 5 00:28:30.877117 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 5 00:28:30.877293 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 00:28:30.877503 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 5 00:28:30.877677 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 5 00:28:30.877821 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 5 00:28:30.877943 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 5 00:28:30.878091 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 5 00:28:30.878214 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 5 00:28:30.878373 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 5 00:28:30.878496 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 5 00:28:30.878637 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 5 00:28:30.878772 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 5 00:28:30.878900 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 5 00:28:30.879020 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 5 00:28:30.879145 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 5 00:28:30.879368 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 5 00:28:30.879495 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 00:28:30.879636 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 5 00:28:30.879758 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 5 00:28:30.879877 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 5 00:28:30.880021 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 5 00:28:30.880145 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 5 00:28:30.880155 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 00:28:30.880167 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 00:28:30.880176 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 00:28:30.880184 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 00:28:30.880192 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 00:28:30.880200 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 00:28:30.880208 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 00:28:30.880216 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 00:28:30.880224 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 00:28:30.880232 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 00:28:30.880242 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 00:28:30.880250 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 00:28:30.880258 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 00:28:30.880282 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 00:28:30.880290 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 00:28:30.880298 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 00:28:30.880306 kernel: iommu: Default domain type: Translated
Sep 5 00:28:30.880314 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 00:28:30.880322 kernel: PCI: Using ACPI for IRQ routing
Sep 5 00:28:30.880333 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 00:28:30.880348 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 5 00:28:30.880356 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 5 00:28:30.880481 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 00:28:30.880602 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 00:28:30.880721 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 00:28:30.880732 kernel: vgaarb: loaded
Sep 5 00:28:30.880740 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 00:28:30.880748 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 00:28:30.880759 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 00:28:30.880767 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:28:30.880775 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:28:30.880783 kernel: pnp: PnP ACPI init
Sep 5 00:28:30.880927 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 5 00:28:30.880939 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 00:28:30.880948 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 00:28:30.880955 kernel: NET: Registered PF_INET protocol family
Sep 5 00:28:30.880967 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:28:30.880975 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:28:30.880983 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:28:30.880991 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:28:30.880999 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:28:30.881007 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:28:30.881015 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:28:30.881023 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:28:30.881033 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:28:30.881041 kernel: NET: Registered PF_XDP protocol family
Sep 5 00:28:30.881154 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 00:28:30.881284 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 00:28:30.881409 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 00:28:30.881525 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 5 00:28:30.881639 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 5 00:28:30.881774 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 5 00:28:30.881786 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:28:30.881799 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 00:28:30.881808 kernel: Initialise system trusted keyrings
Sep 5 00:28:30.881815 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:28:30.881823 kernel: Key type asymmetric registered
Sep 5 00:28:30.881831 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:28:30.881839 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 00:28:30.881847 kernel: io scheduler mq-deadline registered
Sep 5 00:28:30.881855 kernel: io scheduler kyber registered
Sep 5 00:28:30.881863 kernel: io scheduler bfq registered
Sep 5 00:28:30.881873 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 00:28:30.881882 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 00:28:30.881890 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 00:28:30.881898 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 00:28:30.881906 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:28:30.881914 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 00:28:30.881923 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 00:28:30.881931 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 00:28:30.881938 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 00:28:30.881949 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 5 00:28:30.882092 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 00:28:30.882210 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 00:28:30.882356 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:28:30 UTC (1757032110)
Sep 5 00:28:30.882473 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 5 00:28:30.882484 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 5 00:28:30.882492 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:28:30.882500 kernel: Segment Routing with IPv6
Sep 5 00:28:30.882512 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:28:30.882520 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:28:30.882528 kernel: Key type dns_resolver registered
Sep 5 00:28:30.882536 kernel: IPI shorthand broadcast: enabled
Sep 5 00:28:30.882544 kernel: sched_clock: Marking stable (3253002659, 112972276)->(3386743573, -20768638)
Sep 5 00:28:30.882552 kernel: registered taskstats version 1
Sep 5 00:28:30.882560 kernel: Loading compiled-in X.509 certificates
Sep 5 00:28:30.882568 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 55c9ce6358d6eed45ca94030a2308729ee6a249f'
Sep 5 00:28:30.882576 kernel: Demotion targets for Node 0: null
Sep 5 00:28:30.882586 kernel: Key type .fscrypt registered
Sep 5 00:28:30.882593 kernel: Key type fscrypt-provisioning registered
Sep 5 00:28:30.882601 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:28:30.882609 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:28:30.882617 kernel: ima: No architecture policies found
Sep 5 00:28:30.882625 kernel: clk: Disabling unused clocks
Sep 5 00:28:30.882633 kernel: Warning: unable to open an initial console.
Sep 5 00:28:30.882641 kernel: Freeing unused kernel image (initmem) memory: 54044K
Sep 5 00:28:30.882651 kernel: Write protecting the kernel read-only data: 24576k
Sep 5 00:28:30.882659 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 5 00:28:30.882667 kernel: Run /init as init process
Sep 5 00:28:30.882676 kernel: with arguments:
Sep 5 00:28:30.882686 kernel: /init
Sep 5 00:28:30.882696 kernel: with environment:
Sep 5 00:28:30.882707 kernel: HOME=/
Sep 5 00:28:30.882717 kernel: TERM=linux
Sep 5 00:28:30.882728 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:28:30.882744 systemd[1]: Successfully made /usr/ read-only.
Sep 5 00:28:30.882774 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 00:28:30.882789 systemd[1]: Detected virtualization kvm.
Sep 5 00:28:30.882800 systemd[1]: Detected architecture x86-64.
Sep 5 00:28:30.882812 systemd[1]: Running in initrd.
Sep 5 00:28:30.882824 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:28:30.882839 systemd[1]: Hostname set to .
Sep 5 00:28:30.882851 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:28:30.882863 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:28:30.882874 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:28:30.882886 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:28:30.882898 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:28:30.882910 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:28:30.882922 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:28:30.882939 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:28:30.882955 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:28:30.882967 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:28:30.882979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:28:30.882990 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:28:30.883002 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:28:30.883018 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:28:30.883029 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:28:30.883041 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:28:30.883054 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:28:30.883066 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:28:30.883076 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:28:30.883085 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 5 00:28:30.883094 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:28:30.883102 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:28:30.883114 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:28:30.883122 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:28:30.883131 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 00:28:30.883140 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:28:30.883150 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 00:28:30.883161 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 5 00:28:30.883170 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 00:28:30.883179 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:28:30.883187 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:28:30.883196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:28:30.883205 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 00:28:30.883216 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:28:30.883225 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 00:28:30.883234 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:28:30.883315 systemd-journald[220]: Collecting audit messages is disabled. Sep 5 00:28:30.883360 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:28:30.883372 systemd-journald[220]: Journal started Sep 5 00:28:30.883395 systemd-journald[220]: Runtime Journal (/run/log/journal/f624d575a3274790b4db3c82332db6b4) is 6M, max 48.6M, 42.5M free. 
Sep 5 00:28:30.874980 systemd-modules-load[221]: Inserted module 'overlay' Sep 5 00:28:30.917645 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 00:28:30.917689 kernel: Bridge firewalling registered Sep 5 00:28:30.907842 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 5 00:28:30.920857 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:28:30.921564 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:28:30.924337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:28:30.931759 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:28:30.935926 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:28:30.943510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:28:30.947423 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:28:30.952943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:28:30.958355 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:28:30.961050 systemd-tmpfiles[246]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 5 00:28:30.967602 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:28:30.970353 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:28:30.972605 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:28:30.975840 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 5 00:28:30.999946 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503 Sep 5 00:28:31.018361 systemd-resolved[261]: Positive Trust Anchors: Sep 5 00:28:31.018383 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:28:31.018411 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:28:31.021201 systemd-resolved[261]: Defaulting to hostname 'linux'. Sep 5 00:28:31.022597 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:28:31.029985 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:28:31.134329 kernel: SCSI subsystem initialized Sep 5 00:28:31.144292 kernel: Loading iSCSI transport class v2.0-870. Sep 5 00:28:31.155313 kernel: iscsi: registered transport (tcp) Sep 5 00:28:31.178310 kernel: iscsi: registered transport (qla4xxx) Sep 5 00:28:31.178355 kernel: QLogic iSCSI HBA Driver Sep 5 00:28:31.202235 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 5 00:28:31.236627 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:28:31.240472 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:28:31.299797 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 00:28:31.302119 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 00:28:31.403331 kernel: raid6: avx2x4 gen() 29176 MB/s Sep 5 00:28:31.420338 kernel: raid6: avx2x2 gen() 30629 MB/s Sep 5 00:28:31.437376 kernel: raid6: avx2x1 gen() 25479 MB/s Sep 5 00:28:31.437448 kernel: raid6: using algorithm avx2x2 gen() 30629 MB/s Sep 5 00:28:31.455364 kernel: raid6: .... xor() 19561 MB/s, rmw enabled Sep 5 00:28:31.455439 kernel: raid6: using avx2x2 recovery algorithm Sep 5 00:28:31.510339 kernel: xor: automatically using best checksumming function avx Sep 5 00:28:31.673326 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 00:28:31.683142 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:28:31.685014 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:28:31.715671 systemd-udevd[473]: Using default interface naming scheme 'v255'. Sep 5 00:28:31.721305 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:28:31.755651 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 00:28:31.780901 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Sep 5 00:28:31.812048 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:28:31.814324 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:28:31.890078 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:28:31.894632 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 5 00:28:31.984303 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 5 00:28:31.988296 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 5 00:28:31.999305 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 00:28:32.001491 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 5 00:28:32.003690 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 00:28:32.003721 kernel: GPT:9289727 != 19775487 Sep 5 00:28:32.003737 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 00:28:32.005395 kernel: GPT:9289727 != 19775487 Sep 5 00:28:32.005420 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 00:28:32.005433 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:28:32.011292 kernel: AES CTR mode by8 optimization enabled Sep 5 00:28:32.041238 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:28:32.041434 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:28:32.054297 kernel: libata version 3.00 loaded. Sep 5 00:28:32.054408 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:28:32.058568 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:28:32.060058 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 5 00:28:32.072297 kernel: ahci 0000:00:1f.2: version 3.0 Sep 5 00:28:32.074963 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 5 00:28:32.074994 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 5 00:28:32.075175 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 5 00:28:32.076233 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 5 00:28:32.078341 kernel: scsi host0: ahci Sep 5 00:28:32.079524 kernel: scsi host1: ahci Sep 5 00:28:32.079711 kernel: scsi host2: ahci Sep 5 00:28:32.080287 kernel: scsi host3: ahci Sep 5 00:28:32.081059 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 5 00:28:32.086567 kernel: scsi host4: ahci Sep 5 00:28:32.086811 kernel: scsi host5: ahci Sep 5 00:28:32.087021 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 5 00:28:32.087039 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 5 00:28:32.087065 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 5 00:28:32.087078 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 5 00:28:32.087092 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 5 00:28:32.089616 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 5 00:28:32.112635 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 5 00:28:32.151463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:28:32.159845 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 5 00:28:32.159958 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Sep 5 00:28:32.171555 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:28:32.174299 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 00:28:32.404319 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 00:28:32.404411 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 5 00:28:32.405323 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 5 00:28:32.406317 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 00:28:32.407306 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 5 00:28:32.408381 kernel: ata3.00: LPM support broken, forcing max_power Sep 5 00:28:32.408396 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 5 00:28:32.409705 kernel: ata3.00: applying bridge limits Sep 5 00:28:32.410290 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 00:28:32.411305 kernel: ata3.00: LPM support broken, forcing max_power Sep 5 00:28:32.411319 kernel: ata3.00: configured for UDMA/100 Sep 5 00:28:32.412302 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 5 00:28:32.445121 disk-uuid[636]: Primary Header is updated. Sep 5 00:28:32.445121 disk-uuid[636]: Secondary Entries is updated. Sep 5 00:28:32.445121 disk-uuid[636]: Secondary Header is updated. Sep 5 00:28:32.449645 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:28:32.461909 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 5 00:28:32.462374 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 00:28:32.476308 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 5 00:28:32.831594 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 00:28:32.832363 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:28:32.835191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 5 00:28:32.835340 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:28:32.839517 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 00:28:32.873183 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:28:33.457920 disk-uuid[637]: The operation has completed successfully. Sep 5 00:28:33.459501 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:28:33.491108 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 00:28:33.491241 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 00:28:33.527107 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 00:28:33.556670 sh[665]: Success Sep 5 00:28:33.579364 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 00:28:33.579446 kernel: device-mapper: uevent: version 1.0.3 Sep 5 00:28:33.580672 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 5 00:28:33.591329 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 5 00:28:33.626632 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 00:28:33.631776 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 00:28:33.654790 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 5 00:28:33.663339 kernel: BTRFS: device fsid bbfaff22-5589-4cab-94aa-ce3e6be0b7e8 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (677) Sep 5 00:28:33.663383 kernel: BTRFS info (device dm-0): first mount of filesystem bbfaff22-5589-4cab-94aa-ce3e6be0b7e8 Sep 5 00:28:33.663398 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:28:33.670312 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 00:28:33.670353 kernel: BTRFS info (device dm-0): enabling free space tree Sep 5 00:28:33.672038 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 00:28:33.674525 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 5 00:28:33.676832 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 00:28:33.679746 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 00:28:33.682572 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 00:28:33.725300 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708) Sep 5 00:28:33.727382 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 00:28:33.727407 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:28:33.730836 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:28:33.730861 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:28:33.737337 kernel: BTRFS info (device vda6): last unmount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 00:28:33.738513 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 00:28:33.740746 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 5 00:28:33.903996 ignition[751]: Ignition 2.21.0 Sep 5 00:28:33.904015 ignition[751]: Stage: fetch-offline Sep 5 00:28:33.904049 ignition[751]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:28:33.904058 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:28:33.904155 ignition[751]: parsed url from cmdline: "" Sep 5 00:28:33.904162 ignition[751]: no config URL provided Sep 5 00:28:33.904167 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 00:28:33.904176 ignition[751]: no config at "/usr/lib/ignition/user.ign" Sep 5 00:28:33.904200 ignition[751]: op(1): [started] loading QEMU firmware config module Sep 5 00:28:33.904209 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 5 00:28:33.949777 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:28:33.953611 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:28:33.955331 ignition[751]: op(1): [finished] loading QEMU firmware config module Sep 5 00:28:33.993314 ignition[751]: parsing config with SHA512: 09e0a6e98484e07941ceb19a92a29e8e712cbeb8d60910c948648c78b16c7135e36de6861f947c213a29ddfc765b9e8ac8047c8015425a792febf3d9b54312e1 Sep 5 00:28:33.997105 unknown[751]: fetched base config from "system" Sep 5 00:28:33.997118 unknown[751]: fetched user config from "qemu" Sep 5 00:28:33.997599 ignition[751]: fetch-offline: fetch-offline passed Sep 5 00:28:33.997682 ignition[751]: Ignition finished successfully Sep 5 00:28:34.000911 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:28:34.037243 systemd-networkd[854]: lo: Link UP Sep 5 00:28:34.037280 systemd-networkd[854]: lo: Gained carrier Sep 5 00:28:34.038927 systemd-networkd[854]: Enumeration completed Sep 5 00:28:34.039095 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 5 00:28:34.039379 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:28:34.039383 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:28:34.041594 systemd-networkd[854]: eth0: Link UP Sep 5 00:28:34.060301 systemd[1]: Reached target network.target - Network. Sep 5 00:28:34.061149 systemd-networkd[854]: eth0: Gained carrier Sep 5 00:28:34.061158 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 5 00:28:34.061171 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:28:34.062218 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 00:28:34.140350 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:28:34.152555 ignition[858]: Ignition 2.21.0 Sep 5 00:28:34.152575 ignition[858]: Stage: kargs Sep 5 00:28:34.152758 ignition[858]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:28:34.152774 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:28:34.155008 ignition[858]: kargs: kargs passed Sep 5 00:28:34.155114 ignition[858]: Ignition finished successfully Sep 5 00:28:34.159667 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 00:28:34.162193 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 5 00:28:34.213416 ignition[867]: Ignition 2.21.0 Sep 5 00:28:34.213450 ignition[867]: Stage: disks Sep 5 00:28:34.213665 ignition[867]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:28:34.213677 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:28:34.215325 ignition[867]: disks: disks passed Sep 5 00:28:34.215405 ignition[867]: Ignition finished successfully Sep 5 00:28:34.219645 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 00:28:34.286978 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 00:28:34.288867 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 00:28:34.289073 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:28:34.289592 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:28:34.289899 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:28:34.291356 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 00:28:34.321672 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 5 00:28:35.083092 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 00:28:35.085833 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 00:28:35.338332 kernel: EXT4-fs (vda9): mounted filesystem a99dab41-6cdd-4037-a941-eeee48403b9e r/w with ordered data mode. Quota mode: none. Sep 5 00:28:35.338973 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 00:28:35.341249 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 00:28:35.344819 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:28:35.347477 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 00:28:35.349447 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 5 00:28:35.349498 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 00:28:35.349520 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:28:35.368837 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 00:28:35.388419 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 00:28:35.393609 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885) Sep 5 00:28:35.393631 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 00:28:35.393642 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:28:35.396612 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:28:35.396673 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:28:35.398172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:28:35.516133 systemd-networkd[854]: eth0: Gained IPv6LL Sep 5 00:28:35.520835 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 00:28:35.525473 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Sep 5 00:28:35.530172 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 00:28:35.535299 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 00:28:35.658488 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 00:28:35.676391 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 00:28:35.678797 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 00:28:35.699610 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 5 00:28:35.743007 kernel: BTRFS info (device vda6): last unmount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 00:28:35.751553 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 5 00:28:35.783726 ignition[1000]: INFO : Ignition 2.21.0 Sep 5 00:28:35.783726 ignition[1000]: INFO : Stage: mount Sep 5 00:28:35.796178 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:28:35.796178 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:28:35.796178 ignition[1000]: INFO : mount: mount passed Sep 5 00:28:35.796178 ignition[1000]: INFO : Ignition finished successfully Sep 5 00:28:35.788406 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 00:28:35.796595 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 00:28:36.340807 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:28:36.376899 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011) Sep 5 00:28:36.376990 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 00:28:36.377002 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:28:36.381295 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:28:36.381358 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:28:36.383539 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 5 00:28:36.453051 ignition[1028]: INFO : Ignition 2.21.0 Sep 5 00:28:36.453051 ignition[1028]: INFO : Stage: files Sep 5 00:28:36.455058 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:28:36.455058 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:28:36.455058 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping Sep 5 00:28:36.455058 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 00:28:36.455058 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 00:28:36.462059 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 00:28:36.462059 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 00:28:36.462059 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 00:28:36.462059 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 5 00:28:36.462059 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 5 00:28:36.458570 unknown[1028]: wrote ssh authorized keys file for user: core Sep 5 00:28:36.516110 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 00:28:37.078014 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 5 00:28:37.080024 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 5 00:28:37.080024 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 
00:28:37.080024 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:28:37.080024 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:28:37.080024 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:28:37.080024 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:28:37.080024 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:28:37.092760 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:28:37.092760 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:28:37.092760 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:28:37.092760 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:28:37.092760 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:28:37.092760 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:28:37.092760 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 5 00:28:37.761786 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 5 00:28:38.636961 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:28:38.636961 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 5 00:28:38.640655 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:28:38.737854 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:28:38.737854 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 5 00:28:38.737854 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 5 00:28:38.750303 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:28:38.750303 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:28:38.750303 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 5 00:28:38.750303 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 5 00:28:38.795515 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:28:38.807755 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:28:38.810067 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled 
for "coreos-metadata.service" Sep 5 00:28:38.810067 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 5 00:28:38.810067 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 00:28:38.810067 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:28:38.810067 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:28:38.810067 ignition[1028]: INFO : files: files passed Sep 5 00:28:38.810067 ignition[1028]: INFO : Ignition finished successfully Sep 5 00:28:38.822077 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 00:28:38.825573 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 00:28:38.828519 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:28:38.853369 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 00:28:38.853684 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 00:28:38.858178 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory Sep 5 00:28:38.863163 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:28:38.863163 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:28:38.867178 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:28:38.868794 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:28:38.869331 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Sep 5 00:28:38.873737 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 00:28:38.956139 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 00:28:38.956316 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 00:28:38.959589 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 00:28:38.961329 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 00:28:38.961596 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 00:28:38.964916 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 00:28:39.012172 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:28:39.016215 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 00:28:39.049837 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:28:39.051577 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:28:39.054036 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 00:28:39.055334 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 00:28:39.055534 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:28:39.059195 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 00:28:39.060455 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 00:28:39.060791 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 00:28:39.061129 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:28:39.061712 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 00:28:39.062090 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 00:28:39.062690 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 00:28:39.063067 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:28:39.063647 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 00:28:39.077470 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 00:28:39.079552 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 00:28:39.080640 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 00:28:39.080796 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:28:39.084562 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:28:39.084935 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:28:39.085305 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 00:28:39.085440 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:28:39.085980 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 00:28:39.086155 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:28:39.094483 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 00:28:39.094606 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:28:39.095502 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 00:28:39.097489 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 00:28:39.101387 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:28:39.101800 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 00:28:39.102056 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 00:28:39.102582 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 00:28:39.102684 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:28:39.109114 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 00:28:39.109201 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:28:39.110041 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 00:28:39.110187 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:28:39.113029 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 00:28:39.113191 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 00:28:39.117022 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 00:28:39.121427 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 00:28:39.123433 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 00:28:39.123595 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:28:39.125833 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 00:28:39.125975 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:28:39.131855 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 00:28:39.135481 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 00:28:39.158947 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 00:28:39.339707 ignition[1085]: INFO : Ignition 2.21.0
Sep 5 00:28:39.339707 ignition[1085]: INFO : Stage: umount
Sep 5 00:28:39.341971 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:28:39.341971 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:28:39.344880 ignition[1085]: INFO : umount: umount passed
Sep 5 00:28:39.345919 ignition[1085]: INFO : Ignition finished successfully
Sep 5 00:28:39.349162 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 00:28:39.349348 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 00:28:39.351864 systemd[1]: Stopped target network.target - Network.
Sep 5 00:28:39.353801 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 00:28:39.353879 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 00:28:39.356127 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 00:28:39.356196 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 00:28:39.358230 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 00:28:39.358333 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 00:28:39.360144 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 00:28:39.360203 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 00:28:39.362349 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 00:28:39.364048 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 00:28:39.369381 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 00:28:39.369583 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 00:28:39.374213 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 5 00:28:39.374658 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 00:28:39.374728 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:28:39.379954 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 5 00:28:39.380423 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 00:28:39.380599 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 00:28:39.384637 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 5 00:28:39.385377 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 5 00:28:39.386025 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 00:28:39.386090 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:28:39.390203 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 00:28:39.391288 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 00:28:39.391349 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:28:39.394449 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 00:28:39.394510 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:28:39.398178 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 00:28:39.398232 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:28:39.399242 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:28:39.400611 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 5 00:28:39.420820 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 00:28:39.421028 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:28:39.422918 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 00:28:39.423029 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 00:28:39.426147 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 00:28:39.426244 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:28:39.426864 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 00:28:39.426920 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:28:39.429635 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 00:28:39.429720 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:28:39.431009 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 00:28:39.431065 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:28:39.435147 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:28:39.435234 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:28:39.439565 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 00:28:39.440395 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 5 00:28:39.440477 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:28:39.476927 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 00:28:39.477023 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:28:39.481100 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 5 00:28:39.481154 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:28:39.484997 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 00:28:39.485054 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:28:39.486361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:28:39.486412 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:28:39.506300 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 00:28:39.506502 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 00:28:39.532027 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 00:28:39.532203 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 00:28:39.534651 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 00:28:39.535458 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 00:28:39.535535 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 00:28:39.541625 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 00:28:39.563432 systemd[1]: Switching root.
Sep 5 00:28:39.607457 systemd-journald[220]: Journal stopped
Sep 5 00:28:41.474603 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 5 00:28:41.474686 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 00:28:41.474707 kernel: SELinux: policy capability open_perms=1
Sep 5 00:28:41.474719 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 00:28:41.474736 kernel: SELinux: policy capability always_check_network=0
Sep 5 00:28:41.474747 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 00:28:41.474758 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 00:28:41.474770 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 00:28:41.474781 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 00:28:41.474793 kernel: SELinux: policy capability userspace_initial_context=0
Sep 5 00:28:41.474804 kernel: audit: type=1403 audit(1757032120.377:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 00:28:41.474825 systemd[1]: Successfully loaded SELinux policy in 64.473ms.
Sep 5 00:28:41.474857 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.669ms.
Sep 5 00:28:41.474876 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 00:28:41.474889 systemd[1]: Detected virtualization kvm.
Sep 5 00:28:41.474901 systemd[1]: Detected architecture x86-64.
Sep 5 00:28:41.474913 systemd[1]: Detected first boot.
Sep 5 00:28:41.474925 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:28:41.474938 zram_generator::config[1130]: No configuration found.
Sep 5 00:28:41.474951 kernel: Guest personality initialized and is inactive
Sep 5 00:28:41.474964 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 5 00:28:41.474980 kernel: Initialized host personality
Sep 5 00:28:41.474992 kernel: NET: Registered PF_VSOCK protocol family
Sep 5 00:28:41.475004 systemd[1]: Populated /etc with preset unit settings.
Sep 5 00:28:41.475020 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 5 00:28:41.475043 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 00:28:41.475058 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 00:28:41.475080 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:28:41.475096 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 00:28:41.475119 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 00:28:41.475135 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 00:28:41.475150 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 00:28:41.475166 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 00:28:41.475182 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 00:28:41.475198 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 00:28:41.475213 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 00:28:41.475229 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:28:41.475245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:28:41.475287 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 00:28:41.475304 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 00:28:41.475322 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 00:28:41.475341 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:28:41.475356 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 5 00:28:41.475372 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:28:41.475388 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:28:41.475404 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 00:28:41.475428 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 00:28:41.475445 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:28:41.475462 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 00:28:41.475478 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:28:41.475494 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:28:41.475510 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:28:41.475525 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:28:41.475542 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 00:28:41.475555 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 00:28:41.475574 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 5 00:28:41.475587 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:28:41.475599 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:28:41.475612 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:28:41.475624 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 00:28:41.475643 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 00:28:41.475656 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 00:28:41.475670 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 00:28:41.475682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:28:41.475697 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 00:28:41.475709 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 00:28:41.475721 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 00:28:41.475733 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 00:28:41.475746 systemd[1]: Reached target machines.target - Containers.
Sep 5 00:28:41.475758 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 00:28:41.475770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:28:41.475783 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:28:41.475801 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 00:28:41.475814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:28:41.475826 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:28:41.475838 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:28:41.475850 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 00:28:41.475863 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:28:41.475876 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 00:28:41.475888 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 00:28:41.475900 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 00:28:41.475918 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 00:28:41.475934 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 00:28:41.475955 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:28:41.475971 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:28:41.475985 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:28:41.475996 kernel: loop: module loaded
Sep 5 00:28:41.476009 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 00:28:41.476023 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 00:28:41.476050 kernel: ACPI: bus type drm_connector registered
Sep 5 00:28:41.476062 kernel: fuse: init (API version 7.41)
Sep 5 00:28:41.476074 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 5 00:28:41.476087 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:28:41.476103 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 00:28:41.476127 systemd[1]: Stopped verity-setup.service.
Sep 5 00:28:41.476144 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:28:41.476160 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 00:28:41.476175 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 00:28:41.476190 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 00:28:41.476206 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 00:28:41.476279 systemd-journald[1215]: Collecting audit messages is disabled.
Sep 5 00:28:41.476315 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 00:28:41.476331 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 00:28:41.476348 systemd-journald[1215]: Journal started
Sep 5 00:28:41.476395 systemd-journald[1215]: Runtime Journal (/run/log/journal/f624d575a3274790b4db3c82332db6b4) is 6M, max 48.6M, 42.5M free.
Sep 5 00:28:41.102535 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 00:28:41.122789 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 5 00:28:41.123373 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 00:28:41.480328 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 00:28:41.482297 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:28:41.485521 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:28:41.487294 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 00:28:41.487595 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 00:28:41.489532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:28:41.489802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:28:41.491666 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:28:41.491947 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:28:41.493720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:28:41.493998 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:28:41.495697 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 00:28:41.495968 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 00:28:41.497581 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:28:41.497854 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:28:41.499529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:28:41.501161 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:28:41.503079 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 00:28:41.504774 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 5 00:28:41.521064 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 00:28:41.524120 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 00:28:41.526580 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 00:28:41.527963 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 00:28:41.527994 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:28:41.530180 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 5 00:28:41.547477 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 00:28:41.548919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:28:41.551011 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 00:28:41.555155 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 00:28:41.556671 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:28:41.559523 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 00:28:41.560828 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:28:41.562632 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:28:41.565373 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 00:28:41.654671 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:28:41.657895 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 00:28:41.660198 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 00:28:41.663697 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:28:41.676606 systemd-journald[1215]: Time spent on flushing to /var/log/journal/f624d575a3274790b4db3c82332db6b4 is 18.689ms for 984 entries.
Sep 5 00:28:41.676606 systemd-journald[1215]: System Journal (/var/log/journal/f624d575a3274790b4db3c82332db6b4) is 8M, max 195.6M, 187.6M free.
Sep 5 00:28:41.724702 systemd-journald[1215]: Received client request to flush runtime journal.
Sep 5 00:28:41.724771 kernel: loop0: detected capacity change from 0 to 128016
Sep 5 00:28:41.724801 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 00:28:41.683190 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 00:28:41.684803 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 00:28:41.688708 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 5 00:28:41.693712 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:28:41.706987 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Sep 5 00:28:41.707001 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Sep 5 00:28:41.713895 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:28:41.717830 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 00:28:41.732463 kernel: loop1: detected capacity change from 0 to 229808
Sep 5 00:28:41.732252 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 00:28:41.746085 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 5 00:28:41.771303 kernel: loop2: detected capacity change from 0 to 111000
Sep 5 00:28:41.777504 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 00:28:41.783054 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:28:41.811553 kernel: loop3: detected capacity change from 0 to 128016
Sep 5 00:28:41.813426 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Sep 5 00:28:41.813454 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Sep 5 00:28:41.819175 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:28:41.829294 kernel: loop4: detected capacity change from 0 to 229808
Sep 5 00:28:41.841304 kernel: loop5: detected capacity change from 0 to 111000
Sep 5 00:28:41.850299 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 5 00:28:41.850884 (sd-merge)[1274]: Merged extensions into '/usr'.
Sep 5 00:28:41.855952 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 00:28:41.856133 systemd[1]: Reloading...
Sep 5 00:28:42.063593 zram_generator::config[1304]: No configuration found.
Sep 5 00:28:42.213322 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 00:28:42.344641 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 00:28:42.345637 systemd[1]: Reloading finished in 488 ms.
Sep 5 00:28:42.387402 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 00:28:42.389481 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 00:28:42.417546 systemd[1]: Starting ensure-sysext.service...
Sep 5 00:28:42.421220 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:28:42.448835 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 5 00:28:42.449077 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 5 00:28:42.449415 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 00:28:42.449685 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 00:28:42.450742 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 00:28:42.451023 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Sep 5 00:28:42.451098 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Sep 5 00:28:42.451889 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)...
Sep 5 00:28:42.451910 systemd[1]: Reloading...
Sep 5 00:28:42.455850 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:28:42.455858 systemd-tmpfiles[1340]: Skipping /boot
Sep 5 00:28:42.469730 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:28:42.469746 systemd-tmpfiles[1340]: Skipping /boot
Sep 5 00:28:42.527301 zram_generator::config[1367]: No configuration found.
Sep 5 00:28:42.784858 systemd[1]: Reloading finished in 332 ms.
Sep 5 00:28:42.805730 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 00:28:42.863381 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:28:42.873534 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 00:28:42.876395 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 00:28:42.878925 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 00:28:42.893598 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:28:42.897457 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:28:42.900458 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 00:28:42.905673 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:28:42.905896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:28:42.917469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:28:42.921463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:28:42.927455 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:28:42.928585 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:28:42.928701 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:28:42.931057 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:28:42.932106 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:28:42.933920 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:28:42.935803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:28:42.936030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:28:42.936516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:28:42.936716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:28:42.947384 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:28:42.947639 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:28:42.954974 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:28:42.955350 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 5 00:28:43.004613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:28:43.010762 systemd-udevd[1410]: Using default interface naming scheme 'v255'. Sep 5 00:28:43.057714 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:28:43.158798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:28:43.159004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:28:43.159113 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:28:43.161082 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:28:43.164136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:28:43.168487 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:28:43.170766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:28:43.171018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:28:43.172768 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:28:43.172999 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:28:43.174884 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:28:43.175166 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:28:43.184377 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Sep 5 00:28:43.191246 augenrules[1447]: No rules Sep 5 00:28:43.191886 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:28:43.192261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:28:43.193818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:28:43.197682 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:28:43.200023 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:28:43.203546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:28:43.204786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:28:43.205365 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:28:43.205502 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:28:43.206224 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:28:43.207788 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:28:43.209578 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:28:43.210307 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 00:28:43.216494 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:28:43.268703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:28:43.268927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 5 00:28:43.270560 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:28:43.270789 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:28:43.272330 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:28:43.272532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:28:43.274237 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:28:43.274457 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:28:43.282042 systemd[1]: Finished ensure-sysext.service. Sep 5 00:28:43.376863 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:28:43.378014 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:28:43.378123 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:28:43.380558 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:28:43.382003 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:28:43.392386 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 5 00:28:43.446736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:28:43.516479 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:28:43.531909 systemd-resolved[1409]: Positive Trust Anchors: Sep 5 00:28:43.532313 systemd-resolved[1409]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:28:43.532406 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:28:43.536297 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 00:28:43.539566 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:28:43.539735 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:28:43.543764 systemd-resolved[1409]: Defaulting to hostname 'linux'. Sep 5 00:28:43.544310 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 5 00:28:43.546136 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:28:43.550669 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:28:43.552004 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:28:43.553328 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:28:43.554627 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:28:43.556325 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:28:43.557669 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 5 00:28:43.559042 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
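As an aside on the entry above: systemd-resolved's negative trust anchor list includes one reverse zone per /16 network inside the RFC 1918 range 172.16.0.0/12, which is why sixteen consecutive `N.172.in-addr.arpa` names appear. A small sketch reconstructing exactly those names from the prefix bounds:

```python
# Reconstruct the sixteen 172.16.0.0/12 reverse zones listed in the
# systemd-resolved negative trust anchor output above.
zones = [f"{i}.172.in-addr.arpa" for i in range(16, 32)]
print(zones[0], zones[-1])  # 16.172.in-addr.arpa 31.172.in-addr.arpa
```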
Sep 5 00:28:43.560311 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:28:43.560682 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:28:43.561962 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:28:43.563225 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:28:43.563271 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:28:43.564232 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:28:43.565664 systemd-networkd[1496]: lo: Link UP Sep 5 00:28:43.565676 systemd-networkd[1496]: lo: Gained carrier Sep 5 00:28:43.566013 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:28:43.568352 systemd-networkd[1496]: Enumeration completed Sep 5 00:28:43.568910 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:28:43.568925 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:28:43.568930 systemd-networkd[1496]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:28:43.570214 systemd-networkd[1496]: eth0: Link UP Sep 5 00:28:43.570392 systemd-networkd[1496]: eth0: Gained carrier Sep 5 00:28:43.570416 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:28:43.573803 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 5 00:28:43.575285 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 5 00:28:43.576725 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
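The log above shows systemd-networkd matching `eth0` against `/usr/lib/systemd/network/zz-default.network` (and warning that the match is based on a potentially unpredictable interface name). On Flatcar this is a catch-all DHCP unit; a minimal equivalent, shown here only as an illustrative sketch rather than the shipped file's exact contents, might look like:

```ini
# Illustrative approximation of a catch-all zz-default.network unit:
# match any physical (non-virtual) interface and configure it via DHCP.
[Match]
Kind=!*
Type=!vlan

[Network]
DHCP=yes
```

Because the unit sorts last alphabetically (`zz-`), any more specific `.network` file installed by the operator takes precedence.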
Sep 5 00:28:43.584344 systemd-networkd[1496]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:28:43.585220 systemd-timesyncd[1498]: Network configuration changed, trying to establish connection. Sep 5 00:28:44.126526 systemd-resolved[1409]: Clock change detected. Flushing caches. Sep 5 00:28:44.126753 systemd-timesyncd[1498]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:28:44.126798 systemd-timesyncd[1498]: Initial clock synchronization to Fri 2025-09-05 00:28:44.126479 UTC. Sep 5 00:28:44.127348 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 5 00:28:44.127626 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 5 00:28:44.128518 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:28:44.130187 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 5 00:28:44.132324 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:28:44.133856 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:28:44.141140 systemd[1]: Reached target network.target - Network. Sep 5 00:28:44.142266 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:28:44.145084 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:28:44.146118 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:28:44.146153 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:28:44.148260 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:28:44.150547 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:28:44.153366 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:28:44.156259 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
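Note the timestamp discontinuity in the block above: journal time jumps from 00:28:43.585220 (the last pre-sync systemd-timesyncd entry) to 00:28:44.126526 ("Clock change detected. Flushing caches.") once the NTP server at 10.0.0.1 is contacted and the clock is stepped. A quick sketch, using those two adjacent timestamps from the log, of the apparent step size:

```python
from datetime import datetime

# Adjacent journal timestamps from either side of timesyncd's clock step.
FMT = "%H:%M:%S.%f"
before = datetime.strptime("00:28:43.585220", FMT)  # last pre-step entry
after = datetime.strptime("00:28:44.126526", FMT)   # first post-step entry

step = (after - before).total_seconds()
print(f"apparent clock step: ~{step:.3f}s")  # ~0.541s
```

The "apparent" step conflates the real clock adjustment with the small wall-clock time elapsed between the two log writes, so it is an upper bound on the actual correction.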
Sep 5 00:28:44.164273 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:28:44.165481 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:28:44.167261 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 5 00:28:44.174895 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:28:44.179890 jq[1533]: false Sep 5 00:28:44.180409 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:28:44.183624 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing passwd entry cache Sep 5 00:28:44.183635 oslogin_cache_refresh[1535]: Refreshing passwd entry cache Sep 5 00:28:44.184301 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:28:44.186646 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:28:44.190721 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting users, quitting Sep 5 00:28:44.190721 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 00:28:44.190690 oslogin_cache_refresh[1535]: Failure getting users, quitting Sep 5 00:28:44.190867 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing group entry cache Sep 5 00:28:44.190714 oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 00:28:44.190771 oslogin_cache_refresh[1535]: Refreshing group entry cache Sep 5 00:28:44.195944 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 5 00:28:44.198096 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting groups, quitting Sep 5 00:28:44.198096 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 00:28:44.198089 oslogin_cache_refresh[1535]: Failure getting groups, quitting Sep 5 00:28:44.198102 oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 00:28:44.203379 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 5 00:28:44.207344 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:28:44.209583 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:28:44.210126 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:28:44.214082 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:28:44.221289 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:28:44.225582 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:28:44.227166 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:28:44.227469 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:28:44.227808 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 5 00:28:44.229236 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 5 00:28:44.231628 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:28:44.231890 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 5 00:28:44.242847 extend-filesystems[1534]: Found /dev/vda6 Sep 5 00:28:44.331208 extend-filesystems[1534]: Found /dev/vda9 Sep 5 00:28:44.331208 extend-filesystems[1534]: Checking size of /dev/vda9 Sep 5 00:28:44.356094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:28:44.362145 jq[1547]: true Sep 5 00:28:44.431508 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:28:44.431815 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:28:44.461730 update_engine[1546]: I20250905 00:28:44.461614 1546 main.cc:92] Flatcar Update Engine starting Sep 5 00:28:44.462402 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:28:44.467486 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 5 00:28:44.477061 kernel: kvm_amd: TSC scaling supported Sep 5 00:28:44.477097 kernel: kvm_amd: Nested Virtualization enabled Sep 5 00:28:44.477130 kernel: kvm_amd: Nested Paging enabled Sep 5 00:28:44.477143 kernel: kvm_amd: LBR virtualization supported Sep 5 00:28:44.477155 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 5 00:28:44.477167 kernel: kvm_amd: Virtual GIF supported Sep 5 00:28:44.477246 tar[1552]: linux-amd64/LICENSE Sep 5 00:28:44.477485 tar[1552]: linux-amd64/helm Sep 5 00:28:44.491597 jq[1571]: true Sep 5 00:28:44.494228 dbus-daemon[1531]: [system] SELinux support is enabled Sep 5 00:28:44.494789 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:28:44.497358 update_engine[1546]: I20250905 00:28:44.497259 1546 update_check_scheduler.cc:74] Next update check in 3m28s Sep 5 00:28:44.620525 systemd[1]: Started update-engine.service - Update Engine. 
Sep 5 00:28:44.621920 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:28:44.621941 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:28:44.623879 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:28:44.623906 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:28:44.630218 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:28:44.635683 systemd-logind[1540]: Watching system buttons on /dev/input/event2 (Power Button) Sep 5 00:28:44.636941 extend-filesystems[1534]: Resized partition /dev/vda9 Sep 5 00:28:44.635726 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 5 00:28:44.642504 systemd-logind[1540]: New seat seat0. Sep 5 00:28:44.643922 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:28:44.825788 extend-filesystems[1599]: resize2fs 1.47.2 (1-Jan-2025) Sep 5 00:28:44.842003 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:28:44.987843 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:28:45.036182 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:28:45.041225 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:28:45.095259 tar[1552]: linux-amd64/README.md Sep 5 00:28:45.104821 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:44580.service - OpenSSH per-connection server daemon (10.0.0.1:44580). Sep 5 00:28:45.116633 systemd[1]: issuegen.service: Deactivated successfully. 
Sep 5 00:28:45.120606 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:28:45.190074 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:28:45.190932 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:28:45.623405 kernel: EDAC MC: Ver: 3.0.0 Sep 5 00:28:45.266322 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:28:45.269262 systemd-networkd[1496]: eth0: Gained IPv6LL Sep 5 00:28:45.270897 locksmithd[1596]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:28:45.272991 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:28:45.273586 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:28:45.275336 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:28:45.444380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:28:45.446335 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:28:45.518206 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:28:45.518495 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:28:45.518882 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:28:45.582878 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:28:45.624047 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:28:45.627359 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 00:28:45.627583 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:28:45.710718 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:28:45.858457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 5 00:28:46.132078 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:28:46.192396 containerd[1572]: time="2025-09-05T00:28:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 5 00:28:47.604724 sshd[1612]: Connection closed by authenticating user core 10.0.0.1 port 44580 [preauth] Sep 5 00:28:46.508310 systemd[1]: sshd@0-10.0.0.38:22-10.0.0.1:44580.service: Deactivated successfully. Sep 5 00:28:47.606197 containerd[1572]: time="2025-09-05T00:28:47.606052874Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 5 00:28:47.620765 containerd[1572]: time="2025-09-05T00:28:47.620692127Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.458µs" Sep 5 00:28:47.620765 containerd[1572]: time="2025-09-05T00:28:47.620742452Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 5 00:28:47.620849 containerd[1572]: time="2025-09-05T00:28:47.620780233Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 5 00:28:47.621157 containerd[1572]: time="2025-09-05T00:28:47.621108739Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 5 00:28:47.621157 containerd[1572]: time="2025-09-05T00:28:47.621156669Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 5 00:28:47.621258 containerd[1572]: time="2025-09-05T00:28:47.621192225Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 00:28:47.621288 containerd[1572]: time="2025-09-05T00:28:47.621275421Z" level=info msg="skip loading plugin" error="no scratch file generator: skip 
plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 00:28:47.621315 containerd[1572]: time="2025-09-05T00:28:47.621289999Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 00:28:47.621647 containerd[1572]: time="2025-09-05T00:28:47.621592566Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 00:28:47.621647 containerd[1572]: time="2025-09-05T00:28:47.621612554Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 00:28:47.621647 containerd[1572]: time="2025-09-05T00:28:47.621625899Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 00:28:47.621647 containerd[1572]: time="2025-09-05T00:28:47.621643291Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 5 00:28:47.621764 containerd[1572]: time="2025-09-05T00:28:47.621744160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 5 00:28:47.622104 containerd[1572]: time="2025-09-05T00:28:47.622065513Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 00:28:47.622143 containerd[1572]: time="2025-09-05T00:28:47.622106961Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 00:28:47.622143 containerd[1572]: time="2025-09-05T00:28:47.622117080Z" level=info msg="loading plugin" 
id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 5 00:28:47.622194 containerd[1572]: time="2025-09-05T00:28:47.622163397Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 5 00:28:47.622503 containerd[1572]: time="2025-09-05T00:28:47.622468479Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 5 00:28:47.622601 containerd[1572]: time="2025-09-05T00:28:47.622551835Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:28:47.661422 extend-filesystems[1599]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:28:47.661422 extend-filesystems[1599]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:28:47.661422 extend-filesystems[1599]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:28:47.667233 extend-filesystems[1534]: Resized filesystem in /dev/vda9 Sep 5 00:28:47.668191 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:28:47.668594 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:28:47.864605 bash[1595]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:28:47.867173 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:28:47.869644 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
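The extend-filesystems entries above record an online ext4 grow of `/dev/vda9` from 553472 to 1864699 blocks at the 4k block size reported by the kernel. Converting those block counts (taken directly from the log) shows the root filesystem growing from roughly 2.1 GiB to roughly 7.1 GiB:

```python
# Convert the ext4 block counts from the resize2fs/extend-filesystems
# log entries above into sizes in GiB. Block size is 4k per the
# kernel's "resized filesystem to 1864699" / "(4k) blocks" messages.
BLOCK_SIZE = 4096

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * block_size / 2**30

old_blocks, new_blocks = 553_472, 1_864_699  # values from the log
print(f"before: {blocks_to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {blocks_to_gib(new_blocks):.2f} GiB")  # ~7.11 GiB
```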
Sep 5 00:28:47.870574 containerd[1572]: time="2025-09-05T00:28:47.870534451Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 5 00:28:47.870967 containerd[1572]: time="2025-09-05T00:28:47.870934030Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 5 00:28:47.871005 containerd[1572]: time="2025-09-05T00:28:47.870978203Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 5 00:28:47.871045 containerd[1572]: time="2025-09-05T00:28:47.871001968Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 5 00:28:47.871045 containerd[1572]: time="2025-09-05T00:28:47.871018038Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 5 00:28:47.871148 containerd[1572]: time="2025-09-05T00:28:47.871049627Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 5 00:28:47.871148 containerd[1572]: time="2025-09-05T00:28:47.871071408Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 5 00:28:47.871148 containerd[1572]: time="2025-09-05T00:28:47.871089021Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 5 00:28:47.871148 containerd[1572]: time="2025-09-05T00:28:47.871104350Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 5 00:28:47.871148 containerd[1572]: time="2025-09-05T00:28:47.871117454Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 5 00:28:47.871148 containerd[1572]: time="2025-09-05T00:28:47.871131020Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 5 00:28:47.871259 containerd[1572]: time="2025-09-05T00:28:47.871151288Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 5 00:28:47.871369 containerd[1572]: time="2025-09-05T00:28:47.871345572Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 5 00:28:47.871403 containerd[1572]: time="2025-09-05T00:28:47.871387381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 5 00:28:47.871424 containerd[1572]: time="2025-09-05T00:28:47.871411887Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 5 00:28:47.871443 containerd[1572]: time="2025-09-05T00:28:47.871426614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 5 00:28:47.871475 containerd[1572]: time="2025-09-05T00:28:47.871459977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 5 00:28:47.871502 containerd[1572]: time="2025-09-05T00:28:47.871477329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 5 00:28:47.871523 containerd[1572]: time="2025-09-05T00:28:47.871502106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 5 00:28:47.871523 containerd[1572]: time="2025-09-05T00:28:47.871517144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 5 00:28:47.871564 containerd[1572]: time="2025-09-05T00:28:47.871533665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 5 00:28:47.871564 containerd[1572]: time="2025-09-05T00:28:47.871547030Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 5 00:28:47.871564 containerd[1572]: time="2025-09-05T00:28:47.871560064Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 5 00:28:47.871696 containerd[1572]: time="2025-09-05T00:28:47.871672165Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 5 00:28:47.871729 containerd[1572]: time="2025-09-05T00:28:47.871699396Z" level=info msg="Start snapshots syncer"
Sep 5 00:28:47.871754 containerd[1572]: time="2025-09-05T00:28:47.871734532Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 5 00:28:47.872273 containerd[1572]: time="2025-09-05T00:28:47.872202379Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 5 00:28:47.872402 containerd[1572]: time="2025-09-05T00:28:47.872290154Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 5 00:28:47.874539 containerd[1572]: time="2025-09-05T00:28:47.874515256Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 5 00:28:47.874708 containerd[1572]: time="2025-09-05T00:28:47.874685646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 5 00:28:47.874737 containerd[1572]: time="2025-09-05T00:28:47.874711414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 5 00:28:47.874737 containerd[1572]: time="2025-09-05T00:28:47.874725671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 5 00:28:47.874788 containerd[1572]: time="2025-09-05T00:28:47.874737443Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 5 00:28:47.874788 containerd[1572]: time="2025-09-05T00:28:47.874748524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 5 00:28:47.874788 containerd[1572]: time="2025-09-05T00:28:47.874762550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 5 00:28:47.874788 containerd[1572]: time="2025-09-05T00:28:47.874775545Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 5 00:28:47.874856 containerd[1572]: time="2025-09-05T00:28:47.874805190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 5 00:28:47.874856 containerd[1572]: time="2025-09-05T00:28:47.874815069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 5 00:28:47.874856 containerd[1572]: time="2025-09-05T00:28:47.874825809Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 5 00:28:47.874928 containerd[1572]: time="2025-09-05T00:28:47.874865153Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 00:28:47.874928 containerd[1572]: time="2025-09-05T00:28:47.874885441Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 00:28:47.874928 containerd[1572]: time="2025-09-05T00:28:47.874900098Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 00:28:47.874928 containerd[1572]: time="2025-09-05T00:28:47.874911920Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 00:28:47.874928 containerd[1572]: time="2025-09-05T00:28:47.874924073Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 5 00:28:47.875063 containerd[1572]: time="2025-09-05T00:28:47.874936296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 5 00:28:47.875063 containerd[1572]: time="2025-09-05T00:28:47.874955773Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 5 00:28:47.875063 containerd[1572]: time="2025-09-05T00:28:47.875017979Z" level=info msg="runtime interface created"
Sep 5 00:28:47.875063 containerd[1572]: time="2025-09-05T00:28:47.875044198Z" level=info msg="created NRI interface"
Sep 5 00:28:47.875133 containerd[1572]: time="2025-09-05T00:28:47.875075477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 5 00:28:47.875133 containerd[1572]: time="2025-09-05T00:28:47.875097118Z" level=info msg="Connect containerd service"
Sep 5 00:28:47.875177 containerd[1572]: time="2025-09-05T00:28:47.875131061Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 00:28:47.876349 containerd[1572]: time="2025-09-05T00:28:47.876312767Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 00:28:48.042043 containerd[1572]: time="2025-09-05T00:28:48.041941020Z" level=info msg="Start subscribing containerd event"
Sep 5 00:28:48.042194 containerd[1572]: time="2025-09-05T00:28:48.042124784Z" level=info msg="Start recovering state"
Sep 5 00:28:48.042217 containerd[1572]: time="2025-09-05T00:28:48.042179877Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 5 00:28:48.042832 containerd[1572]: time="2025-09-05T00:28:48.042255399Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 5 00:28:48.042832 containerd[1572]: time="2025-09-05T00:28:48.042312456Z" level=info msg="Start event monitor"
Sep 5 00:28:48.042832 containerd[1572]: time="2025-09-05T00:28:48.042332604Z" level=info msg="Start cni network conf syncer for default"
Sep 5 00:28:48.042832 containerd[1572]: time="2025-09-05T00:28:48.042344146Z" level=info msg="Start streaming server"
Sep 5 00:28:48.042832 containerd[1572]: time="2025-09-05T00:28:48.042363512Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 5 00:28:48.042832 containerd[1572]: time="2025-09-05T00:28:48.042373701Z" level=info msg="runtime interface starting up..."
Sep 5 00:28:48.042832 containerd[1572]: time="2025-09-05T00:28:48.042385082Z" level=info msg="starting plugins..."
Sep 5 00:28:48.042832 containerd[1572]: time="2025-09-05T00:28:48.042407244Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 5 00:28:48.042822 systemd[1]: Started containerd.service - containerd container runtime.
Sep 5 00:28:48.043411 containerd[1572]: time="2025-09-05T00:28:48.043376281Z" level=info msg="containerd successfully booted in 1.851523s"
Sep 5 00:28:49.212927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:28:49.214704 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 5 00:28:49.216105 systemd[1]: Startup finished in 3.316s (kernel) + 9.738s (initrd) + 8.362s (userspace) = 21.417s.
Sep 5 00:28:49.257760 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:28:49.855746 kubelet[1684]: E0905 00:28:49.855621 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:28:49.861042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:28:49.861259 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:28:49.861712 systemd[1]: kubelet.service: Consumed 1.897s CPU time, 266.9M memory peak.
Sep 5 00:28:56.522146 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:49858.service - OpenSSH per-connection server daemon (10.0.0.1:49858).
Sep 5 00:28:56.591321 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 49858 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:28:56.593145 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:28:56.599490 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 5 00:28:56.600706 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 5 00:28:56.608658 systemd-logind[1540]: New session 1 of user core.
Sep 5 00:28:56.630249 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 5 00:28:56.633630 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 5 00:28:56.651491 (systemd)[1702]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 5 00:28:56.653891 systemd-logind[1540]: New session c1 of user core.
Sep 5 00:28:56.801720 systemd[1702]: Queued start job for default target default.target.
Sep 5 00:28:56.808339 systemd[1702]: Created slice app.slice - User Application Slice.
Sep 5 00:28:56.808379 systemd[1702]: Reached target paths.target - Paths.
Sep 5 00:28:56.808454 systemd[1702]: Reached target timers.target - Timers.
Sep 5 00:28:56.810095 systemd[1702]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 5 00:28:56.822784 systemd[1702]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 5 00:28:56.822916 systemd[1702]: Reached target sockets.target - Sockets.
Sep 5 00:28:56.822956 systemd[1702]: Reached target basic.target - Basic System.
Sep 5 00:28:56.822996 systemd[1702]: Reached target default.target - Main User Target.
Sep 5 00:28:56.823053 systemd[1702]: Startup finished in 162ms.
Sep 5 00:28:56.823312 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 5 00:28:56.825121 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 5 00:28:56.895460 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:49864.service - OpenSSH per-connection server daemon (10.0.0.1:49864).
Sep 5 00:28:56.952519 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 49864 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:28:56.953955 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:28:56.959413 systemd-logind[1540]: New session 2 of user core.
Sep 5 00:28:56.969216 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 5 00:28:57.024011 sshd[1716]: Connection closed by 10.0.0.1 port 49864
Sep 5 00:28:57.024453 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
Sep 5 00:28:57.033753 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:49864.service: Deactivated successfully.
Sep 5 00:28:57.035724 systemd[1]: session-2.scope: Deactivated successfully.
Sep 5 00:28:57.036463 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit.
Sep 5 00:28:57.039496 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:49878.service - OpenSSH per-connection server daemon (10.0.0.1:49878).
Sep 5 00:28:57.040108 systemd-logind[1540]: Removed session 2.
Sep 5 00:28:57.095511 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 49878 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:28:57.096837 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:28:57.101559 systemd-logind[1540]: New session 3 of user core.
Sep 5 00:28:57.111173 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 5 00:28:57.161022 sshd[1725]: Connection closed by 10.0.0.1 port 49878
Sep 5 00:28:57.161387 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
Sep 5 00:28:57.174646 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:49878.service: Deactivated successfully.
Sep 5 00:28:57.176447 systemd[1]: session-3.scope: Deactivated successfully.
Sep 5 00:28:57.177199 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit.
Sep 5 00:28:57.180512 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:49888.service - OpenSSH per-connection server daemon (10.0.0.1:49888).
Sep 5 00:28:57.181009 systemd-logind[1540]: Removed session 3.
Sep 5 00:28:57.238277 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 49888 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:28:57.239583 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:28:57.243953 systemd-logind[1540]: New session 4 of user core.
Sep 5 00:28:57.257170 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 5 00:28:57.312667 sshd[1734]: Connection closed by 10.0.0.1 port 49888
Sep 5 00:28:57.313054 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Sep 5 00:28:57.323740 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:49888.service: Deactivated successfully.
Sep 5 00:28:57.325664 systemd[1]: session-4.scope: Deactivated successfully.
Sep 5 00:28:57.326589 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit.
Sep 5 00:28:57.329828 systemd[1]: Started sshd@5-10.0.0.38:22-10.0.0.1:49904.service - OpenSSH per-connection server daemon (10.0.0.1:49904).
Sep 5 00:28:57.330602 systemd-logind[1540]: Removed session 4.
Sep 5 00:28:57.379853 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 49904 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:28:57.381790 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:28:57.387396 systemd-logind[1540]: New session 5 of user core.
Sep 5 00:28:57.401372 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 5 00:28:57.461070 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 5 00:28:57.461416 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:28:57.482588 sudo[1744]: pam_unix(sudo:session): session closed for user root
Sep 5 00:28:57.484474 sshd[1743]: Connection closed by 10.0.0.1 port 49904
Sep 5 00:28:57.484914 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Sep 5 00:28:57.498690 systemd[1]: sshd@5-10.0.0.38:22-10.0.0.1:49904.service: Deactivated successfully.
Sep 5 00:28:57.500478 systemd[1]: session-5.scope: Deactivated successfully.
Sep 5 00:28:57.501295 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit.
Sep 5 00:28:57.504085 systemd[1]: Started sshd@6-10.0.0.38:22-10.0.0.1:49916.service - OpenSSH per-connection server daemon (10.0.0.1:49916).
Sep 5 00:28:57.504722 systemd-logind[1540]: Removed session 5.
Sep 5 00:28:57.556631 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 49916 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:28:57.558246 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:28:57.563012 systemd-logind[1540]: New session 6 of user core.
Sep 5 00:28:57.579183 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 5 00:28:57.634890 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 5 00:28:57.635222 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:28:57.642830 sudo[1755]: pam_unix(sudo:session): session closed for user root
Sep 5 00:28:57.649996 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 5 00:28:57.650419 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:28:57.661378 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 00:28:57.719729 augenrules[1777]: No rules
Sep 5 00:28:57.721692 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 00:28:57.722065 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 00:28:57.723678 sudo[1754]: pam_unix(sudo:session): session closed for user root
Sep 5 00:28:57.725444 sshd[1753]: Connection closed by 10.0.0.1 port 49916
Sep 5 00:28:57.725826 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Sep 5 00:28:57.736064 systemd[1]: sshd@6-10.0.0.38:22-10.0.0.1:49916.service: Deactivated successfully.
Sep 5 00:28:57.738324 systemd[1]: session-6.scope: Deactivated successfully.
Sep 5 00:28:57.739283 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit.
Sep 5 00:28:57.742400 systemd[1]: Started sshd@7-10.0.0.38:22-10.0.0.1:49920.service - OpenSSH per-connection server daemon (10.0.0.1:49920).
Sep 5 00:28:57.743613 systemd-logind[1540]: Removed session 6.
Sep 5 00:28:57.793440 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 49920 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:28:57.794758 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:28:57.799491 systemd-logind[1540]: New session 7 of user core.
Sep 5 00:28:57.813192 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 5 00:28:57.867668 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 5 00:28:57.868009 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:28:58.195674 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 5 00:28:58.221685 (dockerd)[1811]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 5 00:28:59.538904 dockerd[1811]: time="2025-09-05T00:28:59.538788663Z" level=info msg="Starting up"
Sep 5 00:28:59.540332 dockerd[1811]: time="2025-09-05T00:28:59.540244864Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 5 00:28:59.564929 dockerd[1811]: time="2025-09-05T00:28:59.564864083Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 5 00:28:59.651171 dockerd[1811]: time="2025-09-05T00:28:59.651108843Z" level=info msg="Loading containers: start."
Sep 5 00:28:59.663093 kernel: Initializing XFRM netlink socket
Sep 5 00:28:59.862435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:28:59.864479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:29:00.153399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:29:00.158323 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:29:00.238077 kubelet[1968]: E0905 00:29:00.238006 1968 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:29:00.246176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:29:00.246421 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:29:00.246803 systemd[1]: kubelet.service: Consumed 340ms CPU time, 111.2M memory peak.
Sep 5 00:29:00.275044 systemd-networkd[1496]: docker0: Link UP
Sep 5 00:29:00.282241 dockerd[1811]: time="2025-09-05T00:29:00.281980142Z" level=info msg="Loading containers: done."
Sep 5 00:29:00.415661 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4247177706-merged.mount: Deactivated successfully.
Sep 5 00:29:00.418408 dockerd[1811]: time="2025-09-05T00:29:00.418351978Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 5 00:29:00.418555 dockerd[1811]: time="2025-09-05T00:29:00.418523389Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 5 00:29:00.418689 dockerd[1811]: time="2025-09-05T00:29:00.418660547Z" level=info msg="Initializing buildkit"
Sep 5 00:29:00.456406 dockerd[1811]: time="2025-09-05T00:29:00.456349527Z" level=info msg="Completed buildkit initialization"
Sep 5 00:29:00.461530 dockerd[1811]: time="2025-09-05T00:29:00.461458127Z" level=info msg="Daemon has completed initialization"
Sep 5 00:29:00.461708 dockerd[1811]: time="2025-09-05T00:29:00.461548978Z" level=info msg="API listen on /run/docker.sock"
Sep 5 00:29:00.461825 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 5 00:29:01.506469 containerd[1572]: time="2025-09-05T00:29:01.506385178Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 5 00:29:02.681778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397196708.mount: Deactivated successfully.
Sep 5 00:29:04.096245 containerd[1572]: time="2025-09-05T00:29:04.096158969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:04.096978 containerd[1572]: time="2025-09-05T00:29:04.096905990Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664"
Sep 5 00:29:04.098166 containerd[1572]: time="2025-09-05T00:29:04.098107573Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:04.100948 containerd[1572]: time="2025-09-05T00:29:04.100860566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:04.101973 containerd[1572]: time="2025-09-05T00:29:04.101926866Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 2.595469723s"
Sep 5 00:29:04.101973 containerd[1572]: time="2025-09-05T00:29:04.101967562Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\""
Sep 5 00:29:04.103107 containerd[1572]: time="2025-09-05T00:29:04.103069889Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 5 00:29:05.544662 containerd[1572]: time="2025-09-05T00:29:05.544584814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:05.545512 containerd[1572]: time="2025-09-05T00:29:05.545441761Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066"
Sep 5 00:29:05.546513 containerd[1572]: time="2025-09-05T00:29:05.546476812Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:05.549570 containerd[1572]: time="2025-09-05T00:29:05.549536691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:05.550606 containerd[1572]: time="2025-09-05T00:29:05.550580739Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.447470805s"
Sep 5 00:29:05.550655 containerd[1572]: time="2025-09-05T00:29:05.550607208Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\""
Sep 5 00:29:05.551092 containerd[1572]: time="2025-09-05T00:29:05.551060989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 5 00:29:07.695243 containerd[1572]: time="2025-09-05T00:29:07.695163787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:07.750851 containerd[1572]: time="2025-09-05T00:29:07.750744160Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911"
Sep 5 00:29:07.782521 containerd[1572]: time="2025-09-05T00:29:07.782410196Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:07.785627 containerd[1572]: time="2025-09-05T00:29:07.785585100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:07.786414 containerd[1572]: time="2025-09-05T00:29:07.786380111Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 2.235292502s"
Sep 5 00:29:07.786414 containerd[1572]: time="2025-09-05T00:29:07.786415757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\""
Sep 5 00:29:07.786929 containerd[1572]: time="2025-09-05T00:29:07.786854941Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 5 00:29:09.122311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133187436.mount: Deactivated successfully.
Sep 5 00:29:10.219624 containerd[1572]: time="2025-09-05T00:29:10.219526709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:10.238868 containerd[1572]: time="2025-09-05T00:29:10.238791658Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626"
Sep 5 00:29:10.269346 containerd[1572]: time="2025-09-05T00:29:10.269205094Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:10.293295 containerd[1572]: time="2025-09-05T00:29:10.293204772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:10.293854 containerd[1572]: time="2025-09-05T00:29:10.293799657Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.506914119s"
Sep 5 00:29:10.293854 containerd[1572]: time="2025-09-05T00:29:10.293851635Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\""
Sep 5 00:29:10.294639 containerd[1572]: time="2025-09-05T00:29:10.294609246Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 5 00:29:10.362184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 5 00:29:10.364397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:29:10.602883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:29:10.617359 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:29:10.809800 kubelet[2126]: E0905 00:29:10.809718 2126 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:29:10.814481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:29:10.814716 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:29:10.815133 systemd[1]: kubelet.service: Consumed 382ms CPU time, 112.1M memory peak.
Sep 5 00:29:11.470709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292909398.mount: Deactivated successfully.
Sep 5 00:29:13.158589 containerd[1572]: time="2025-09-05T00:29:13.158510222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:13.244800 containerd[1572]: time="2025-09-05T00:29:13.244710609Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 5 00:29:13.365503 containerd[1572]: time="2025-09-05T00:29:13.365445750Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:13.901384 containerd[1572]: time="2025-09-05T00:29:13.901297123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:29:13.902552 containerd[1572]: time="2025-09-05T00:29:13.902509266Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.607868221s"
Sep 5 00:29:13.902618 containerd[1572]: time="2025-09-05T00:29:13.902559761Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 5 00:29:13.903204 containerd[1572]: time="2025-09-05T00:29:13.903134398Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 5 00:29:16.832908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130964222.mount: Deactivated successfully.
Sep 5 00:29:16.873998 containerd[1572]: time="2025-09-05T00:29:16.873907368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:29:16.876685 containerd[1572]: time="2025-09-05T00:29:16.876647727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:29:16.878153 containerd[1572]: time="2025-09-05T00:29:16.878073251Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:29:16.880429 containerd[1572]: time="2025-09-05T00:29:16.880373645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:29:16.880930 containerd[1572]: time="2025-09-05T00:29:16.880896696Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.977732652s" Sep 5 00:29:16.880964 containerd[1572]: time="2025-09-05T00:29:16.880929086Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:29:16.881520 containerd[1572]: time="2025-09-05T00:29:16.881476232Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 5 00:29:17.880708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3023306605.mount: Deactivated 
successfully. Sep 5 00:29:20.078808 containerd[1572]: time="2025-09-05T00:29:20.078715833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:20.079596 containerd[1572]: time="2025-09-05T00:29:20.079540745Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 5 00:29:20.080755 containerd[1572]: time="2025-09-05T00:29:20.080692324Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:20.084059 containerd[1572]: time="2025-09-05T00:29:20.084003784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:20.085680 containerd[1572]: time="2025-09-05T00:29:20.085627827Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.204115467s" Sep 5 00:29:20.085680 containerd[1572]: time="2025-09-05T00:29:20.085663727Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 5 00:29:20.862243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 5 00:29:20.864298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:29:21.089391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 00:29:21.107481 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:29:21.189987 kubelet[2278]: E0905 00:29:21.189815 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:29:21.194861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:29:21.195122 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:29:21.195595 systemd[1]: kubelet.service: Consumed 271ms CPU time, 109.3M memory peak. Sep 5 00:29:23.127327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:29:23.127502 systemd[1]: kubelet.service: Consumed 271ms CPU time, 109.3M memory peak. Sep 5 00:29:23.129978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:29:23.159452 systemd[1]: Reload requested from client PID 2293 ('systemctl') (unit session-7.scope)... Sep 5 00:29:23.159478 systemd[1]: Reloading... Sep 5 00:29:23.250084 zram_generator::config[2335]: No configuration found. Sep 5 00:29:23.744081 systemd[1]: Reloading finished in 584 ms. Sep 5 00:29:23.811143 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:29:23.811263 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:29:23.811648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:29:23.811700 systemd[1]: kubelet.service: Consumed 159ms CPU time, 98.3M memory peak. Sep 5 00:29:23.813478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:29:23.998048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 00:29:24.002365 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:29:24.052804 kubelet[2383]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:29:24.052804 kubelet[2383]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:29:24.052804 kubelet[2383]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:29:24.053237 kubelet[2383]: I0905 00:29:24.052842 2383 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:29:24.545952 kubelet[2383]: I0905 00:29:24.545902 2383 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:29:24.546192 kubelet[2383]: I0905 00:29:24.546143 2383 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:29:24.546595 kubelet[2383]: I0905 00:29:24.546569 2383 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:29:24.578926 kubelet[2383]: E0905 00:29:24.578863 2383 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 00:29:24.579962 kubelet[2383]: I0905 00:29:24.579910 2383 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:29:24.586845 kubelet[2383]: I0905 00:29:24.586817 2383 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 00:29:24.594840 kubelet[2383]: I0905 00:29:24.594797 2383 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:29:24.595200 kubelet[2383]: I0905 00:29:24.595160 2383 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:29:24.595378 kubelet[2383]: I0905 00:29:24.595185 2383 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerR
econcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:29:24.595571 kubelet[2383]: I0905 00:29:24.595385 2383 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:29:24.595571 kubelet[2383]: I0905 00:29:24.595395 2383 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:29:24.595637 kubelet[2383]: I0905 00:29:24.595598 2383 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:29:24.598470 kubelet[2383]: I0905 00:29:24.598436 2383 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:29:24.598470 kubelet[2383]: I0905 00:29:24.598459 2383 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:29:24.598589 kubelet[2383]: I0905 00:29:24.598495 2383 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:29:24.600967 kubelet[2383]: I0905 00:29:24.600947 2383 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:29:24.635224 kubelet[2383]: E0905 00:29:24.635148 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:29:24.642653 kubelet[2383]: E0905 00:29:24.642564 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:29:24.646125 kubelet[2383]: I0905 00:29:24.646079 2383 
kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 00:29:24.646903 kubelet[2383]: I0905 00:29:24.646878 2383 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:29:24.648180 kubelet[2383]: W0905 00:29:24.648143 2383 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:29:24.653540 kubelet[2383]: I0905 00:29:24.653498 2383 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:29:24.653599 kubelet[2383]: I0905 00:29:24.653579 2383 server.go:1289] "Started kubelet" Sep 5 00:29:24.654580 kubelet[2383]: I0905 00:29:24.653934 2383 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:29:24.654580 kubelet[2383]: I0905 00:29:24.654280 2383 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:29:24.654718 kubelet[2383]: I0905 00:29:24.654677 2383 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:29:24.656738 kubelet[2383]: I0905 00:29:24.656215 2383 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:29:24.656738 kubelet[2383]: I0905 00:29:24.656384 2383 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:29:24.657385 kubelet[2383]: I0905 00:29:24.657367 2383 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:29:24.659010 kubelet[2383]: E0905 00:29:24.657230 2383 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623b71d11c0a97 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:29:24.653533847 +0000 UTC m=+0.644340264,LastTimestamp:2025-09-05 00:29:24.653533847 +0000 UTC m=+0.644340264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:29:24.659626 kubelet[2383]: E0905 00:29:24.659507 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:29:24.659626 kubelet[2383]: I0905 00:29:24.659609 2383 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:29:24.659828 kubelet[2383]: I0905 00:29:24.659783 2383 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:29:24.659862 kubelet[2383]: I0905 00:29:24.659844 2383 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:29:24.660195 kubelet[2383]: E0905 00:29:24.660164 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:29:24.660486 kubelet[2383]: E0905 00:29:24.660425 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="200ms" Sep 5 00:29:24.661146 kubelet[2383]: E0905 00:29:24.660849 2383 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:29:24.661146 kubelet[2383]: I0905 00:29:24.661084 2383 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:29:24.662221 kubelet[2383]: I0905 00:29:24.662185 2383 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:29:24.662221 kubelet[2383]: I0905 00:29:24.662210 2383 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:29:24.686382 kubelet[2383]: I0905 00:29:24.686333 2383 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:29:24.686382 kubelet[2383]: I0905 00:29:24.686366 2383 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:29:24.686382 kubelet[2383]: I0905 00:29:24.686390 2383 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:29:24.687916 kubelet[2383]: I0905 00:29:24.687862 2383 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:29:24.689437 kubelet[2383]: I0905 00:29:24.689393 2383 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:29:24.689437 kubelet[2383]: I0905 00:29:24.689441 2383 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:29:24.689511 kubelet[2383]: I0905 00:29:24.689468 2383 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 00:29:24.689511 kubelet[2383]: I0905 00:29:24.689481 2383 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:29:24.689568 kubelet[2383]: E0905 00:29:24.689548 2383 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:29:24.691072 kubelet[2383]: E0905 00:29:24.690996 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:29:24.760278 kubelet[2383]: E0905 00:29:24.760202 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:29:24.790597 kubelet[2383]: E0905 00:29:24.790524 2383 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:29:24.860892 kubelet[2383]: E0905 00:29:24.860749 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:29:24.861279 kubelet[2383]: E0905 00:29:24.861229 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="400ms" Sep 5 00:29:24.961998 kubelet[2383]: E0905 00:29:24.961910 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:29:24.991229 kubelet[2383]: E0905 00:29:24.991148 2383 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:29:25.062663 kubelet[2383]: E0905 00:29:25.062588 2383 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:29:25.162773 kubelet[2383]: E0905 00:29:25.162711 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:29:25.262810 kubelet[2383]: E0905 00:29:25.262749 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="800ms" Sep 5 00:29:25.262810 kubelet[2383]: E0905 00:29:25.262810 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:29:25.363203 kubelet[2383]: E0905 00:29:25.363123 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:29:25.391529 kubelet[2383]: E0905 00:29:25.391415 2383 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:29:25.464153 kubelet[2383]: E0905 00:29:25.463949 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:29:25.499867 kubelet[2383]: E0905 00:29:25.499796 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:29:25.503616 kubelet[2383]: E0905 00:29:25.503584 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:29:25.564559 kubelet[2383]: E0905 00:29:25.564452 2383 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:29:25.576725 kubelet[2383]: I0905 00:29:25.576531 2383 policy_none.go:49] "None policy: Start" Sep 5 00:29:25.576725 kubelet[2383]: I0905 00:29:25.576603 2383 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:29:25.576725 kubelet[2383]: I0905 00:29:25.576626 2383 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:29:25.589891 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:29:25.605843 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:29:25.610409 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:29:25.624825 kubelet[2383]: E0905 00:29:25.624547 2383 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:29:25.624997 kubelet[2383]: I0905 00:29:25.624987 2383 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:29:25.625369 kubelet[2383]: I0905 00:29:25.625002 2383 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:29:25.625426 kubelet[2383]: I0905 00:29:25.625389 2383 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:29:25.626501 kubelet[2383]: E0905 00:29:25.626449 2383 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 00:29:25.626501 kubelet[2383]: E0905 00:29:25.626509 2383 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:29:25.727429 kubelet[2383]: I0905 00:29:25.727269 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:29:25.727773 kubelet[2383]: E0905 00:29:25.727744 2383 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Sep 5 00:29:25.930103 kubelet[2383]: I0905 00:29:25.930061 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:29:25.930547 kubelet[2383]: E0905 00:29:25.930503 2383 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Sep 5 00:29:25.982454 kubelet[2383]: E0905 00:29:25.982237 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:29:26.063721 kubelet[2383]: E0905 00:29:26.063659 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="1.6s" Sep 5 00:29:26.204937 systemd[1]: Created slice kubepods-burstable-pod6c5969723598dccd784f3bd3d2919bce.slice - libcontainer container kubepods-burstable-pod6c5969723598dccd784f3bd3d2919bce.slice. 
Sep 5 00:29:26.224291 kubelet[2383]: E0905 00:29:26.224264 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:29:26.228104 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 5 00:29:26.230327 kubelet[2383]: E0905 00:29:26.230305 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:29:26.232463 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 5 00:29:26.234677 kubelet[2383]: E0905 00:29:26.234634 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:29:26.238185 kubelet[2383]: E0905 00:29:26.238133 2383 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:29:26.269010 kubelet[2383]: I0905 00:29:26.268909 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:29:26.269010 kubelet[2383]: I0905 00:29:26.268988 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:29:26.269010 kubelet[2383]: I0905 00:29:26.269019 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c5969723598dccd784f3bd3d2919bce-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c5969723598dccd784f3bd3d2919bce\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:29:26.269299 kubelet[2383]: I0905 00:29:26.269072 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:29:26.269299 kubelet[2383]: I0905 00:29:26.269120 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:29:26.269299 kubelet[2383]: I0905 00:29:26.269160 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:29:26.269299 kubelet[2383]: I0905 00:29:26.269190 2383 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 00:29:26.269299 kubelet[2383]: I0905 00:29:26.269209 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c5969723598dccd784f3bd3d2919bce-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c5969723598dccd784f3bd3d2919bce\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:26.269440 kubelet[2383]: I0905 00:29:26.269226 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c5969723598dccd784f3bd3d2919bce-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c5969723598dccd784f3bd3d2919bce\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:26.332683 kubelet[2383]: I0905 00:29:26.332610 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 00:29:26.333207 kubelet[2383]: E0905 00:29:26.333149 2383 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost"
Sep 5 00:29:26.525492 kubelet[2383]: E0905 00:29:26.525347 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:26.526242 containerd[1572]: time="2025-09-05T00:29:26.526206390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c5969723598dccd784f3bd3d2919bce,Namespace:kube-system,Attempt:0,}"
Sep 5 00:29:26.531525 kubelet[2383]: E0905 00:29:26.531487 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:26.532255 containerd[1572]: time="2025-09-05T00:29:26.532190844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}"
Sep 5 00:29:26.535542 kubelet[2383]: E0905 00:29:26.535517 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:26.536003 containerd[1572]: time="2025-09-05T00:29:26.535906048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}"
Sep 5 00:29:26.579680 containerd[1572]: time="2025-09-05T00:29:26.579609792Z" level=info msg="connecting to shim 094fd6e6a03250c89f65b82f9bc642e6e0d162c00f81f3785d9ed293ca0032e3" address="unix:///run/containerd/s/da8a267c450ca77ae079fac220d30bda19b56adb3b2e1f2ef987dd687fd9c928" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:29:26.580675 containerd[1572]: time="2025-09-05T00:29:26.580629533Z" level=info msg="connecting to shim 38a715ecefd9fe4c9f04ef37f1942cb584deac44e7fd4ef0eef0f5b5c3dd5d86" address="unix:///run/containerd/s/35cf405f78fa7ec6b5ef67cb3822e07a703ba53d8e1774e3a85f1c0fa696ed6c" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:29:26.593252 containerd[1572]: time="2025-09-05T00:29:26.593167376Z" level=info msg="connecting to shim 6af072713cf57a9fd2e3c42b4342790cdfeb5ef410e4ebb92fe654b15ed4f928" address="unix:///run/containerd/s/fe05d31ba8fbe177a63f09f49c3b5602fb4b7e126aba90f28a2e553aa9acf309" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:29:26.628165 systemd[1]: Started cri-containerd-094fd6e6a03250c89f65b82f9bc642e6e0d162c00f81f3785d9ed293ca0032e3.scope - libcontainer container 094fd6e6a03250c89f65b82f9bc642e6e0d162c00f81f3785d9ed293ca0032e3.
Sep 5 00:29:26.633438 kubelet[2383]: E0905 00:29:26.633401 2383 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 5 00:29:26.644190 systemd[1]: Started cri-containerd-38a715ecefd9fe4c9f04ef37f1942cb584deac44e7fd4ef0eef0f5b5c3dd5d86.scope - libcontainer container 38a715ecefd9fe4c9f04ef37f1942cb584deac44e7fd4ef0eef0f5b5c3dd5d86.
Sep 5 00:29:26.659303 systemd[1]: Started cri-containerd-6af072713cf57a9fd2e3c42b4342790cdfeb5ef410e4ebb92fe654b15ed4f928.scope - libcontainer container 6af072713cf57a9fd2e3c42b4342790cdfeb5ef410e4ebb92fe654b15ed4f928.
Sep 5 00:29:26.738063 containerd[1572]: time="2025-09-05T00:29:26.723421695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c5969723598dccd784f3bd3d2919bce,Namespace:kube-system,Attempt:0,} returns sandbox id \"094fd6e6a03250c89f65b82f9bc642e6e0d162c00f81f3785d9ed293ca0032e3\""
Sep 5 00:29:26.738063 containerd[1572]: time="2025-09-05T00:29:26.734191311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6af072713cf57a9fd2e3c42b4342790cdfeb5ef410e4ebb92fe654b15ed4f928\""
Sep 5 00:29:26.738303 kubelet[2383]: E0905 00:29:26.724427 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:26.738303 kubelet[2383]: E0905 00:29:26.734901 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:26.742212 containerd[1572]: time="2025-09-05T00:29:26.742140426Z" level=info msg="CreateContainer within sandbox \"094fd6e6a03250c89f65b82f9bc642e6e0d162c00f81f3785d9ed293ca0032e3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 5 00:29:26.745188 containerd[1572]: time="2025-09-05T00:29:26.745143704Z" level=info msg="CreateContainer within sandbox \"6af072713cf57a9fd2e3c42b4342790cdfeb5ef410e4ebb92fe654b15ed4f928\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 5 00:29:26.750440 containerd[1572]: time="2025-09-05T00:29:26.750412367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"38a715ecefd9fe4c9f04ef37f1942cb584deac44e7fd4ef0eef0f5b5c3dd5d86\""
Sep 5 00:29:26.751200 kubelet[2383]: E0905 00:29:26.751165 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:26.756656 containerd[1572]: time="2025-09-05T00:29:26.756625677Z" level=info msg="CreateContainer within sandbox \"38a715ecefd9fe4c9f04ef37f1942cb584deac44e7fd4ef0eef0f5b5c3dd5d86\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 5 00:29:26.759963 containerd[1572]: time="2025-09-05T00:29:26.759927864Z" level=info msg="Container 157343772b6baafd43f0fe97e2dfa032cef19ccf96b85aeb05210c40fdd361b6: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:29:26.762144 containerd[1572]: time="2025-09-05T00:29:26.762110430Z" level=info msg="Container 7457f90a8e34190ff9b3b72de62c58c995b953c7071e28fd97a63d4af9dce0c4: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:29:26.771452 containerd[1572]: time="2025-09-05T00:29:26.771418222Z" level=info msg="CreateContainer within sandbox \"094fd6e6a03250c89f65b82f9bc642e6e0d162c00f81f3785d9ed293ca0032e3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"157343772b6baafd43f0fe97e2dfa032cef19ccf96b85aeb05210c40fdd361b6\""
Sep 5 00:29:26.771928 containerd[1572]: time="2025-09-05T00:29:26.771901462Z" level=info msg="Container 759ef5b9a9b5a712531cdbb3dca0ccef68cd1a2e9e9027f954853b4d565ea56a: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:29:26.773197 containerd[1572]: time="2025-09-05T00:29:26.773167823Z" level=info msg="StartContainer for \"157343772b6baafd43f0fe97e2dfa032cef19ccf96b85aeb05210c40fdd361b6\""
Sep 5 00:29:26.774450 containerd[1572]: time="2025-09-05T00:29:26.774419777Z" level=info msg="connecting to shim 157343772b6baafd43f0fe97e2dfa032cef19ccf96b85aeb05210c40fdd361b6" address="unix:///run/containerd/s/da8a267c450ca77ae079fac220d30bda19b56adb3b2e1f2ef987dd687fd9c928" protocol=ttrpc version=3
Sep 5 00:29:26.776738 containerd[1572]: time="2025-09-05T00:29:26.776524034Z" level=info msg="CreateContainer within sandbox \"6af072713cf57a9fd2e3c42b4342790cdfeb5ef410e4ebb92fe654b15ed4f928\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7457f90a8e34190ff9b3b72de62c58c995b953c7071e28fd97a63d4af9dce0c4\""
Sep 5 00:29:26.777013 containerd[1572]: time="2025-09-05T00:29:26.776988588Z" level=info msg="StartContainer for \"7457f90a8e34190ff9b3b72de62c58c995b953c7071e28fd97a63d4af9dce0c4\""
Sep 5 00:29:26.778618 containerd[1572]: time="2025-09-05T00:29:26.778585820Z" level=info msg="connecting to shim 7457f90a8e34190ff9b3b72de62c58c995b953c7071e28fd97a63d4af9dce0c4" address="unix:///run/containerd/s/fe05d31ba8fbe177a63f09f49c3b5602fb4b7e126aba90f28a2e553aa9acf309" protocol=ttrpc version=3
Sep 5 00:29:26.784463 containerd[1572]: time="2025-09-05T00:29:26.784426751Z" level=info msg="CreateContainer within sandbox \"38a715ecefd9fe4c9f04ef37f1942cb584deac44e7fd4ef0eef0f5b5c3dd5d86\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"759ef5b9a9b5a712531cdbb3dca0ccef68cd1a2e9e9027f954853b4d565ea56a\""
Sep 5 00:29:26.787068 containerd[1572]: time="2025-09-05T00:29:26.785077360Z" level=info msg="StartContainer for \"759ef5b9a9b5a712531cdbb3dca0ccef68cd1a2e9e9027f954853b4d565ea56a\""
Sep 5 00:29:26.787819 containerd[1572]: time="2025-09-05T00:29:26.787797539Z" level=info msg="connecting to shim 759ef5b9a9b5a712531cdbb3dca0ccef68cd1a2e9e9027f954853b4d565ea56a" address="unix:///run/containerd/s/35cf405f78fa7ec6b5ef67cb3822e07a703ba53d8e1774e3a85f1c0fa696ed6c" protocol=ttrpc version=3
Sep 5 00:29:26.802234 systemd[1]: Started cri-containerd-157343772b6baafd43f0fe97e2dfa032cef19ccf96b85aeb05210c40fdd361b6.scope - libcontainer container 157343772b6baafd43f0fe97e2dfa032cef19ccf96b85aeb05210c40fdd361b6.
Sep 5 00:29:26.813200 systemd[1]: Started cri-containerd-7457f90a8e34190ff9b3b72de62c58c995b953c7071e28fd97a63d4af9dce0c4.scope - libcontainer container 7457f90a8e34190ff9b3b72de62c58c995b953c7071e28fd97a63d4af9dce0c4.
Sep 5 00:29:26.820171 systemd[1]: Started cri-containerd-759ef5b9a9b5a712531cdbb3dca0ccef68cd1a2e9e9027f954853b4d565ea56a.scope - libcontainer container 759ef5b9a9b5a712531cdbb3dca0ccef68cd1a2e9e9027f954853b4d565ea56a.
Sep 5 00:29:26.886213 containerd[1572]: time="2025-09-05T00:29:26.886153773Z" level=info msg="StartContainer for \"157343772b6baafd43f0fe97e2dfa032cef19ccf96b85aeb05210c40fdd361b6\" returns successfully"
Sep 5 00:29:26.888969 containerd[1572]: time="2025-09-05T00:29:26.888931462Z" level=info msg="StartContainer for \"7457f90a8e34190ff9b3b72de62c58c995b953c7071e28fd97a63d4af9dce0c4\" returns successfully"
Sep 5 00:29:26.905594 containerd[1572]: time="2025-09-05T00:29:26.905521029Z" level=info msg="StartContainer for \"759ef5b9a9b5a712531cdbb3dca0ccef68cd1a2e9e9027f954853b4d565ea56a\" returns successfully"
Sep 5 00:29:27.136085 kubelet[2383]: I0905 00:29:27.136021 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 00:29:27.709792 kubelet[2383]: E0905 00:29:27.709716 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 00:29:27.710008 kubelet[2383]: E0905 00:29:27.709988 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:27.715048 kubelet[2383]: E0905 00:29:27.714735 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 00:29:27.715048 kubelet[2383]: E0905 00:29:27.714912 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:27.720056 kubelet[2383]: E0905 00:29:27.719980 2383 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 00:29:27.720479 kubelet[2383]: E0905 00:29:27.720452 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:28.515985 kubelet[2383]: E0905 00:29:28.515920 2383 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 5 00:29:28.604510 kubelet[2383]: I0905 00:29:28.604427 2383 apiserver.go:52] "Watching apiserver"
Sep 5 00:29:28.650847 kubelet[2383]: I0905 00:29:28.650786 2383 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 5 00:29:28.650847 kubelet[2383]: E0905 00:29:28.650849 2383 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 5 00:29:28.660518 kubelet[2383]: I0905 00:29:28.660477 2383 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 5 00:29:28.660518 kubelet[2383]: I0905 00:29:28.660532 2383 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:28.667452 kubelet[2383]: E0905 00:29:28.667387 2383 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:28.667452 kubelet[2383]: I0905 00:29:28.667440 2383 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:29:28.670020 kubelet[2383]: E0905 00:29:28.669960 2383 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:29:28.670020 kubelet[2383]: I0905 00:29:28.670000 2383 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:29:28.673731 kubelet[2383]: E0905 00:29:28.673190 2383 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:29:28.719461 kubelet[2383]: I0905 00:29:28.719420 2383 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:29:28.719832 kubelet[2383]: I0905 00:29:28.719808 2383 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:28.722528 kubelet[2383]: E0905 00:29:28.722477 2383 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:28.722599 kubelet[2383]: E0905 00:29:28.722544 2383 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:29:28.722680 kubelet[2383]: E0905 00:29:28.722657 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:28.722757 kubelet[2383]: E0905 00:29:28.722738 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:29.800225 update_engine[1546]: I20250905 00:29:29.800094 1546 update_attempter.cc:509] Updating boot flags...
Sep 5 00:29:30.076130 kubelet[2383]: I0905 00:29:30.075617 2383 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:30.082414 kubelet[2383]: E0905 00:29:30.082360 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:30.722771 kubelet[2383]: E0905 00:29:30.722654 2383 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:30.986360 systemd[1]: Reload requested from client PID 2685 ('systemctl') (unit session-7.scope)...
Sep 5 00:29:30.986376 systemd[1]: Reloading...
Sep 5 00:29:31.077135 zram_generator::config[2728]: No configuration found.
Sep 5 00:29:31.499056 systemd[1]: Reloading finished in 512 ms.
Sep 5 00:29:31.535792 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:29:31.558683 systemd[1]: kubelet.service: Deactivated successfully.
Sep 5 00:29:31.559098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:29:31.559167 systemd[1]: kubelet.service: Consumed 1.366s CPU time, 131.4M memory peak.
Sep 5 00:29:31.561544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:29:31.796228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:29:31.807534 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 00:29:31.859499 kubelet[2773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:29:31.859499 kubelet[2773]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 5 00:29:31.859499 kubelet[2773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:29:31.860140 kubelet[2773]: I0905 00:29:31.859903 2773 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 00:29:31.868410 kubelet[2773]: I0905 00:29:31.868355 2773 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 5 00:29:31.868410 kubelet[2773]: I0905 00:29:31.868385 2773 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 00:29:31.868672 kubelet[2773]: I0905 00:29:31.868652 2773 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 5 00:29:31.869970 kubelet[2773]: I0905 00:29:31.869942 2773 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 5 00:29:31.872151 kubelet[2773]: I0905 00:29:31.872124 2773 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 00:29:31.879318 kubelet[2773]: I0905 00:29:31.879268 2773 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 5 00:29:31.885507 kubelet[2773]: I0905 00:29:31.885454 2773 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 00:29:31.885734 kubelet[2773]: I0905 00:29:31.885689 2773 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 00:29:31.885920 kubelet[2773]: I0905 00:29:31.885723 2773 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 00:29:31.885920 kubelet[2773]: I0905 00:29:31.885919 2773 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 00:29:31.886081 kubelet[2773]: I0905 00:29:31.885931 2773 container_manager_linux.go:303] "Creating device plugin manager"
Sep 5 00:29:31.886081 kubelet[2773]: I0905 00:29:31.885995 2773 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:29:31.886234 kubelet[2773]: I0905 00:29:31.886215 2773 kubelet.go:480] "Attempting to sync node with API server"
Sep 5 00:29:31.886234 kubelet[2773]: I0905 00:29:31.886231 2773 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 00:29:31.886294 kubelet[2773]: I0905 00:29:31.886255 2773 kubelet.go:386] "Adding apiserver pod source"
Sep 5 00:29:31.886294 kubelet[2773]: I0905 00:29:31.886274 2773 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 00:29:31.887512 kubelet[2773]: I0905 00:29:31.887482 2773 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 5 00:29:31.888331 kubelet[2773]: I0905 00:29:31.888310 2773 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 5 00:29:31.892107 kubelet[2773]: I0905 00:29:31.892084 2773 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 5 00:29:31.892581 kubelet[2773]: I0905 00:29:31.892138 2773 server.go:1289] "Started kubelet"
Sep 5 00:29:31.893587 kubelet[2773]: I0905 00:29:31.893562 2773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 00:29:31.895434 kubelet[2773]: I0905 00:29:31.895385 2773 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 00:29:31.896386 kubelet[2773]: I0905 00:29:31.896369 2773 server.go:317] "Adding debug handlers to kubelet server"
Sep 5 00:29:31.900110 kubelet[2773]: I0905 00:29:31.900002 2773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 00:29:31.900389 kubelet[2773]: I0905 00:29:31.900375 2773 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 00:29:31.901078 kubelet[2773]: I0905 00:29:31.900982 2773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 00:29:31.903889 kubelet[2773]: I0905 00:29:31.903816 2773 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 5 00:29:31.904179 kubelet[2773]: I0905 00:29:31.904156 2773 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 5 00:29:31.905334 kubelet[2773]: I0905 00:29:31.905298 2773 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 00:29:31.907045 kubelet[2773]: I0905 00:29:31.905740 2773 factory.go:223] Registration of the systemd container factory successfully
Sep 5 00:29:31.907045 kubelet[2773]: I0905 00:29:31.906016 2773 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 00:29:31.907045 kubelet[2773]: E0905 00:29:31.906556 2773 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 00:29:31.909779 kubelet[2773]: I0905 00:29:31.909746 2773 factory.go:223] Registration of the containerd container factory successfully
Sep 5 00:29:31.913160 kubelet[2773]: I0905 00:29:31.913113 2773 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 5 00:29:31.914932 kubelet[2773]: I0905 00:29:31.914888 2773 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 5 00:29:31.914932 kubelet[2773]: I0905 00:29:31.914922 2773 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 5 00:29:31.915095 kubelet[2773]: I0905 00:29:31.914953 2773 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 5 00:29:31.915095 kubelet[2773]: I0905 00:29:31.914966 2773 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 5 00:29:31.915095 kubelet[2773]: E0905 00:29:31.915018 2773 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 00:29:31.950754 kubelet[2773]: I0905 00:29:31.950711 2773 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 5 00:29:31.950754 kubelet[2773]: I0905 00:29:31.950728 2773 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 5 00:29:31.950754 kubelet[2773]: I0905 00:29:31.950748 2773 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:29:31.950976 kubelet[2773]: I0905 00:29:31.950909 2773 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 5 00:29:31.950976 kubelet[2773]: I0905 00:29:31.950923 2773 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 5 00:29:31.950976 kubelet[2773]: I0905 00:29:31.950940 2773 policy_none.go:49] "None policy: Start"
Sep 5 00:29:31.950976 kubelet[2773]: I0905 00:29:31.950949 2773 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 5 00:29:31.950976 kubelet[2773]: I0905 00:29:31.950960 2773 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 00:29:31.951138 kubelet[2773]: I0905 00:29:31.951061 2773 state_mem.go:75] "Updated machine memory state"
Sep 5 00:29:31.959775 kubelet[2773]: E0905 00:29:31.959727 2773 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 5 00:29:31.959956 kubelet[2773]: I0905 00:29:31.959933 2773 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 00:29:31.960013 kubelet[2773]: I0905 00:29:31.959948 2773 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 00:29:31.960227 kubelet[2773]: I0905 00:29:31.960200 2773 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 00:29:31.961656 kubelet[2773]: E0905 00:29:31.961633 2773 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 5 00:29:32.016523 kubelet[2773]: I0905 00:29:32.016479 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:32.016523 kubelet[2773]: I0905 00:29:32.016512 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:29:32.016750 kubelet[2773]: I0905 00:29:32.016709 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:29:32.067657 kubelet[2773]: I0905 00:29:32.067518 2773 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 00:29:32.106298 kubelet[2773]: I0905 00:29:32.106240 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c5969723598dccd784f3bd3d2919bce-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c5969723598dccd784f3bd3d2919bce\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:32.106298 kubelet[2773]: I0905 00:29:32.106281 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:29:32.106298 kubelet[2773]: I0905 00:29:32.106316 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:29:32.106550 kubelet[2773]: I0905 00:29:32.106334 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 00:29:32.106550 kubelet[2773]: I0905 00:29:32.106396 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c5969723598dccd784f3bd3d2919bce-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c5969723598dccd784f3bd3d2919bce\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:32.106550 kubelet[2773]: I0905 00:29:32.106442 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:29:32.106550 kubelet[2773]: I0905 00:29:32.106517 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:29:32.106661 kubelet[2773]: I0905 00:29:32.106575 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:29:32.106661 kubelet[2773]: I0905 00:29:32.106608 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c5969723598dccd784f3bd3d2919bce-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c5969723598dccd784f3bd3d2919bce\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:32.141638 kubelet[2773]: E0905 00:29:32.141531 2773 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:32.144584 kubelet[2773]: I0905 00:29:32.144542 2773 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 5 00:29:32.145154 kubelet[2773]: I0905 00:29:32.144646 2773 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 5 00:29:32.441228 kubelet[2773]: E0905 00:29:32.441174 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:32.441893 kubelet[2773]: E0905 00:29:32.441724 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:32.441893 kubelet[2773]: E0905 00:29:32.441810 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:32.887630 kubelet[2773]: I0905 00:29:32.887411 2773 apiserver.go:52] "Watching apiserver"
Sep 5 00:29:32.905045 kubelet[2773]: I0905 00:29:32.904992 2773 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 5 00:29:32.927053 kubelet[2773]: I0905 00:29:32.926915 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:32.929062 kubelet[2773]: I0905 00:29:32.927394 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:29:32.929176 kubelet[2773]: I0905 00:29:32.927018 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:29:32.932667 kubelet[2773]: E0905 00:29:32.932293 2773 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:29:32.932667 kubelet[2773]: E0905 00:29:32.932516 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:32.935343 kubelet[2773]: E0905 00:29:32.935171 2773 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:29:32.935519 kubelet[2773]: E0905 00:29:32.935422 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:32.935725 kubelet[2773]: E0905 00:29:32.935698 2773 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:29:32.935860 kubelet[2773]: E0905 00:29:32.935815 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:32.950271 kubelet[2773]: I0905 00:29:32.950210 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.950182988 podStartE2EDuration="950.182988ms" podCreationTimestamp="2025-09-05 00:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:29:32.949957982 +0000 UTC m=+1.137046204" watchObservedRunningTime="2025-09-05 00:29:32.950182988 +0000 UTC m=+1.137271210"
Sep 5 00:29:32.959933 kubelet[2773]: I0905 00:29:32.959880 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.959871135 podStartE2EDuration="2.959871135s" podCreationTimestamp="2025-09-05 00:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:29:32.959820359 +0000 UTC m=+1.146908581" watchObservedRunningTime="2025-09-05 00:29:32.959871135 +0000 UTC m=+1.146959357"
Sep 5 00:29:32.979058 kubelet[2773]: I0905 00:29:32.978966 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.978819309 podStartE2EDuration="978.819309ms" podCreationTimestamp="2025-09-05 00:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:29:32.967110082 +0000 UTC m=+1.154198304" watchObservedRunningTime="2025-09-05 00:29:32.978819309 +0000 UTC m=+1.165907541"
Sep 5 00:29:33.929678 kubelet[2773]: E0905 00:29:33.929582 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:33.930247 kubelet[2773]: E0905 00:29:33.929749 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:33.930247 kubelet[2773]: E0905 00:29:33.929770 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:34.930431 kubelet[2773]: E0905 00:29:34.930318 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:34.930913 kubelet[2773]: E0905 00:29:34.930545 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:29:36.980954 kubelet[2773]: I0905 00:29:36.980903 2773 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 5 00:29:36.981590 containerd[1572]: time="2025-09-05T00:29:36.981381758Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 5 00:29:36.981985 kubelet[2773]: I0905 00:29:36.981594 2773 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 5 00:29:37.926577 systemd[1]: Created slice kubepods-besteffort-pode0cd1d96_7e49_4c73_9d1c_66ee5061fdbb.slice - libcontainer container kubepods-besteffort-pode0cd1d96_7e49_4c73_9d1c_66ee5061fdbb.slice.
Sep 5 00:29:37.941831 kubelet[2773]: I0905 00:29:37.941791 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0cd1d96-7e49-4c73-9d1c-66ee5061fdbb-kube-proxy\") pod \"kube-proxy-d46j5\" (UID: \"e0cd1d96-7e49-4c73-9d1c-66ee5061fdbb\") " pod="kube-system/kube-proxy-d46j5" Sep 5 00:29:37.941831 kubelet[2773]: I0905 00:29:37.941823 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0cd1d96-7e49-4c73-9d1c-66ee5061fdbb-lib-modules\") pod \"kube-proxy-d46j5\" (UID: \"e0cd1d96-7e49-4c73-9d1c-66ee5061fdbb\") " pod="kube-system/kube-proxy-d46j5" Sep 5 00:29:37.941831 kubelet[2773]: I0905 00:29:37.941841 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkbf7\" (UniqueName: \"kubernetes.io/projected/e0cd1d96-7e49-4c73-9d1c-66ee5061fdbb-kube-api-access-bkbf7\") pod \"kube-proxy-d46j5\" (UID: \"e0cd1d96-7e49-4c73-9d1c-66ee5061fdbb\") " pod="kube-system/kube-proxy-d46j5" Sep 5 00:29:37.942073 kubelet[2773]: I0905 00:29:37.941860 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0cd1d96-7e49-4c73-9d1c-66ee5061fdbb-xtables-lock\") pod \"kube-proxy-d46j5\" (UID: \"e0cd1d96-7e49-4c73-9d1c-66ee5061fdbb\") " pod="kube-system/kube-proxy-d46j5" Sep 5 00:29:38.031943 systemd[1]: Created slice kubepods-besteffort-pod764cff06_8c93_4757_91fb_f6db69f63643.slice - libcontainer container kubepods-besteffort-pod764cff06_8c93_4757_91fb_f6db69f63643.slice. 
Sep 5 00:29:38.042632 kubelet[2773]: I0905 00:29:38.042579 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s929r\" (UniqueName: \"kubernetes.io/projected/764cff06-8c93-4757-91fb-f6db69f63643-kube-api-access-s929r\") pod \"tigera-operator-755d956888-crdt8\" (UID: \"764cff06-8c93-4757-91fb-f6db69f63643\") " pod="tigera-operator/tigera-operator-755d956888-crdt8" Sep 5 00:29:38.042632 kubelet[2773]: I0905 00:29:38.042637 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/764cff06-8c93-4757-91fb-f6db69f63643-var-lib-calico\") pod \"tigera-operator-755d956888-crdt8\" (UID: \"764cff06-8c93-4757-91fb-f6db69f63643\") " pod="tigera-operator/tigera-operator-755d956888-crdt8" Sep 5 00:29:38.250687 kubelet[2773]: E0905 00:29:38.250534 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:38.251479 containerd[1572]: time="2025-09-05T00:29:38.251383330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d46j5,Uid:e0cd1d96-7e49-4c73-9d1c-66ee5061fdbb,Namespace:kube-system,Attempt:0,}" Sep 5 00:29:38.276696 containerd[1572]: time="2025-09-05T00:29:38.276646451Z" level=info msg="connecting to shim b62c1c760c94d8f8954c56d11641daa55c870b4e25b32f2ff5498e0f72a36ae3" address="unix:///run/containerd/s/969609e99728ab736dffc9ed78bdbcf276416cc106f53e1a2ecfb4492c243453" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:29:38.302207 systemd[1]: Started cri-containerd-b62c1c760c94d8f8954c56d11641daa55c870b4e25b32f2ff5498e0f72a36ae3.scope - libcontainer container b62c1c760c94d8f8954c56d11641daa55c870b4e25b32f2ff5498e0f72a36ae3. 
Sep 5 00:29:38.331920 containerd[1572]: time="2025-09-05T00:29:38.331866619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d46j5,Uid:e0cd1d96-7e49-4c73-9d1c-66ee5061fdbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b62c1c760c94d8f8954c56d11641daa55c870b4e25b32f2ff5498e0f72a36ae3\"" Sep 5 00:29:38.332697 kubelet[2773]: E0905 00:29:38.332658 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:38.336391 containerd[1572]: time="2025-09-05T00:29:38.336353743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-crdt8,Uid:764cff06-8c93-4757-91fb-f6db69f63643,Namespace:tigera-operator,Attempt:0,}" Sep 5 00:29:38.338655 containerd[1572]: time="2025-09-05T00:29:38.338620053Z" level=info msg="CreateContainer within sandbox \"b62c1c760c94d8f8954c56d11641daa55c870b4e25b32f2ff5498e0f72a36ae3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:29:38.354467 containerd[1572]: time="2025-09-05T00:29:38.353304206Z" level=info msg="Container 95fd077b7a05b744a7a06a7e19a1f9e61573cb570d24e0f95b3982d9f1f5be63: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:29:38.363449 containerd[1572]: time="2025-09-05T00:29:38.363389782Z" level=info msg="CreateContainer within sandbox \"b62c1c760c94d8f8954c56d11641daa55c870b4e25b32f2ff5498e0f72a36ae3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95fd077b7a05b744a7a06a7e19a1f9e61573cb570d24e0f95b3982d9f1f5be63\"" Sep 5 00:29:38.363847 containerd[1572]: time="2025-09-05T00:29:38.363821288Z" level=info msg="connecting to shim 4679586326a021d328d3a086f53951327a4b5997f713ec913812bd7ba299d9ef" address="unix:///run/containerd/s/17aff413e68c2b3ccef1dc20b7f4aadd0bf15b6c192b6b329da0ce622a9b9ad3" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:29:38.364264 containerd[1572]: 
time="2025-09-05T00:29:38.364228817Z" level=info msg="StartContainer for \"95fd077b7a05b744a7a06a7e19a1f9e61573cb570d24e0f95b3982d9f1f5be63\"" Sep 5 00:29:38.366739 containerd[1572]: time="2025-09-05T00:29:38.366689625Z" level=info msg="connecting to shim 95fd077b7a05b744a7a06a7e19a1f9e61573cb570d24e0f95b3982d9f1f5be63" address="unix:///run/containerd/s/969609e99728ab736dffc9ed78bdbcf276416cc106f53e1a2ecfb4492c243453" protocol=ttrpc version=3 Sep 5 00:29:38.386188 systemd[1]: Started cri-containerd-95fd077b7a05b744a7a06a7e19a1f9e61573cb570d24e0f95b3982d9f1f5be63.scope - libcontainer container 95fd077b7a05b744a7a06a7e19a1f9e61573cb570d24e0f95b3982d9f1f5be63. Sep 5 00:29:38.395199 systemd[1]: Started cri-containerd-4679586326a021d328d3a086f53951327a4b5997f713ec913812bd7ba299d9ef.scope - libcontainer container 4679586326a021d328d3a086f53951327a4b5997f713ec913812bd7ba299d9ef. Sep 5 00:29:38.440488 containerd[1572]: time="2025-09-05T00:29:38.440443084Z" level=info msg="StartContainer for \"95fd077b7a05b744a7a06a7e19a1f9e61573cb570d24e0f95b3982d9f1f5be63\" returns successfully" Sep 5 00:29:38.455353 containerd[1572]: time="2025-09-05T00:29:38.455307477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-crdt8,Uid:764cff06-8c93-4757-91fb-f6db69f63643,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4679586326a021d328d3a086f53951327a4b5997f713ec913812bd7ba299d9ef\"" Sep 5 00:29:38.456983 containerd[1572]: time="2025-09-05T00:29:38.456960168Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 00:29:38.939093 kubelet[2773]: E0905 00:29:38.938632 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:38.947734 kubelet[2773]: I0905 00:29:38.947632 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d46j5" 
podStartSLOduration=1.947614526 podStartE2EDuration="1.947614526s" podCreationTimestamp="2025-09-05 00:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:29:38.947512163 +0000 UTC m=+7.134600385" watchObservedRunningTime="2025-09-05 00:29:38.947614526 +0000 UTC m=+7.134702748" Sep 5 00:29:39.898263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982549410.mount: Deactivated successfully. Sep 5 00:29:40.260114 containerd[1572]: time="2025-09-05T00:29:40.259892303Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:40.260970 containerd[1572]: time="2025-09-05T00:29:40.260927286Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 5 00:29:40.262595 containerd[1572]: time="2025-09-05T00:29:40.262512927Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:40.265111 containerd[1572]: time="2025-09-05T00:29:40.265070293Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:40.266002 containerd[1572]: time="2025-09-05T00:29:40.265938932Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.808948939s" Sep 5 00:29:40.266066 containerd[1572]: time="2025-09-05T00:29:40.266000669Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 5 00:29:40.272438 containerd[1572]: time="2025-09-05T00:29:40.272370899Z" level=info msg="CreateContainer within sandbox \"4679586326a021d328d3a086f53951327a4b5997f713ec913812bd7ba299d9ef\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:29:40.286457 containerd[1572]: time="2025-09-05T00:29:40.286388250Z" level=info msg="Container ec8497d1bbdcbc7bfae32c0e47f3d0cb175b3ab3c71a9faf5fe79523702910e7: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:29:40.293855 containerd[1572]: time="2025-09-05T00:29:40.293792441Z" level=info msg="CreateContainer within sandbox \"4679586326a021d328d3a086f53951327a4b5997f713ec913812bd7ba299d9ef\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ec8497d1bbdcbc7bfae32c0e47f3d0cb175b3ab3c71a9faf5fe79523702910e7\"" Sep 5 00:29:40.294550 containerd[1572]: time="2025-09-05T00:29:40.294495949Z" level=info msg="StartContainer for \"ec8497d1bbdcbc7bfae32c0e47f3d0cb175b3ab3c71a9faf5fe79523702910e7\"" Sep 5 00:29:40.295597 containerd[1572]: time="2025-09-05T00:29:40.295568673Z" level=info msg="connecting to shim ec8497d1bbdcbc7bfae32c0e47f3d0cb175b3ab3c71a9faf5fe79523702910e7" address="unix:///run/containerd/s/17aff413e68c2b3ccef1dc20b7f4aadd0bf15b6c192b6b329da0ce622a9b9ad3" protocol=ttrpc version=3 Sep 5 00:29:40.363308 systemd[1]: Started cri-containerd-ec8497d1bbdcbc7bfae32c0e47f3d0cb175b3ab3c71a9faf5fe79523702910e7.scope - libcontainer container ec8497d1bbdcbc7bfae32c0e47f3d0cb175b3ab3c71a9faf5fe79523702910e7. 
Sep 5 00:29:40.395536 containerd[1572]: time="2025-09-05T00:29:40.395479472Z" level=info msg="StartContainer for \"ec8497d1bbdcbc7bfae32c0e47f3d0cb175b3ab3c71a9faf5fe79523702910e7\" returns successfully" Sep 5 00:29:41.000750 kubelet[2773]: I0905 00:29:41.000650 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-crdt8" podStartSLOduration=1.190090315 podStartE2EDuration="3.0006317s" podCreationTimestamp="2025-09-05 00:29:38 +0000 UTC" firstStartedPulling="2025-09-05 00:29:38.456640744 +0000 UTC m=+6.643728966" lastFinishedPulling="2025-09-05 00:29:40.267182129 +0000 UTC m=+8.454270351" observedRunningTime="2025-09-05 00:29:41.000507967 +0000 UTC m=+9.187596370" watchObservedRunningTime="2025-09-05 00:29:41.0006317 +0000 UTC m=+9.187719922" Sep 5 00:29:41.046339 kubelet[2773]: E0905 00:29:41.046285 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:41.946718 kubelet[2773]: E0905 00:29:41.946657 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:44.439569 kubelet[2773]: E0905 00:29:44.439494 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:44.755938 kubelet[2773]: E0905 00:29:44.755459 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:44.951582 kubelet[2773]: E0905 00:29:44.951536 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Sep 5 00:29:46.041954 sudo[1790]: pam_unix(sudo:session): session closed for user root Sep 5 00:29:46.046182 sshd[1789]: Connection closed by 10.0.0.1 port 49920 Sep 5 00:29:46.050168 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Sep 5 00:29:46.058629 systemd[1]: sshd@7-10.0.0.38:22-10.0.0.1:49920.service: Deactivated successfully. Sep 5 00:29:46.062967 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:29:46.063549 systemd[1]: session-7.scope: Consumed 5.700s CPU time, 224.9M memory peak. Sep 5 00:29:46.068358 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:29:46.070379 systemd-logind[1540]: Removed session 7. Sep 5 00:29:48.601131 systemd[1]: Created slice kubepods-besteffort-pod8e008863_6ce6_452b_9f35_0f00c5ac3e37.slice - libcontainer container kubepods-besteffort-pod8e008863_6ce6_452b_9f35_0f00c5ac3e37.slice. Sep 5 00:29:48.609196 kubelet[2773]: I0905 00:29:48.609135 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sfdd\" (UniqueName: \"kubernetes.io/projected/8e008863-6ce6-452b-9f35-0f00c5ac3e37-kube-api-access-2sfdd\") pod \"calico-typha-685d5975b4-snkkk\" (UID: \"8e008863-6ce6-452b-9f35-0f00c5ac3e37\") " pod="calico-system/calico-typha-685d5975b4-snkkk" Sep 5 00:29:48.609196 kubelet[2773]: I0905 00:29:48.609182 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e008863-6ce6-452b-9f35-0f00c5ac3e37-tigera-ca-bundle\") pod \"calico-typha-685d5975b4-snkkk\" (UID: \"8e008863-6ce6-452b-9f35-0f00c5ac3e37\") " pod="calico-system/calico-typha-685d5975b4-snkkk" Sep 5 00:29:48.609196 kubelet[2773]: I0905 00:29:48.609201 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/8e008863-6ce6-452b-9f35-0f00c5ac3e37-typha-certs\") pod \"calico-typha-685d5975b4-snkkk\" (UID: \"8e008863-6ce6-452b-9f35-0f00c5ac3e37\") " pod="calico-system/calico-typha-685d5975b4-snkkk" Sep 5 00:29:48.852155 systemd[1]: Created slice kubepods-besteffort-podcd126bb9_f9fd_46c2_b5f6_abae5dbc9280.slice - libcontainer container kubepods-besteffort-podcd126bb9_f9fd_46c2_b5f6_abae5dbc9280.slice. Sep 5 00:29:48.906683 kubelet[2773]: E0905 00:29:48.906624 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:48.907335 containerd[1572]: time="2025-09-05T00:29:48.907281142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-685d5975b4-snkkk,Uid:8e008863-6ce6-452b-9f35-0f00c5ac3e37,Namespace:calico-system,Attempt:0,}" Sep 5 00:29:48.911901 kubelet[2773]: I0905 00:29:48.911864 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-lib-modules\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.911901 kubelet[2773]: I0905 00:29:48.911908 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-node-certs\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.912015 kubelet[2773]: I0905 00:29:48.911929 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-policysync\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " 
pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.912015 kubelet[2773]: I0905 00:29:48.911943 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-var-lib-calico\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.912015 kubelet[2773]: I0905 00:29:48.911960 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-cni-bin-dir\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.912015 kubelet[2773]: I0905 00:29:48.911975 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-cni-log-dir\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.912015 kubelet[2773]: I0905 00:29:48.911989 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bw8r\" (UniqueName: \"kubernetes.io/projected/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-kube-api-access-6bw8r\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.912174 kubelet[2773]: I0905 00:29:48.912006 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-cni-net-dir\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.912174 
kubelet[2773]: I0905 00:29:48.912018 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-var-run-calico\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.912174 kubelet[2773]: I0905 00:29:48.912047 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-flexvol-driver-host\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.912174 kubelet[2773]: I0905 00:29:48.912063 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-tigera-ca-bundle\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.912174 kubelet[2773]: I0905 00:29:48.912106 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd126bb9-f9fd-46c2-b5f6-abae5dbc9280-xtables-lock\") pod \"calico-node-gvmt8\" (UID: \"cd126bb9-f9fd-46c2-b5f6-abae5dbc9280\") " pod="calico-system/calico-node-gvmt8" Sep 5 00:29:48.949299 containerd[1572]: time="2025-09-05T00:29:48.949082444Z" level=info msg="connecting to shim 69abe7c0bbdd1617db3a42b15375fe1b3e76e78525ef18f4d34c45b1391c2c2f" address="unix:///run/containerd/s/c9df1dd8d4121e9f0b55835c3ba0fa5afccbf6fe7f2afc0c637af8ad70d36192" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:29:48.987302 systemd[1]: Started cri-containerd-69abe7c0bbdd1617db3a42b15375fe1b3e76e78525ef18f4d34c45b1391c2c2f.scope - libcontainer 
container 69abe7c0bbdd1617db3a42b15375fe1b3e76e78525ef18f4d34c45b1391c2c2f. Sep 5 00:29:49.019373 kubelet[2773]: E0905 00:29:49.019330 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.019686 kubelet[2773]: W0905 00:29:49.019612 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.019686 kubelet[2773]: E0905 00:29:49.019652 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.024148 kubelet[2773]: E0905 00:29:49.024088 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.024148 kubelet[2773]: W0905 00:29:49.024105 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.024148 kubelet[2773]: E0905 00:29:49.024121 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.026451 kubelet[2773]: E0905 00:29:49.026437 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.026539 kubelet[2773]: W0905 00:29:49.026526 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.026609 kubelet[2773]: E0905 00:29:49.026598 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.057793 containerd[1572]: time="2025-09-05T00:29:49.057728072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-685d5975b4-snkkk,Uid:8e008863-6ce6-452b-9f35-0f00c5ac3e37,Namespace:calico-system,Attempt:0,} returns sandbox id \"69abe7c0bbdd1617db3a42b15375fe1b3e76e78525ef18f4d34c45b1391c2c2f\"" Sep 5 00:29:49.059831 kubelet[2773]: E0905 00:29:49.059603 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:49.060535 containerd[1572]: time="2025-09-05T00:29:49.060509787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 00:29:49.144728 kubelet[2773]: E0905 00:29:49.144652 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5g8f" podUID="1e677ca4-ae66-4c9c-848d-15fc5e784c81" Sep 5 00:29:49.156964 containerd[1572]: time="2025-09-05T00:29:49.156890582Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-gvmt8,Uid:cd126bb9-f9fd-46c2-b5f6-abae5dbc9280,Namespace:calico-system,Attempt:0,}" Sep 5 00:29:49.197244 containerd[1572]: time="2025-09-05T00:29:49.197170834Z" level=info msg="connecting to shim aa2962642d378463d9637f6ae8d32c31af80d2e6694b09d5445db3557f8b3275" address="unix:///run/containerd/s/ba287f95e3c90534cad16c35fed7a396f71bad9069806a0616bbdf668d49d80f" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:29:49.199087 kubelet[2773]: E0905 00:29:49.199015 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.199233 kubelet[2773]: W0905 00:29:49.199084 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.199233 kubelet[2773]: E0905 00:29:49.199112 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.199444 kubelet[2773]: E0905 00:29:49.199403 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.199444 kubelet[2773]: W0905 00:29:49.199423 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.199444 kubelet[2773]: E0905 00:29:49.199435 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.199815 kubelet[2773]: E0905 00:29:49.199643 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.199815 kubelet[2773]: W0905 00:29:49.199663 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.199815 kubelet[2773]: E0905 00:29:49.199676 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.200198 kubelet[2773]: E0905 00:29:49.200163 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.200198 kubelet[2773]: W0905 00:29:49.200180 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.200198 kubelet[2773]: E0905 00:29:49.200195 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.200560 kubelet[2773]: E0905 00:29:49.200514 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.200560 kubelet[2773]: W0905 00:29:49.200528 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.200560 kubelet[2773]: E0905 00:29:49.200538 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.200991 kubelet[2773]: E0905 00:29:49.200927 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.200991 kubelet[2773]: W0905 00:29:49.200987 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.200991 kubelet[2773]: E0905 00:29:49.200999 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.201417 kubelet[2773]: E0905 00:29:49.201397 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.201417 kubelet[2773]: W0905 00:29:49.201411 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.201417 kubelet[2773]: E0905 00:29:49.201422 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.201906 kubelet[2773]: E0905 00:29:49.201881 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.201906 kubelet[2773]: W0905 00:29:49.201898 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.201906 kubelet[2773]: E0905 00:29:49.201909 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.202226 kubelet[2773]: E0905 00:29:49.202207 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.202226 kubelet[2773]: W0905 00:29:49.202220 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.202323 kubelet[2773]: E0905 00:29:49.202230 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.202484 kubelet[2773]: E0905 00:29:49.202466 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.202484 kubelet[2773]: W0905 00:29:49.202481 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.202742 kubelet[2773]: E0905 00:29:49.202490 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.203211 kubelet[2773]: E0905 00:29:49.203191 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.203211 kubelet[2773]: W0905 00:29:49.203206 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.203296 kubelet[2773]: E0905 00:29:49.203217 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.203439 kubelet[2773]: E0905 00:29:49.203421 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.203480 kubelet[2773]: W0905 00:29:49.203439 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.203480 kubelet[2773]: E0905 00:29:49.203451 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.203947 kubelet[2773]: E0905 00:29:49.203923 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.203947 kubelet[2773]: W0905 00:29:49.203937 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.203947 kubelet[2773]: E0905 00:29:49.203947 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.204169 kubelet[2773]: E0905 00:29:49.204156 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.204169 kubelet[2773]: W0905 00:29:49.204166 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.204277 kubelet[2773]: E0905 00:29:49.204176 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.204456 kubelet[2773]: E0905 00:29:49.204437 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.204456 kubelet[2773]: W0905 00:29:49.204450 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.204527 kubelet[2773]: E0905 00:29:49.204460 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.204995 kubelet[2773]: E0905 00:29:49.204971 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.204995 kubelet[2773]: W0905 00:29:49.204986 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.204995 kubelet[2773]: E0905 00:29:49.204996 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.205432 kubelet[2773]: E0905 00:29:49.205413 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.205432 kubelet[2773]: W0905 00:29:49.205429 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.205432 kubelet[2773]: E0905 00:29:49.205442 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.205669 kubelet[2773]: E0905 00:29:49.205644 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.205669 kubelet[2773]: W0905 00:29:49.205659 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.205669 kubelet[2773]: E0905 00:29:49.205669 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.205897 kubelet[2773]: E0905 00:29:49.205878 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.205897 kubelet[2773]: W0905 00:29:49.205892 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.205960 kubelet[2773]: E0905 00:29:49.205902 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.206154 kubelet[2773]: E0905 00:29:49.206133 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.206154 kubelet[2773]: W0905 00:29:49.206146 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.206154 kubelet[2773]: E0905 00:29:49.206156 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.215799 kubelet[2773]: E0905 00:29:49.215664 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.215799 kubelet[2773]: W0905 00:29:49.215691 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.215799 kubelet[2773]: E0905 00:29:49.215709 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.215799 kubelet[2773]: I0905 00:29:49.215742 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1e677ca4-ae66-4c9c-848d-15fc5e784c81-registration-dir\") pod \"csi-node-driver-g5g8f\" (UID: \"1e677ca4-ae66-4c9c-848d-15fc5e784c81\") " pod="calico-system/csi-node-driver-g5g8f" Sep 5 00:29:49.216391 kubelet[2773]: E0905 00:29:49.216294 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.216391 kubelet[2773]: W0905 00:29:49.216308 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.216391 kubelet[2773]: E0905 00:29:49.216319 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.216391 kubelet[2773]: I0905 00:29:49.216346 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1e677ca4-ae66-4c9c-848d-15fc5e784c81-socket-dir\") pod \"csi-node-driver-g5g8f\" (UID: \"1e677ca4-ae66-4c9c-848d-15fc5e784c81\") " pod="calico-system/csi-node-driver-g5g8f" Sep 5 00:29:49.216718 kubelet[2773]: E0905 00:29:49.216704 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.216789 kubelet[2773]: W0905 00:29:49.216777 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.216840 kubelet[2773]: E0905 00:29:49.216830 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.216920 kubelet[2773]: I0905 00:29:49.216907 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw69p\" (UniqueName: \"kubernetes.io/projected/1e677ca4-ae66-4c9c-848d-15fc5e784c81-kube-api-access-rw69p\") pod \"csi-node-driver-g5g8f\" (UID: \"1e677ca4-ae66-4c9c-848d-15fc5e784c81\") " pod="calico-system/csi-node-driver-g5g8f" Sep 5 00:29:49.217196 kubelet[2773]: E0905 00:29:49.217180 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.217270 kubelet[2773]: W0905 00:29:49.217254 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.217383 kubelet[2773]: E0905 00:29:49.217366 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.217585 kubelet[2773]: I0905 00:29:49.217563 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1e677ca4-ae66-4c9c-848d-15fc5e784c81-varrun\") pod \"csi-node-driver-g5g8f\" (UID: \"1e677ca4-ae66-4c9c-848d-15fc5e784c81\") " pod="calico-system/csi-node-driver-g5g8f" Sep 5 00:29:49.217934 kubelet[2773]: E0905 00:29:49.217895 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.217934 kubelet[2773]: W0905 00:29:49.217909 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.217934 kubelet[2773]: E0905 00:29:49.217921 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.218392 kubelet[2773]: E0905 00:29:49.218350 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.218392 kubelet[2773]: W0905 00:29:49.218365 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.218392 kubelet[2773]: E0905 00:29:49.218377 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.218825 kubelet[2773]: E0905 00:29:49.218811 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.218963 kubelet[2773]: W0905 00:29:49.218889 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.218963 kubelet[2773]: E0905 00:29:49.218905 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.219251 kubelet[2773]: E0905 00:29:49.219237 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.219450 kubelet[2773]: W0905 00:29:49.219311 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.219450 kubelet[2773]: E0905 00:29:49.219327 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.219635 kubelet[2773]: E0905 00:29:49.219617 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.219714 kubelet[2773]: W0905 00:29:49.219701 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.219784 kubelet[2773]: E0905 00:29:49.219770 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.220156 kubelet[2773]: E0905 00:29:49.220116 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.220156 kubelet[2773]: W0905 00:29:49.220130 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.220156 kubelet[2773]: E0905 00:29:49.220141 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.220384 kubelet[2773]: I0905 00:29:49.220358 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e677ca4-ae66-4c9c-848d-15fc5e784c81-kubelet-dir\") pod \"csi-node-driver-g5g8f\" (UID: \"1e677ca4-ae66-4c9c-848d-15fc5e784c81\") " pod="calico-system/csi-node-driver-g5g8f" Sep 5 00:29:49.220627 kubelet[2773]: E0905 00:29:49.220590 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.220627 kubelet[2773]: W0905 00:29:49.220601 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.220627 kubelet[2773]: E0905 00:29:49.220612 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.220981 kubelet[2773]: E0905 00:29:49.220947 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.220981 kubelet[2773]: W0905 00:29:49.220959 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.220981 kubelet[2773]: E0905 00:29:49.220969 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.221341 kubelet[2773]: E0905 00:29:49.221301 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.221341 kubelet[2773]: W0905 00:29:49.221315 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.221341 kubelet[2773]: E0905 00:29:49.221325 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.221700 kubelet[2773]: E0905 00:29:49.221682 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.221807 kubelet[2773]: W0905 00:29:49.221764 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.221807 kubelet[2773]: E0905 00:29:49.221781 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.222343 kubelet[2773]: E0905 00:29:49.222323 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.222492 kubelet[2773]: W0905 00:29:49.222419 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.222492 kubelet[2773]: E0905 00:29:49.222437 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.236617 systemd[1]: Started cri-containerd-aa2962642d378463d9637f6ae8d32c31af80d2e6694b09d5445db3557f8b3275.scope - libcontainer container aa2962642d378463d9637f6ae8d32c31af80d2e6694b09d5445db3557f8b3275. Sep 5 00:29:49.316136 containerd[1572]: time="2025-09-05T00:29:49.316077419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gvmt8,Uid:cd126bb9-f9fd-46c2-b5f6-abae5dbc9280,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa2962642d378463d9637f6ae8d32c31af80d2e6694b09d5445db3557f8b3275\"" Sep 5 00:29:49.322332 kubelet[2773]: E0905 00:29:49.322240 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.322332 kubelet[2773]: W0905 00:29:49.322271 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.322944 kubelet[2773]: E0905 00:29:49.322426 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.323353 kubelet[2773]: E0905 00:29:49.323196 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.323353 kubelet[2773]: W0905 00:29:49.323241 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.323353 kubelet[2773]: E0905 00:29:49.323268 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.323954 kubelet[2773]: E0905 00:29:49.323911 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.324263 kubelet[2773]: W0905 00:29:49.324188 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.324263 kubelet[2773]: E0905 00:29:49.324210 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.324859 kubelet[2773]: E0905 00:29:49.324734 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.324859 kubelet[2773]: W0905 00:29:49.324777 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.324859 kubelet[2773]: E0905 00:29:49.324791 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.325409 kubelet[2773]: E0905 00:29:49.325376 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.325608 kubelet[2773]: W0905 00:29:49.325487 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.325608 kubelet[2773]: E0905 00:29:49.325507 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.325957 kubelet[2773]: E0905 00:29:49.325923 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.325957 kubelet[2773]: W0905 00:29:49.325938 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.325957 kubelet[2773]: E0905 00:29:49.325950 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.326261 kubelet[2773]: E0905 00:29:49.326219 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.326309 kubelet[2773]: W0905 00:29:49.326281 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.326309 kubelet[2773]: E0905 00:29:49.326295 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.326690 kubelet[2773]: E0905 00:29:49.326652 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.326690 kubelet[2773]: W0905 00:29:49.326686 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.326793 kubelet[2773]: E0905 00:29:49.326722 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.327794 kubelet[2773]: E0905 00:29:49.327774 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.327794 kubelet[2773]: W0905 00:29:49.327789 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.327899 kubelet[2773]: E0905 00:29:49.327804 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.328200 kubelet[2773]: E0905 00:29:49.328182 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.328929 kubelet[2773]: W0905 00:29:49.328278 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.328929 kubelet[2773]: E0905 00:29:49.328298 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.329428 kubelet[2773]: E0905 00:29:49.329250 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.329428 kubelet[2773]: W0905 00:29:49.329266 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.329428 kubelet[2773]: E0905 00:29:49.329278 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.330196 kubelet[2773]: E0905 00:29:49.330104 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.330196 kubelet[2773]: W0905 00:29:49.330116 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.330196 kubelet[2773]: E0905 00:29:49.330129 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.331312 kubelet[2773]: E0905 00:29:49.331290 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.331312 kubelet[2773]: W0905 00:29:49.331305 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.331420 kubelet[2773]: E0905 00:29:49.331317 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.331800 kubelet[2773]: E0905 00:29:49.331755 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.331800 kubelet[2773]: W0905 00:29:49.331770 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.331800 kubelet[2773]: E0905 00:29:49.331782 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.332248 kubelet[2773]: E0905 00:29:49.332203 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.332248 kubelet[2773]: W0905 00:29:49.332219 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.332248 kubelet[2773]: E0905 00:29:49.332231 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.332728 kubelet[2773]: E0905 00:29:49.332683 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.332728 kubelet[2773]: W0905 00:29:49.332698 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.332728 kubelet[2773]: E0905 00:29:49.332710 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.333261 kubelet[2773]: E0905 00:29:49.333145 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.333261 kubelet[2773]: W0905 00:29:49.333167 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.333261 kubelet[2773]: E0905 00:29:49.333179 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.333614 kubelet[2773]: E0905 00:29:49.333522 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.333614 kubelet[2773]: W0905 00:29:49.333535 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.333614 kubelet[2773]: E0905 00:29:49.333557 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.333910 kubelet[2773]: E0905 00:29:49.333871 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.333910 kubelet[2773]: W0905 00:29:49.333884 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.333910 kubelet[2773]: E0905 00:29:49.333894 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.334338 kubelet[2773]: E0905 00:29:49.334316 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.334338 kubelet[2773]: W0905 00:29:49.334331 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.334441 kubelet[2773]: E0905 00:29:49.334345 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.334624 kubelet[2773]: E0905 00:29:49.334606 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.334624 kubelet[2773]: W0905 00:29:49.334619 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.334731 kubelet[2773]: E0905 00:29:49.334631 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.334964 kubelet[2773]: E0905 00:29:49.334926 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.334964 kubelet[2773]: W0905 00:29:49.334948 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.334964 kubelet[2773]: E0905 00:29:49.334961 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.335256 kubelet[2773]: E0905 00:29:49.335231 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.335256 kubelet[2773]: W0905 00:29:49.335244 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.335256 kubelet[2773]: E0905 00:29:49.335254 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.336234 kubelet[2773]: E0905 00:29:49.335817 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.336495 kubelet[2773]: W0905 00:29:49.336306 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.336495 kubelet[2773]: E0905 00:29:49.336329 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:49.338099 kubelet[2773]: E0905 00:29:49.338073 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.338099 kubelet[2773]: W0905 00:29:49.338090 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.338099 kubelet[2773]: E0905 00:29:49.338103 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:49.350662 kubelet[2773]: E0905 00:29:49.350611 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:49.350662 kubelet[2773]: W0905 00:29:49.350646 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:49.350662 kubelet[2773]: E0905 00:29:49.350683 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:50.695793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2939755521.mount: Deactivated successfully. Sep 5 00:29:50.915706 kubelet[2773]: E0905 00:29:50.915651 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5g8f" podUID="1e677ca4-ae66-4c9c-848d-15fc5e784c81" Sep 5 00:29:51.115473 containerd[1572]: time="2025-09-05T00:29:51.115414498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:51.116187 containerd[1572]: time="2025-09-05T00:29:51.116156142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 5 00:29:51.117388 containerd[1572]: time="2025-09-05T00:29:51.117313069Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:51.119090 containerd[1572]: time="2025-09-05T00:29:51.119060035Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:51.119538 containerd[1572]: time="2025-09-05T00:29:51.119496176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.058831458s" Sep 5 00:29:51.119579 containerd[1572]: time="2025-09-05T00:29:51.119536512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 5 00:29:51.120584 containerd[1572]: time="2025-09-05T00:29:51.120519290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 00:29:51.132163 containerd[1572]: time="2025-09-05T00:29:51.132098211Z" level=info msg="CreateContainer within sandbox \"69abe7c0bbdd1617db3a42b15375fe1b3e76e78525ef18f4d34c45b1391c2c2f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 00:29:51.140352 containerd[1572]: time="2025-09-05T00:29:51.140317643Z" level=info msg="Container 5734ccb81636dcfc8ae0b623123536d614ec86afe6b4a36be52e845ea9587e5e: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:29:51.149729 containerd[1572]: time="2025-09-05T00:29:51.149676226Z" level=info msg="CreateContainer within sandbox \"69abe7c0bbdd1617db3a42b15375fe1b3e76e78525ef18f4d34c45b1391c2c2f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5734ccb81636dcfc8ae0b623123536d614ec86afe6b4a36be52e845ea9587e5e\"" Sep 5 00:29:51.150189 containerd[1572]: time="2025-09-05T00:29:51.150157492Z" level=info msg="StartContainer for 
\"5734ccb81636dcfc8ae0b623123536d614ec86afe6b4a36be52e845ea9587e5e\"" Sep 5 00:29:51.151439 containerd[1572]: time="2025-09-05T00:29:51.151403956Z" level=info msg="connecting to shim 5734ccb81636dcfc8ae0b623123536d614ec86afe6b4a36be52e845ea9587e5e" address="unix:///run/containerd/s/c9df1dd8d4121e9f0b55835c3ba0fa5afccbf6fe7f2afc0c637af8ad70d36192" protocol=ttrpc version=3 Sep 5 00:29:51.180179 systemd[1]: Started cri-containerd-5734ccb81636dcfc8ae0b623123536d614ec86afe6b4a36be52e845ea9587e5e.scope - libcontainer container 5734ccb81636dcfc8ae0b623123536d614ec86afe6b4a36be52e845ea9587e5e. Sep 5 00:29:51.234710 containerd[1572]: time="2025-09-05T00:29:51.234664275Z" level=info msg="StartContainer for \"5734ccb81636dcfc8ae0b623123536d614ec86afe6b4a36be52e845ea9587e5e\" returns successfully" Sep 5 00:29:51.967362 kubelet[2773]: E0905 00:29:51.967275 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:51.978432 kubelet[2773]: I0905 00:29:51.978133 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-685d5975b4-snkkk" podStartSLOduration=1.917879069 podStartE2EDuration="3.977996886s" podCreationTimestamp="2025-09-05 00:29:48 +0000 UTC" firstStartedPulling="2025-09-05 00:29:49.060187581 +0000 UTC m=+17.247275803" lastFinishedPulling="2025-09-05 00:29:51.120305408 +0000 UTC m=+19.307393620" observedRunningTime="2025-09-05 00:29:51.977794394 +0000 UTC m=+20.164882617" watchObservedRunningTime="2025-09-05 00:29:51.977996886 +0000 UTC m=+20.165085108" Sep 5 00:29:52.022572 kubelet[2773]: E0905 00:29:52.022512 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.022572 kubelet[2773]: W0905 00:29:52.022545 2773 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.022572 kubelet[2773]: E0905 00:29:52.022578 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.023035 kubelet[2773]: E0905 00:29:52.023007 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.023074 kubelet[2773]: W0905 00:29:52.023021 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.023074 kubelet[2773]: E0905 00:29:52.023051 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.023283 kubelet[2773]: E0905 00:29:52.023258 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.023283 kubelet[2773]: W0905 00:29:52.023271 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.023283 kubelet[2773]: E0905 00:29:52.023282 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.023563 kubelet[2773]: E0905 00:29:52.023533 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.023563 kubelet[2773]: W0905 00:29:52.023549 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.023563 kubelet[2773]: E0905 00:29:52.023558 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.023763 kubelet[2773]: E0905 00:29:52.023738 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.023763 kubelet[2773]: W0905 00:29:52.023750 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.023763 kubelet[2773]: E0905 00:29:52.023758 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.023958 kubelet[2773]: E0905 00:29:52.023941 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.023958 kubelet[2773]: W0905 00:29:52.023952 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.023958 kubelet[2773]: E0905 00:29:52.023959 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.024153 kubelet[2773]: E0905 00:29:52.024136 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.024153 kubelet[2773]: W0905 00:29:52.024148 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.024233 kubelet[2773]: E0905 00:29:52.024156 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.024346 kubelet[2773]: E0905 00:29:52.024328 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.024346 kubelet[2773]: W0905 00:29:52.024339 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.024346 kubelet[2773]: E0905 00:29:52.024347 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.024551 kubelet[2773]: E0905 00:29:52.024537 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.024551 kubelet[2773]: W0905 00:29:52.024545 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.024551 kubelet[2773]: E0905 00:29:52.024553 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.024750 kubelet[2773]: E0905 00:29:52.024729 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.024750 kubelet[2773]: W0905 00:29:52.024743 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.024750 kubelet[2773]: E0905 00:29:52.024754 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.024951 kubelet[2773]: E0905 00:29:52.024933 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.024951 kubelet[2773]: W0905 00:29:52.024945 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.025015 kubelet[2773]: E0905 00:29:52.024955 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.025165 kubelet[2773]: E0905 00:29:52.025148 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.025165 kubelet[2773]: W0905 00:29:52.025158 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.025213 kubelet[2773]: E0905 00:29:52.025166 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.025411 kubelet[2773]: E0905 00:29:52.025394 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.025411 kubelet[2773]: W0905 00:29:52.025404 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.025457 kubelet[2773]: E0905 00:29:52.025412 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.025614 kubelet[2773]: E0905 00:29:52.025597 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.025614 kubelet[2773]: W0905 00:29:52.025608 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.025659 kubelet[2773]: E0905 00:29:52.025617 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.025804 kubelet[2773]: E0905 00:29:52.025788 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.025804 kubelet[2773]: W0905 00:29:52.025797 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.025853 kubelet[2773]: E0905 00:29:52.025805 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.043671 kubelet[2773]: E0905 00:29:52.043601 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.043671 kubelet[2773]: W0905 00:29:52.043649 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.043671 kubelet[2773]: E0905 00:29:52.043677 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.044044 kubelet[2773]: E0905 00:29:52.044000 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.044044 kubelet[2773]: W0905 00:29:52.044015 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.044103 kubelet[2773]: E0905 00:29:52.044056 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.044358 kubelet[2773]: E0905 00:29:52.044339 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.044358 kubelet[2773]: W0905 00:29:52.044354 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.044418 kubelet[2773]: E0905 00:29:52.044365 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.044763 kubelet[2773]: E0905 00:29:52.044721 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.044763 kubelet[2773]: W0905 00:29:52.044750 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.044822 kubelet[2773]: E0905 00:29:52.044772 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.044997 kubelet[2773]: E0905 00:29:52.044981 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.044997 kubelet[2773]: W0905 00:29:52.044992 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.045095 kubelet[2773]: E0905 00:29:52.045001 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.045357 kubelet[2773]: E0905 00:29:52.045309 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.045357 kubelet[2773]: W0905 00:29:52.045343 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.045357 kubelet[2773]: E0905 00:29:52.045374 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.045675 kubelet[2773]: E0905 00:29:52.045656 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.045675 kubelet[2773]: W0905 00:29:52.045667 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.045675 kubelet[2773]: E0905 00:29:52.045676 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.045876 kubelet[2773]: E0905 00:29:52.045858 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.045876 kubelet[2773]: W0905 00:29:52.045868 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.045876 kubelet[2773]: E0905 00:29:52.045877 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.046184 kubelet[2773]: E0905 00:29:52.046150 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.046184 kubelet[2773]: W0905 00:29:52.046169 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.046184 kubelet[2773]: E0905 00:29:52.046185 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.046490 kubelet[2773]: E0905 00:29:52.046464 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.046490 kubelet[2773]: W0905 00:29:52.046478 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.046552 kubelet[2773]: E0905 00:29:52.046502 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.046692 kubelet[2773]: E0905 00:29:52.046674 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.046692 kubelet[2773]: W0905 00:29:52.046685 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.046750 kubelet[2773]: E0905 00:29:52.046697 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.046932 kubelet[2773]: E0905 00:29:52.046916 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.046932 kubelet[2773]: W0905 00:29:52.046927 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.046993 kubelet[2773]: E0905 00:29:52.046936 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.047264 kubelet[2773]: E0905 00:29:52.047240 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.047264 kubelet[2773]: W0905 00:29:52.047255 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.047264 kubelet[2773]: E0905 00:29:52.047267 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.047441 kubelet[2773]: E0905 00:29:52.047424 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.047441 kubelet[2773]: W0905 00:29:52.047433 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.047441 kubelet[2773]: E0905 00:29:52.047441 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.047644 kubelet[2773]: E0905 00:29:52.047625 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.047644 kubelet[2773]: W0905 00:29:52.047637 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.047644 kubelet[2773]: E0905 00:29:52.047645 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.047900 kubelet[2773]: E0905 00:29:52.047815 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.047900 kubelet[2773]: W0905 00:29:52.047829 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.047900 kubelet[2773]: E0905 00:29:52.047837 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.048017 kubelet[2773]: E0905 00:29:52.047988 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.048017 kubelet[2773]: W0905 00:29:52.047995 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.048017 kubelet[2773]: E0905 00:29:52.048003 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:52.048274 kubelet[2773]: E0905 00:29:52.048251 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:52.048274 kubelet[2773]: W0905 00:29:52.048270 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:52.048354 kubelet[2773]: E0905 00:29:52.048284 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:52.916098 kubelet[2773]: E0905 00:29:52.916009 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5g8f" podUID="1e677ca4-ae66-4c9c-848d-15fc5e784c81" Sep 5 00:29:52.972627 kubelet[2773]: E0905 00:29:52.972588 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:52.978832 containerd[1572]: time="2025-09-05T00:29:52.978756295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:52.979923 containerd[1572]: time="2025-09-05T00:29:52.979864179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 5 00:29:52.981807 containerd[1572]: time="2025-09-05T00:29:52.981724959Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:52.985752 containerd[1572]: time="2025-09-05T00:29:52.985686960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:52.986473 containerd[1572]: time="2025-09-05T00:29:52.986426410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.865874699s" Sep 5 00:29:52.986551 containerd[1572]: time="2025-09-05T00:29:52.986483758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 5 00:29:52.993553 containerd[1572]: time="2025-09-05T00:29:52.993503009Z" level=info msg="CreateContainer within sandbox \"aa2962642d378463d9637f6ae8d32c31af80d2e6694b09d5445db3557f8b3275\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:29:53.005253 containerd[1572]: time="2025-09-05T00:29:53.005199926Z" level=info msg="Container 32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:29:53.016694 containerd[1572]: time="2025-09-05T00:29:53.016633636Z" level=info msg="CreateContainer within sandbox \"aa2962642d378463d9637f6ae8d32c31af80d2e6694b09d5445db3557f8b3275\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400\"" Sep 5 00:29:53.017543 containerd[1572]: time="2025-09-05T00:29:53.017492441Z" level=info msg="StartContainer for \"32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400\"" Sep 5 00:29:53.019589 containerd[1572]: time="2025-09-05T00:29:53.019534542Z" level=info msg="connecting to shim 32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400" address="unix:///run/containerd/s/ba287f95e3c90534cad16c35fed7a396f71bad9069806a0616bbdf668d49d80f" protocol=ttrpc version=3 Sep 5 00:29:53.031888 kubelet[2773]: E0905 00:29:53.031618 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.031888 kubelet[2773]: W0905 00:29:53.031651 2773 driver-call.go:149] FlexVolume: driver call 
failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.031888 kubelet[2773]: E0905 00:29:53.031685 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.032266 kubelet[2773]: E0905 00:29:53.032251 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.032437 kubelet[2773]: W0905 00:29:53.032326 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.032437 kubelet[2773]: E0905 00:29:53.032342 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.032657 kubelet[2773]: E0905 00:29:53.032644 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.032720 kubelet[2773]: W0905 00:29:53.032709 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.032820 kubelet[2773]: E0905 00:29:53.032773 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.033043 kubelet[2773]: E0905 00:29:53.033009 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.033184 kubelet[2773]: W0905 00:29:53.033021 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.033184 kubelet[2773]: E0905 00:29:53.033132 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.033417 kubelet[2773]: E0905 00:29:53.033404 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.033601 kubelet[2773]: W0905 00:29:53.033501 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.033601 kubelet[2773]: E0905 00:29:53.033516 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.033777 kubelet[2773]: E0905 00:29:53.033722 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.033777 kubelet[2773]: W0905 00:29:53.033733 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.033777 kubelet[2773]: E0905 00:29:53.033742 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.034110 kubelet[2773]: E0905 00:29:53.034021 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.034110 kubelet[2773]: W0905 00:29:53.034062 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.034110 kubelet[2773]: E0905 00:29:53.034072 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.034365 kubelet[2773]: E0905 00:29:53.034353 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.034529 kubelet[2773]: W0905 00:29:53.034421 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.034529 kubelet[2773]: E0905 00:29:53.034436 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.034656 kubelet[2773]: E0905 00:29:53.034644 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.034741 kubelet[2773]: W0905 00:29:53.034726 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.034827 kubelet[2773]: E0905 00:29:53.034813 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.035183 kubelet[2773]: E0905 00:29:53.035073 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.035183 kubelet[2773]: W0905 00:29:53.035087 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.035183 kubelet[2773]: E0905 00:29:53.035099 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.035484 kubelet[2773]: E0905 00:29:53.035378 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.035484 kubelet[2773]: W0905 00:29:53.035390 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.035484 kubelet[2773]: E0905 00:29:53.035400 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.035623 kubelet[2773]: E0905 00:29:53.035612 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.035769 kubelet[2773]: W0905 00:29:53.035668 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.035769 kubelet[2773]: E0905 00:29:53.035681 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.035876 kubelet[2773]: E0905 00:29:53.035865 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.035940 kubelet[2773]: W0905 00:29:53.035929 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.036148 kubelet[2773]: E0905 00:29:53.036061 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.036257 kubelet[2773]: E0905 00:29:53.036242 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.036257 kubelet[2773]: W0905 00:29:53.036254 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.036314 kubelet[2773]: E0905 00:29:53.036266 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.036485 kubelet[2773]: E0905 00:29:53.036467 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.036485 kubelet[2773]: W0905 00:29:53.036478 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.036485 kubelet[2773]: E0905 00:29:53.036487 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.042274 systemd[1]: Started cri-containerd-32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400.scope - libcontainer container 32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400. 
Sep 5 00:29:53.049782 kubelet[2773]: E0905 00:29:53.049751 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.049782 kubelet[2773]: W0905 00:29:53.049775 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.049941 kubelet[2773]: E0905 00:29:53.049818 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.050240 kubelet[2773]: E0905 00:29:53.050224 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.050240 kubelet[2773]: W0905 00:29:53.050238 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.050323 kubelet[2773]: E0905 00:29:53.050250 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.050699 kubelet[2773]: E0905 00:29:53.050667 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.050699 kubelet[2773]: W0905 00:29:53.050683 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.050699 kubelet[2773]: E0905 00:29:53.050696 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.051071 kubelet[2773]: E0905 00:29:53.051037 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.051071 kubelet[2773]: W0905 00:29:53.051051 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.051071 kubelet[2773]: E0905 00:29:53.051062 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.052800 kubelet[2773]: E0905 00:29:53.052782 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.052800 kubelet[2773]: W0905 00:29:53.052798 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.052905 kubelet[2773]: E0905 00:29:53.052810 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.053157 kubelet[2773]: E0905 00:29:53.053142 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.053157 kubelet[2773]: W0905 00:29:53.053155 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.053343 kubelet[2773]: E0905 00:29:53.053185 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.053550 kubelet[2773]: E0905 00:29:53.053504 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.053550 kubelet[2773]: W0905 00:29:53.053518 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.053550 kubelet[2773]: E0905 00:29:53.053531 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.053855 kubelet[2773]: E0905 00:29:53.053812 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.053855 kubelet[2773]: W0905 00:29:53.053845 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.053921 kubelet[2773]: E0905 00:29:53.053856 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.054269 kubelet[2773]: E0905 00:29:53.054242 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.054269 kubelet[2773]: W0905 00:29:53.054255 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.054269 kubelet[2773]: E0905 00:29:53.054267 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.054679 kubelet[2773]: E0905 00:29:53.054643 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.054679 kubelet[2773]: W0905 00:29:53.054675 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.054847 kubelet[2773]: E0905 00:29:53.054686 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.056137 kubelet[2773]: E0905 00:29:53.055012 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.056137 kubelet[2773]: W0905 00:29:53.056075 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.056137 kubelet[2773]: E0905 00:29:53.056093 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.056595 kubelet[2773]: E0905 00:29:53.056575 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.056637 kubelet[2773]: W0905 00:29:53.056607 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.056637 kubelet[2773]: E0905 00:29:53.056620 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.056977 kubelet[2773]: E0905 00:29:53.056934 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.056977 kubelet[2773]: W0905 00:29:53.056944 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.056977 kubelet[2773]: E0905 00:29:53.056971 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.057405 kubelet[2773]: E0905 00:29:53.057236 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.057405 kubelet[2773]: W0905 00:29:53.057250 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.057405 kubelet[2773]: E0905 00:29:53.057261 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.057584 kubelet[2773]: E0905 00:29:53.057531 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.057584 kubelet[2773]: W0905 00:29:53.057541 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.057584 kubelet[2773]: E0905 00:29:53.057552 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.057824 kubelet[2773]: E0905 00:29:53.057766 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.057824 kubelet[2773]: W0905 00:29:53.057779 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.057824 kubelet[2773]: E0905 00:29:53.057790 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.058084 kubelet[2773]: E0905 00:29:53.058035 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.058084 kubelet[2773]: W0905 00:29:53.058048 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.058084 kubelet[2773]: E0905 00:29:53.058059 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:29:53.059628 kubelet[2773]: E0905 00:29:53.059609 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:29:53.059628 kubelet[2773]: W0905 00:29:53.059625 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:29:53.059862 kubelet[2773]: E0905 00:29:53.059640 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:29:53.095582 containerd[1572]: time="2025-09-05T00:29:53.095536614Z" level=info msg="StartContainer for \"32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400\" returns successfully" Sep 5 00:29:53.108338 systemd[1]: cri-containerd-32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400.scope: Deactivated successfully. Sep 5 00:29:53.112138 containerd[1572]: time="2025-09-05T00:29:53.112092315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400\" id:\"32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400\" pid:3496 exited_at:{seconds:1757032193 nanos:111338717}" Sep 5 00:29:53.112262 containerd[1572]: time="2025-09-05T00:29:53.112134293Z" level=info msg="received exit event container_id:\"32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400\" id:\"32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400\" pid:3496 exited_at:{seconds:1757032193 nanos:111338717}" Sep 5 00:29:53.138325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32c97889cd4c0d5ec417b0285927eea161149ae8f70660b510d8c2cb793fd400-rootfs.mount: Deactivated successfully. 
Sep 5 00:29:53.977613 kubelet[2773]: E0905 00:29:53.977510 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:29:53.979343 containerd[1572]: time="2025-09-05T00:29:53.979107231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 00:29:54.916409 kubelet[2773]: E0905 00:29:54.916326 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5g8f" podUID="1e677ca4-ae66-4c9c-848d-15fc5e784c81" Sep 5 00:29:56.681774 containerd[1572]: time="2025-09-05T00:29:56.681691628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:56.682599 containerd[1572]: time="2025-09-05T00:29:56.682544190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 5 00:29:56.683820 containerd[1572]: time="2025-09-05T00:29:56.683786995Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:56.706716 containerd[1572]: time="2025-09-05T00:29:56.706660185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:29:56.707359 containerd[1572]: time="2025-09-05T00:29:56.707331768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.727990417s" Sep 5 00:29:56.707421 containerd[1572]: time="2025-09-05T00:29:56.707361434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 5 00:29:56.767054 containerd[1572]: time="2025-09-05T00:29:56.765730939Z" level=info msg="CreateContainer within sandbox \"aa2962642d378463d9637f6ae8d32c31af80d2e6694b09d5445db3557f8b3275\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:29:56.915954 kubelet[2773]: E0905 00:29:56.915888 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5g8f" podUID="1e677ca4-ae66-4c9c-848d-15fc5e784c81" Sep 5 00:29:57.005677 containerd[1572]: time="2025-09-05T00:29:57.005535679Z" level=info msg="Container ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:29:57.008469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount893434554.mount: Deactivated successfully. 
Sep 5 00:29:57.019684 containerd[1572]: time="2025-09-05T00:29:57.019591483Z" level=info msg="CreateContainer within sandbox \"aa2962642d378463d9637f6ae8d32c31af80d2e6694b09d5445db3557f8b3275\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0\"" Sep 5 00:29:57.020315 containerd[1572]: time="2025-09-05T00:29:57.020263546Z" level=info msg="StartContainer for \"ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0\"" Sep 5 00:29:57.022066 containerd[1572]: time="2025-09-05T00:29:57.022013114Z" level=info msg="connecting to shim ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0" address="unix:///run/containerd/s/ba287f95e3c90534cad16c35fed7a396f71bad9069806a0616bbdf668d49d80f" protocol=ttrpc version=3 Sep 5 00:29:57.048348 systemd[1]: Started cri-containerd-ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0.scope - libcontainer container ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0. Sep 5 00:29:57.105975 containerd[1572]: time="2025-09-05T00:29:57.105897252Z" level=info msg="StartContainer for \"ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0\" returns successfully" Sep 5 00:29:58.916373 kubelet[2773]: E0905 00:29:58.916275 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5g8f" podUID="1e677ca4-ae66-4c9c-848d-15fc5e784c81" Sep 5 00:29:59.487639 systemd[1]: cri-containerd-ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0.scope: Deactivated successfully. Sep 5 00:29:59.488254 systemd[1]: cri-containerd-ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0.scope: Consumed 688ms CPU time, 181.9M memory peak, 3.6M read from disk, 171.3M written to disk. 
Sep 5 00:29:59.491492 containerd[1572]: time="2025-09-05T00:29:59.491428845Z" level=info msg="received exit event container_id:\"ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0\" id:\"ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0\" pid:3574 exited_at:{seconds:1757032199 nanos:490932983}" Sep 5 00:29:59.492102 containerd[1572]: time="2025-09-05T00:29:59.492011799Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0\" id:\"ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0\" pid:3574 exited_at:{seconds:1757032199 nanos:490932983}" Sep 5 00:29:59.549263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce5d02167bb7ed7957fd56a8e74d910ad6555d1c9cb63a8ace7594a02db956c0-rootfs.mount: Deactivated successfully. Sep 5 00:29:59.566662 kubelet[2773]: I0905 00:29:59.565898 2773 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 00:29:59.909471 systemd[1]: Created slice kubepods-burstable-pod1bae6087_a995_43b4_89e8_f06b7acab6f3.slice - libcontainer container kubepods-burstable-pod1bae6087_a995_43b4_89e8_f06b7acab6f3.slice. Sep 5 00:29:59.923015 systemd[1]: Created slice kubepods-burstable-pod5b699e17_324d_48df_a118_37822782a2a2.slice - libcontainer container kubepods-burstable-pod5b699e17_324d_48df_a118_37822782a2a2.slice. Sep 5 00:29:59.929859 systemd[1]: Created slice kubepods-besteffort-pod510659b3_8cca_445b_950f_ac0a5af64459.slice - libcontainer container kubepods-besteffort-pod510659b3_8cca_445b_950f_ac0a5af64459.slice. Sep 5 00:29:59.941873 systemd[1]: Created slice kubepods-besteffort-pode7771c0c_5e2c_4ff9_a260_3beb4b4ae172.slice - libcontainer container kubepods-besteffort-pode7771c0c_5e2c_4ff9_a260_3beb4b4ae172.slice. 
Sep 5 00:29:59.951523 systemd[1]: Created slice kubepods-besteffort-pod803aa214_3ed6_41f3_80bd_4a8ea923c160.slice - libcontainer container kubepods-besteffort-pod803aa214_3ed6_41f3_80bd_4a8ea923c160.slice. Sep 5 00:29:59.960773 systemd[1]: Created slice kubepods-besteffort-poda3ced8ed_a418_4283_94ac_c3d443998f43.slice - libcontainer container kubepods-besteffort-poda3ced8ed_a418_4283_94ac_c3d443998f43.slice. Sep 5 00:29:59.970715 systemd[1]: Created slice kubepods-besteffort-podf0bed27a_b5ea_4248_90cd_67ba48cc98d5.slice - libcontainer container kubepods-besteffort-podf0bed27a_b5ea_4248_90cd_67ba48cc98d5.slice. Sep 5 00:29:59.996266 containerd[1572]: time="2025-09-05T00:29:59.996205161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 00:29:59.997717 kubelet[2773]: I0905 00:29:59.997668 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twcgr\" (UniqueName: \"kubernetes.io/projected/e7771c0c-5e2c-4ff9-a260-3beb4b4ae172-kube-api-access-twcgr\") pod \"calico-kube-controllers-548d8ddcf7-rxxwx\" (UID: \"e7771c0c-5e2c-4ff9-a260-3beb4b4ae172\") " pod="calico-system/calico-kube-controllers-548d8ddcf7-rxxwx" Sep 5 00:29:59.998438 kubelet[2773]: I0905 00:29:59.997729 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b699e17-324d-48df-a118-37822782a2a2-config-volume\") pod \"coredns-674b8bbfcf-q96lv\" (UID: \"5b699e17-324d-48df-a118-37822782a2a2\") " pod="kube-system/coredns-674b8bbfcf-q96lv" Sep 5 00:29:59.998438 kubelet[2773]: I0905 00:29:59.997817 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803aa214-3ed6-41f3-80bd-4a8ea923c160-config\") pod \"goldmane-54d579b49d-nnvn4\" (UID: \"803aa214-3ed6-41f3-80bd-4a8ea923c160\") " pod="calico-system/goldmane-54d579b49d-nnvn4" Sep 5 
00:29:59.998438 kubelet[2773]: I0905 00:29:59.997845 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/803aa214-3ed6-41f3-80bd-4a8ea923c160-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-nnvn4\" (UID: \"803aa214-3ed6-41f3-80bd-4a8ea923c160\") " pod="calico-system/goldmane-54d579b49d-nnvn4" Sep 5 00:29:59.998438 kubelet[2773]: I0905 00:29:59.997924 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-whisker-ca-bundle\") pod \"whisker-7d779c7f6b-wdm5s\" (UID: \"f0bed27a-b5ea-4248-90cd-67ba48cc98d5\") " pod="calico-system/whisker-7d779c7f6b-wdm5s" Sep 5 00:29:59.998438 kubelet[2773]: I0905 00:29:59.997950 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht6js\" (UniqueName: \"kubernetes.io/projected/5b699e17-324d-48df-a118-37822782a2a2-kube-api-access-ht6js\") pod \"coredns-674b8bbfcf-q96lv\" (UID: \"5b699e17-324d-48df-a118-37822782a2a2\") " pod="kube-system/coredns-674b8bbfcf-q96lv" Sep 5 00:29:59.998624 kubelet[2773]: I0905 00:29:59.998003 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhlp7\" (UniqueName: \"kubernetes.io/projected/803aa214-3ed6-41f3-80bd-4a8ea923c160-kube-api-access-bhlp7\") pod \"goldmane-54d579b49d-nnvn4\" (UID: \"803aa214-3ed6-41f3-80bd-4a8ea923c160\") " pod="calico-system/goldmane-54d579b49d-nnvn4" Sep 5 00:29:59.998624 kubelet[2773]: I0905 00:29:59.998052 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dfr2\" (UniqueName: \"kubernetes.io/projected/510659b3-8cca-445b-950f-ac0a5af64459-kube-api-access-9dfr2\") pod \"calico-apiserver-854b5fb597-xvvm4\" (UID: 
\"510659b3-8cca-445b-950f-ac0a5af64459\") " pod="calico-apiserver/calico-apiserver-854b5fb597-xvvm4" Sep 5 00:29:59.998624 kubelet[2773]: I0905 00:29:59.998076 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hswh9\" (UniqueName: \"kubernetes.io/projected/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-kube-api-access-hswh9\") pod \"whisker-7d779c7f6b-wdm5s\" (UID: \"f0bed27a-b5ea-4248-90cd-67ba48cc98d5\") " pod="calico-system/whisker-7d779c7f6b-wdm5s" Sep 5 00:29:59.998624 kubelet[2773]: I0905 00:29:59.998100 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7771c0c-5e2c-4ff9-a260-3beb4b4ae172-tigera-ca-bundle\") pod \"calico-kube-controllers-548d8ddcf7-rxxwx\" (UID: \"e7771c0c-5e2c-4ff9-a260-3beb4b4ae172\") " pod="calico-system/calico-kube-controllers-548d8ddcf7-rxxwx" Sep 5 00:29:59.998624 kubelet[2773]: I0905 00:29:59.998120 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bae6087-a995-43b4-89e8-f06b7acab6f3-config-volume\") pod \"coredns-674b8bbfcf-mg2ps\" (UID: \"1bae6087-a995-43b4-89e8-f06b7acab6f3\") " pod="kube-system/coredns-674b8bbfcf-mg2ps" Sep 5 00:29:59.998807 kubelet[2773]: I0905 00:29:59.998138 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/803aa214-3ed6-41f3-80bd-4a8ea923c160-goldmane-key-pair\") pod \"goldmane-54d579b49d-nnvn4\" (UID: \"803aa214-3ed6-41f3-80bd-4a8ea923c160\") " pod="calico-system/goldmane-54d579b49d-nnvn4" Sep 5 00:29:59.998807 kubelet[2773]: I0905 00:29:59.998161 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/a3ced8ed-a418-4283-94ac-c3d443998f43-calico-apiserver-certs\") pod \"calico-apiserver-854b5fb597-sss76\" (UID: \"a3ced8ed-a418-4283-94ac-c3d443998f43\") " pod="calico-apiserver/calico-apiserver-854b5fb597-sss76" Sep 5 00:29:59.998807 kubelet[2773]: I0905 00:29:59.998195 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dpmx\" (UniqueName: \"kubernetes.io/projected/1bae6087-a995-43b4-89e8-f06b7acab6f3-kube-api-access-2dpmx\") pod \"coredns-674b8bbfcf-mg2ps\" (UID: \"1bae6087-a995-43b4-89e8-f06b7acab6f3\") " pod="kube-system/coredns-674b8bbfcf-mg2ps" Sep 5 00:29:59.998807 kubelet[2773]: I0905 00:29:59.998218 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/510659b3-8cca-445b-950f-ac0a5af64459-calico-apiserver-certs\") pod \"calico-apiserver-854b5fb597-xvvm4\" (UID: \"510659b3-8cca-445b-950f-ac0a5af64459\") " pod="calico-apiserver/calico-apiserver-854b5fb597-xvvm4" Sep 5 00:29:59.998807 kubelet[2773]: I0905 00:29:59.998243 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpw4n\" (UniqueName: \"kubernetes.io/projected/a3ced8ed-a418-4283-94ac-c3d443998f43-kube-api-access-lpw4n\") pod \"calico-apiserver-854b5fb597-sss76\" (UID: \"a3ced8ed-a418-4283-94ac-c3d443998f43\") " pod="calico-apiserver/calico-apiserver-854b5fb597-sss76" Sep 5 00:29:59.998982 kubelet[2773]: I0905 00:29:59.998268 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-whisker-backend-key-pair\") pod \"whisker-7d779c7f6b-wdm5s\" (UID: \"f0bed27a-b5ea-4248-90cd-67ba48cc98d5\") " pod="calico-system/whisker-7d779c7f6b-wdm5s" Sep 5 00:30:00.217611 kubelet[2773]: E0905 00:30:00.217438 2773 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:00.218377 containerd[1572]: time="2025-09-05T00:30:00.218265925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mg2ps,Uid:1bae6087-a995-43b4-89e8-f06b7acab6f3,Namespace:kube-system,Attempt:0,}" Sep 5 00:30:00.226847 kubelet[2773]: E0905 00:30:00.226789 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:00.228880 containerd[1572]: time="2025-09-05T00:30:00.227442433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q96lv,Uid:5b699e17-324d-48df-a118-37822782a2a2,Namespace:kube-system,Attempt:0,}" Sep 5 00:30:00.239146 containerd[1572]: time="2025-09-05T00:30:00.239043255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854b5fb597-xvvm4,Uid:510659b3-8cca-445b-950f-ac0a5af64459,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:30:00.248787 containerd[1572]: time="2025-09-05T00:30:00.248415251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548d8ddcf7-rxxwx,Uid:e7771c0c-5e2c-4ff9-a260-3beb4b4ae172,Namespace:calico-system,Attempt:0,}" Sep 5 00:30:00.260360 containerd[1572]: time="2025-09-05T00:30:00.260298254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nnvn4,Uid:803aa214-3ed6-41f3-80bd-4a8ea923c160,Namespace:calico-system,Attempt:0,}" Sep 5 00:30:00.268190 containerd[1572]: time="2025-09-05T00:30:00.268145536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854b5fb597-sss76,Uid:a3ced8ed-a418-4283-94ac-c3d443998f43,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:30:00.278049 containerd[1572]: time="2025-09-05T00:30:00.276166904Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7d779c7f6b-wdm5s,Uid:f0bed27a-b5ea-4248-90cd-67ba48cc98d5,Namespace:calico-system,Attempt:0,}" Sep 5 00:30:00.394164 containerd[1572]: time="2025-09-05T00:30:00.394009739Z" level=error msg="Failed to destroy network for sandbox \"35885aa0652ea25629343d0514a8575e8bea65ef80951e8e3cd67653fd162c67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.401183 containerd[1572]: time="2025-09-05T00:30:00.401145093Z" level=error msg="Failed to destroy network for sandbox \"22b25d4f43367b233b6fb3772ee29af73c853c571204757b0cb5c73ec4376751\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.401712 containerd[1572]: time="2025-09-05T00:30:00.401650201Z" level=error msg="Failed to destroy network for sandbox \"e472c6080e62029438aa788151a71ca7a5b8709294ce9ed5320828261cc16b66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.404239 containerd[1572]: time="2025-09-05T00:30:00.404190113Z" level=error msg="Failed to destroy network for sandbox \"81abb57894f9472e299adba9aad9b29fe16db8bb8e9b405fe0cd2fdedbaf5edd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.414659 containerd[1572]: time="2025-09-05T00:30:00.414240613Z" level=error msg="Failed to destroy network for sandbox \"6aaf94ecc57cf0f4b394a0ed0d150ec48de5511562b0b26e2f8bb0b707a54666\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.414848 containerd[1572]: time="2025-09-05T00:30:00.414655573Z" level=error msg="Failed to destroy network for sandbox \"8380fa86e52bdcedd4f82e448e4b47d4052a379163a672aa71b7845ff9c9ee06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.424789 containerd[1572]: time="2025-09-05T00:30:00.424736370Z" level=error msg="Failed to destroy network for sandbox \"2616398521f42f2794acc6aacaadaccc8790e3332b65da8793b9cf09ec98f76f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.501840 containerd[1572]: time="2025-09-05T00:30:00.501586567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mg2ps,Uid:1bae6087-a995-43b4-89e8-f06b7acab6f3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35885aa0652ea25629343d0514a8575e8bea65ef80951e8e3cd67653fd162c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.505358 containerd[1572]: time="2025-09-05T00:30:00.505256141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548d8ddcf7-rxxwx,Uid:e7771c0c-5e2c-4ff9-a260-3beb4b4ae172,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22b25d4f43367b233b6fb3772ee29af73c853c571204757b0cb5c73ec4376751\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 5 00:30:00.510617 kubelet[2773]: E0905 00:30:00.510556 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35885aa0652ea25629343d0514a8575e8bea65ef80951e8e3cd67653fd162c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.510837 kubelet[2773]: E0905 00:30:00.510551 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22b25d4f43367b233b6fb3772ee29af73c853c571204757b0cb5c73ec4376751\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.546194 kubelet[2773]: E0905 00:30:00.546086 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35885aa0652ea25629343d0514a8575e8bea65ef80951e8e3cd67653fd162c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mg2ps" Sep 5 00:30:00.546194 kubelet[2773]: E0905 00:30:00.546181 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35885aa0652ea25629343d0514a8575e8bea65ef80951e8e3cd67653fd162c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mg2ps" Sep 5 00:30:00.546617 kubelet[2773]: E0905 00:30:00.546254 2773 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mg2ps_kube-system(1bae6087-a995-43b4-89e8-f06b7acab6f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mg2ps_kube-system(1bae6087-a995-43b4-89e8-f06b7acab6f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35885aa0652ea25629343d0514a8575e8bea65ef80951e8e3cd67653fd162c67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mg2ps" podUID="1bae6087-a995-43b4-89e8-f06b7acab6f3" Sep 5 00:30:00.546617 kubelet[2773]: E0905 00:30:00.546089 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22b25d4f43367b233b6fb3772ee29af73c853c571204757b0cb5c73ec4376751\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-548d8ddcf7-rxxwx" Sep 5 00:30:00.546617 kubelet[2773]: E0905 00:30:00.546452 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22b25d4f43367b233b6fb3772ee29af73c853c571204757b0cb5c73ec4376751\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-548d8ddcf7-rxxwx" Sep 5 00:30:00.546782 kubelet[2773]: E0905 00:30:00.546511 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-548d8ddcf7-rxxwx_calico-system(e7771c0c-5e2c-4ff9-a260-3beb4b4ae172)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-548d8ddcf7-rxxwx_calico-system(e7771c0c-5e2c-4ff9-a260-3beb4b4ae172)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22b25d4f43367b233b6fb3772ee29af73c853c571204757b0cb5c73ec4376751\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-548d8ddcf7-rxxwx" podUID="e7771c0c-5e2c-4ff9-a260-3beb4b4ae172" Sep 5 00:30:00.564078 containerd[1572]: time="2025-09-05T00:30:00.563882674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q96lv,Uid:5b699e17-324d-48df-a118-37822782a2a2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e472c6080e62029438aa788151a71ca7a5b8709294ce9ed5320828261cc16b66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.564373 kubelet[2773]: E0905 00:30:00.564320 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e472c6080e62029438aa788151a71ca7a5b8709294ce9ed5320828261cc16b66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.564541 kubelet[2773]: E0905 00:30:00.564414 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e472c6080e62029438aa788151a71ca7a5b8709294ce9ed5320828261cc16b66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q96lv" Sep 5 00:30:00.564541 kubelet[2773]: E0905 00:30:00.564443 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e472c6080e62029438aa788151a71ca7a5b8709294ce9ed5320828261cc16b66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q96lv" Sep 5 00:30:00.564541 kubelet[2773]: E0905 00:30:00.564518 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q96lv_kube-system(5b699e17-324d-48df-a118-37822782a2a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-q96lv_kube-system(5b699e17-324d-48df-a118-37822782a2a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e472c6080e62029438aa788151a71ca7a5b8709294ce9ed5320828261cc16b66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-q96lv" podUID="5b699e17-324d-48df-a118-37822782a2a2" Sep 5 00:30:00.566157 containerd[1572]: time="2025-09-05T00:30:00.566088077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nnvn4,Uid:803aa214-3ed6-41f3-80bd-4a8ea923c160,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"81abb57894f9472e299adba9aad9b29fe16db8bb8e9b405fe0cd2fdedbaf5edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.566852 kubelet[2773]: E0905 00:30:00.566785 2773 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81abb57894f9472e299adba9aad9b29fe16db8bb8e9b405fe0cd2fdedbaf5edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.566953 kubelet[2773]: E0905 00:30:00.566886 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81abb57894f9472e299adba9aad9b29fe16db8bb8e9b405fe0cd2fdedbaf5edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-nnvn4" Sep 5 00:30:00.566953 kubelet[2773]: E0905 00:30:00.566918 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81abb57894f9472e299adba9aad9b29fe16db8bb8e9b405fe0cd2fdedbaf5edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-nnvn4" Sep 5 00:30:00.567143 kubelet[2773]: E0905 00:30:00.566990 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-nnvn4_calico-system(803aa214-3ed6-41f3-80bd-4a8ea923c160)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-nnvn4_calico-system(803aa214-3ed6-41f3-80bd-4a8ea923c160)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81abb57894f9472e299adba9aad9b29fe16db8bb8e9b405fe0cd2fdedbaf5edd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-nnvn4" podUID="803aa214-3ed6-41f3-80bd-4a8ea923c160" Sep 5 00:30:00.568894 containerd[1572]: time="2025-09-05T00:30:00.568776387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d779c7f6b-wdm5s,Uid:f0bed27a-b5ea-4248-90cd-67ba48cc98d5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aaf94ecc57cf0f4b394a0ed0d150ec48de5511562b0b26e2f8bb0b707a54666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.569192 kubelet[2773]: E0905 00:30:00.569121 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aaf94ecc57cf0f4b394a0ed0d150ec48de5511562b0b26e2f8bb0b707a54666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.569263 kubelet[2773]: E0905 00:30:00.569193 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aaf94ecc57cf0f4b394a0ed0d150ec48de5511562b0b26e2f8bb0b707a54666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d779c7f6b-wdm5s" Sep 5 00:30:00.569263 kubelet[2773]: E0905 00:30:00.569224 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aaf94ecc57cf0f4b394a0ed0d150ec48de5511562b0b26e2f8bb0b707a54666\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d779c7f6b-wdm5s" Sep 5 00:30:00.569380 kubelet[2773]: E0905 00:30:00.569315 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d779c7f6b-wdm5s_calico-system(f0bed27a-b5ea-4248-90cd-67ba48cc98d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d779c7f6b-wdm5s_calico-system(f0bed27a-b5ea-4248-90cd-67ba48cc98d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6aaf94ecc57cf0f4b394a0ed0d150ec48de5511562b0b26e2f8bb0b707a54666\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d779c7f6b-wdm5s" podUID="f0bed27a-b5ea-4248-90cd-67ba48cc98d5" Sep 5 00:30:00.572631 containerd[1572]: time="2025-09-05T00:30:00.572536731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854b5fb597-xvvm4,Uid:510659b3-8cca-445b-950f-ac0a5af64459,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8380fa86e52bdcedd4f82e448e4b47d4052a379163a672aa71b7845ff9c9ee06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.572916 kubelet[2773]: E0905 00:30:00.572853 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8380fa86e52bdcedd4f82e448e4b47d4052a379163a672aa71b7845ff9c9ee06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 5 00:30:00.572916 kubelet[2773]: E0905 00:30:00.572895 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8380fa86e52bdcedd4f82e448e4b47d4052a379163a672aa71b7845ff9c9ee06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-854b5fb597-xvvm4" Sep 5 00:30:00.572916 kubelet[2773]: E0905 00:30:00.572917 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8380fa86e52bdcedd4f82e448e4b47d4052a379163a672aa71b7845ff9c9ee06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-854b5fb597-xvvm4" Sep 5 00:30:00.573237 kubelet[2773]: E0905 00:30:00.572965 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-854b5fb597-xvvm4_calico-apiserver(510659b3-8cca-445b-950f-ac0a5af64459)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-854b5fb597-xvvm4_calico-apiserver(510659b3-8cca-445b-950f-ac0a5af64459)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8380fa86e52bdcedd4f82e448e4b47d4052a379163a672aa71b7845ff9c9ee06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-854b5fb597-xvvm4" podUID="510659b3-8cca-445b-950f-ac0a5af64459" Sep 5 00:30:00.574569 containerd[1572]: time="2025-09-05T00:30:00.574492936Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-854b5fb597-sss76,Uid:a3ced8ed-a418-4283-94ac-c3d443998f43,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2616398521f42f2794acc6aacaadaccc8790e3332b65da8793b9cf09ec98f76f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.574747 kubelet[2773]: E0905 00:30:00.574697 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2616398521f42f2794acc6aacaadaccc8790e3332b65da8793b9cf09ec98f76f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:00.574747 kubelet[2773]: E0905 00:30:00.574734 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2616398521f42f2794acc6aacaadaccc8790e3332b65da8793b9cf09ec98f76f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-854b5fb597-sss76" Sep 5 00:30:00.574747 kubelet[2773]: E0905 00:30:00.574752 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2616398521f42f2794acc6aacaadaccc8790e3332b65da8793b9cf09ec98f76f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-854b5fb597-sss76" Sep 5 00:30:00.575123 kubelet[2773]: E0905 00:30:00.574803 2773 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-854b5fb597-sss76_calico-apiserver(a3ced8ed-a418-4283-94ac-c3d443998f43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-854b5fb597-sss76_calico-apiserver(a3ced8ed-a418-4283-94ac-c3d443998f43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2616398521f42f2794acc6aacaadaccc8790e3332b65da8793b9cf09ec98f76f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-854b5fb597-sss76" podUID="a3ced8ed-a418-4283-94ac-c3d443998f43" Sep 5 00:30:00.922387 systemd[1]: Created slice kubepods-besteffort-pod1e677ca4_ae66_4c9c_848d_15fc5e784c81.slice - libcontainer container kubepods-besteffort-pod1e677ca4_ae66_4c9c_848d_15fc5e784c81.slice. Sep 5 00:30:00.924798 containerd[1572]: time="2025-09-05T00:30:00.924749802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5g8f,Uid:1e677ca4-ae66-4c9c-848d-15fc5e784c81,Namespace:calico-system,Attempt:0,}" Sep 5 00:30:01.038994 containerd[1572]: time="2025-09-05T00:30:01.038921933Z" level=error msg="Failed to destroy network for sandbox \"6692097957f7a9e88da8225d719001b52a07c8d6235b04e85f474ab2303c4e8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:01.041620 containerd[1572]: time="2025-09-05T00:30:01.041570938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5g8f,Uid:1e677ca4-ae66-4c9c-848d-15fc5e784c81,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6692097957f7a9e88da8225d719001b52a07c8d6235b04e85f474ab2303c4e8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:01.041868 kubelet[2773]: E0905 00:30:01.041826 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6692097957f7a9e88da8225d719001b52a07c8d6235b04e85f474ab2303c4e8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:01.042312 kubelet[2773]: E0905 00:30:01.041897 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6692097957f7a9e88da8225d719001b52a07c8d6235b04e85f474ab2303c4e8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g5g8f" Sep 5 00:30:01.042312 kubelet[2773]: E0905 00:30:01.041919 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6692097957f7a9e88da8225d719001b52a07c8d6235b04e85f474ab2303c4e8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g5g8f" Sep 5 00:30:01.042312 kubelet[2773]: E0905 00:30:01.041979 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g5g8f_calico-system(1e677ca4-ae66-4c9c-848d-15fc5e784c81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-g5g8f_calico-system(1e677ca4-ae66-4c9c-848d-15fc5e784c81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6692097957f7a9e88da8225d719001b52a07c8d6235b04e85f474ab2303c4e8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g5g8f" podUID="1e677ca4-ae66-4c9c-848d-15fc5e784c81" Sep 5 00:30:01.041977 systemd[1]: run-netns-cni\x2d47107c45\x2daeff\x2d18cd\x2d74a7\x2d97b2773c603a.mount: Deactivated successfully. Sep 5 00:30:09.378368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329487203.mount: Deactivated successfully. Sep 5 00:30:10.586368 containerd[1572]: time="2025-09-05T00:30:10.586241013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:10.587260 containerd[1572]: time="2025-09-05T00:30:10.587219129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 5 00:30:10.588797 containerd[1572]: time="2025-09-05T00:30:10.588762806Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:10.591346 containerd[1572]: time="2025-09-05T00:30:10.591311250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:10.592402 containerd[1572]: time="2025-09-05T00:30:10.592319051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.596047596s" Sep 5 00:30:10.592402 containerd[1572]: time="2025-09-05T00:30:10.592393682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 5 00:30:10.622501 containerd[1572]: time="2025-09-05T00:30:10.622427276Z" level=info msg="CreateContainer within sandbox \"aa2962642d378463d9637f6ae8d32c31af80d2e6694b09d5445db3557f8b3275\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:30:10.634067 containerd[1572]: time="2025-09-05T00:30:10.633991091Z" level=info msg="Container 78744935d218bec9ce5e07ad189c03ce56974f61dce163d0f169da48825546a7: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:30:10.649334 containerd[1572]: time="2025-09-05T00:30:10.649251816Z" level=info msg="CreateContainer within sandbox \"aa2962642d378463d9637f6ae8d32c31af80d2e6694b09d5445db3557f8b3275\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"78744935d218bec9ce5e07ad189c03ce56974f61dce163d0f169da48825546a7\"" Sep 5 00:30:10.649989 containerd[1572]: time="2025-09-05T00:30:10.649947612Z" level=info msg="StartContainer for \"78744935d218bec9ce5e07ad189c03ce56974f61dce163d0f169da48825546a7\"" Sep 5 00:30:10.652255 containerd[1572]: time="2025-09-05T00:30:10.652217352Z" level=info msg="connecting to shim 78744935d218bec9ce5e07ad189c03ce56974f61dce163d0f169da48825546a7" address="unix:///run/containerd/s/ba287f95e3c90534cad16c35fed7a396f71bad9069806a0616bbdf668d49d80f" protocol=ttrpc version=3 Sep 5 00:30:10.683299 systemd[1]: Started cri-containerd-78744935d218bec9ce5e07ad189c03ce56974f61dce163d0f169da48825546a7.scope - libcontainer container 78744935d218bec9ce5e07ad189c03ce56974f61dce163d0f169da48825546a7. 
Sep 5 00:30:10.936768 kubelet[2773]: E0905 00:30:10.916392 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:10.937550 containerd[1572]: time="2025-09-05T00:30:10.916957489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q96lv,Uid:5b699e17-324d-48df-a118-37822782a2a2,Namespace:kube-system,Attempt:0,}" Sep 5 00:30:11.028598 containerd[1572]: time="2025-09-05T00:30:11.028541402Z" level=info msg="StartContainer for \"78744935d218bec9ce5e07ad189c03ce56974f61dce163d0f169da48825546a7\" returns successfully" Sep 5 00:30:11.088069 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:30:11.088249 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 5 00:30:11.133198 containerd[1572]: time="2025-09-05T00:30:11.133136370Z" level=error msg="Failed to destroy network for sandbox \"fa6bb31a0c1a41c3311a4978c011fcbe2cbbdc9dc9b8e63ed7023ac7aa3a26da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:11.134866 containerd[1572]: time="2025-09-05T00:30:11.134773272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q96lv,Uid:5b699e17-324d-48df-a118-37822782a2a2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa6bb31a0c1a41c3311a4978c011fcbe2cbbdc9dc9b8e63ed7023ac7aa3a26da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:11.135197 kubelet[2773]: E0905 00:30:11.135142 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"fa6bb31a0c1a41c3311a4978c011fcbe2cbbdc9dc9b8e63ed7023ac7aa3a26da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:30:11.135308 kubelet[2773]: E0905 00:30:11.135245 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa6bb31a0c1a41c3311a4978c011fcbe2cbbdc9dc9b8e63ed7023ac7aa3a26da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q96lv" Sep 5 00:30:11.135308 kubelet[2773]: E0905 00:30:11.135282 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa6bb31a0c1a41c3311a4978c011fcbe2cbbdc9dc9b8e63ed7023ac7aa3a26da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q96lv" Sep 5 00:30:11.135482 kubelet[2773]: E0905 00:30:11.135368 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q96lv_kube-system(5b699e17-324d-48df-a118-37822782a2a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-q96lv_kube-system(5b699e17-324d-48df-a118-37822782a2a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa6bb31a0c1a41c3311a4978c011fcbe2cbbdc9dc9b8e63ed7023ac7aa3a26da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-q96lv" podUID="5b699e17-324d-48df-a118-37822782a2a2" Sep 5 00:30:11.219538 kubelet[2773]: I0905 00:30:11.219236 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-whisker-ca-bundle\") pod \"f0bed27a-b5ea-4248-90cd-67ba48cc98d5\" (UID: \"f0bed27a-b5ea-4248-90cd-67ba48cc98d5\") " Sep 5 00:30:11.219538 kubelet[2773]: I0905 00:30:11.219294 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hswh9\" (UniqueName: \"kubernetes.io/projected/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-kube-api-access-hswh9\") pod \"f0bed27a-b5ea-4248-90cd-67ba48cc98d5\" (UID: \"f0bed27a-b5ea-4248-90cd-67ba48cc98d5\") " Sep 5 00:30:11.219538 kubelet[2773]: I0905 00:30:11.219332 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-whisker-backend-key-pair\") pod \"f0bed27a-b5ea-4248-90cd-67ba48cc98d5\" (UID: \"f0bed27a-b5ea-4248-90cd-67ba48cc98d5\") " Sep 5 00:30:11.222536 kubelet[2773]: I0905 00:30:11.220820 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f0bed27a-b5ea-4248-90cd-67ba48cc98d5" (UID: "f0bed27a-b5ea-4248-90cd-67ba48cc98d5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:30:11.235418 kubelet[2773]: I0905 00:30:11.235199 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-kube-api-access-hswh9" (OuterVolumeSpecName: "kube-api-access-hswh9") pod "f0bed27a-b5ea-4248-90cd-67ba48cc98d5" (UID: "f0bed27a-b5ea-4248-90cd-67ba48cc98d5"). 
InnerVolumeSpecName "kube-api-access-hswh9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:30:11.239319 kubelet[2773]: I0905 00:30:11.239271 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f0bed27a-b5ea-4248-90cd-67ba48cc98d5" (UID: "f0bed27a-b5ea-4248-90cd-67ba48cc98d5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 00:30:11.319766 kubelet[2773]: I0905 00:30:11.319705 2773 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:30:11.319766 kubelet[2773]: I0905 00:30:11.319758 2773 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:30:11.319766 kubelet[2773]: I0905 00:30:11.319771 2773 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hswh9\" (UniqueName: \"kubernetes.io/projected/f0bed27a-b5ea-4248-90cd-67ba48cc98d5-kube-api-access-hswh9\") on node \"localhost\" DevicePath \"\"" Sep 5 00:30:11.383690 containerd[1572]: time="2025-09-05T00:30:11.383637141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78744935d218bec9ce5e07ad189c03ce56974f61dce163d0f169da48825546a7\" id:\"a82187f686099eec5f32f45e0efb63db280153a7a1cabbeadda1d88904fc0f36\" pid:3985 exit_status:1 exited_at:{seconds:1757032211 nanos:373383246}" Sep 5 00:30:11.600432 systemd[1]: var-lib-kubelet-pods-f0bed27a\x2db5ea\x2d4248\x2d90cd\x2d67ba48cc98d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhswh9.mount: Deactivated successfully. 
Sep 5 00:30:11.600557 systemd[1]: var-lib-kubelet-pods-f0bed27a\x2db5ea\x2d4248\x2d90cd\x2d67ba48cc98d5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 00:30:11.916367 kubelet[2773]: E0905 00:30:11.916286 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:11.916892 containerd[1572]: time="2025-09-05T00:30:11.916841829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mg2ps,Uid:1bae6087-a995-43b4-89e8-f06b7acab6f3,Namespace:kube-system,Attempt:0,}" Sep 5 00:30:11.929348 systemd[1]: Removed slice kubepods-besteffort-podf0bed27a_b5ea_4248_90cd_67ba48cc98d5.slice - libcontainer container kubepods-besteffort-podf0bed27a_b5ea_4248_90cd_67ba48cc98d5.slice. Sep 5 00:30:12.096752 systemd-networkd[1496]: calibd02291acf3: Link UP Sep 5 00:30:12.097081 systemd-networkd[1496]: calibd02291acf3: Gained carrier Sep 5 00:30:12.108492 kubelet[2773]: I0905 00:30:12.108378 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gvmt8" podStartSLOduration=2.834324369 podStartE2EDuration="24.108356778s" podCreationTimestamp="2025-09-05 00:29:48 +0000 UTC" firstStartedPulling="2025-09-05 00:29:49.31940438 +0000 UTC m=+17.506492602" lastFinishedPulling="2025-09-05 00:30:10.593436789 +0000 UTC m=+38.780525011" observedRunningTime="2025-09-05 00:30:11.267656294 +0000 UTC m=+39.454744516" watchObservedRunningTime="2025-09-05 00:30:12.108356778 +0000 UTC m=+40.295445000" Sep 5 00:30:12.111788 containerd[1572]: 2025-09-05 00:30:11.945 [INFO][4010] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:30:12.111788 containerd[1572]: 2025-09-05 00:30:11.977 [INFO][4010] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0 coredns-674b8bbfcf- kube-system 1bae6087-a995-43b4-89e8-f06b7acab6f3 877 0 2025-09-05 00:29:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mg2ps eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibd02291acf3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2ps" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mg2ps-" Sep 5 00:30:12.111788 containerd[1572]: 2025-09-05 00:30:11.977 [INFO][4010] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2ps" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0" Sep 5 00:30:12.111788 containerd[1572]: 2025-09-05 00:30:12.052 [INFO][4024] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" HandleID="k8s-pod-network.be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" Workload="localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0" Sep 5 00:30:12.111956 containerd[1572]: 2025-09-05 00:30:12.053 [INFO][4024] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" HandleID="k8s-pod-network.be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" Workload="localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0730), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mg2ps", "timestamp":"2025-09-05 00:30:12.052947885 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:30:12.111956 containerd[1572]: 2025-09-05 00:30:12.053 [INFO][4024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:30:12.111956 containerd[1572]: 2025-09-05 00:30:12.053 [INFO][4024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:30:12.111956 containerd[1572]: 2025-09-05 00:30:12.054 [INFO][4024] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:30:12.111956 containerd[1572]: 2025-09-05 00:30:12.061 [INFO][4024] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" host="localhost" Sep 5 00:30:12.111956 containerd[1572]: 2025-09-05 00:30:12.067 [INFO][4024] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:30:12.111956 containerd[1572]: 2025-09-05 00:30:12.071 [INFO][4024] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:30:12.111956 containerd[1572]: 2025-09-05 00:30:12.073 [INFO][4024] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:12.111956 containerd[1572]: 2025-09-05 00:30:12.075 [INFO][4024] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:12.111956 containerd[1572]: 2025-09-05 00:30:12.075 [INFO][4024] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" host="localhost" Sep 5 00:30:12.112249 containerd[1572]: 2025-09-05 00:30:12.076 [INFO][4024] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11 Sep 5 00:30:12.112249 containerd[1572]: 2025-09-05 00:30:12.080 [INFO][4024] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" host="localhost" Sep 5 00:30:12.112249 containerd[1572]: 2025-09-05 00:30:12.084 [INFO][4024] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" host="localhost" Sep 5 00:30:12.112249 containerd[1572]: 2025-09-05 00:30:12.085 [INFO][4024] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" host="localhost" Sep 5 00:30:12.112249 containerd[1572]: 2025-09-05 00:30:12.085 [INFO][4024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:30:12.112249 containerd[1572]: 2025-09-05 00:30:12.085 [INFO][4024] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" HandleID="k8s-pod-network.be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" Workload="localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0" Sep 5 00:30:12.112377 containerd[1572]: 2025-09-05 00:30:12.088 [INFO][4010] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2ps" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1bae6087-a995-43b4-89e8-f06b7acab6f3", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mg2ps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd02291acf3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:12.112461 containerd[1572]: 2025-09-05 00:30:12.088 [INFO][4010] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2ps" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0" Sep 5 00:30:12.112461 containerd[1572]: 2025-09-05 00:30:12.088 [INFO][4010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd02291acf3 ContainerID="be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2ps" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0" Sep 5 00:30:12.112461 containerd[1572]: 2025-09-05 00:30:12.097 [INFO][4010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2ps" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0" Sep 5 00:30:12.112534 containerd[1572]: 2025-09-05 00:30:12.097 [INFO][4010] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2ps" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1bae6087-a995-43b4-89e8-f06b7acab6f3", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11", Pod:"coredns-674b8bbfcf-mg2ps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd02291acf3", MAC:"2e:89:d2:0d:ac:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:12.112534 containerd[1572]: 2025-09-05 00:30:12.105 [INFO][4010] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" Namespace="kube-system" Pod="coredns-674b8bbfcf-mg2ps" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mg2ps-eth0" Sep 5 00:30:12.258592 containerd[1572]: time="2025-09-05T00:30:12.253037870Z" level=info msg="connecting to shim be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11" address="unix:///run/containerd/s/2a8cefc464c0a6f5ab3ad0e48f1d6b6683f29c808b2c6bb56d1c635c61ee813e" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:30:12.308207 systemd[1]: Started cri-containerd-be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11.scope - libcontainer container be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11. Sep 5 00:30:12.329673 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:30:12.389869 systemd[1]: Created slice kubepods-besteffort-podc647e923_d01d_4132_a5ca_cc36f8ef26fc.slice - libcontainer container kubepods-besteffort-podc647e923_d01d_4132_a5ca_cc36f8ef26fc.slice. 
Sep 5 00:30:12.415345 containerd[1572]: time="2025-09-05T00:30:12.415267937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78744935d218bec9ce5e07ad189c03ce56974f61dce163d0f169da48825546a7\" id:\"2a195754ddda7ca9b1314ae6f77913ce494fc9c3eb6df4564234354a09acfb81\" pid:4065 exit_status:1 exited_at:{seconds:1757032212 nanos:414763612}" Sep 5 00:30:12.426562 kubelet[2773]: I0905 00:30:12.426481 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrqlb\" (UniqueName: \"kubernetes.io/projected/c647e923-d01d-4132-a5ca-cc36f8ef26fc-kube-api-access-rrqlb\") pod \"whisker-7d9959749-mvpcl\" (UID: \"c647e923-d01d-4132-a5ca-cc36f8ef26fc\") " pod="calico-system/whisker-7d9959749-mvpcl" Sep 5 00:30:12.426562 kubelet[2773]: I0905 00:30:12.426557 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c647e923-d01d-4132-a5ca-cc36f8ef26fc-whisker-ca-bundle\") pod \"whisker-7d9959749-mvpcl\" (UID: \"c647e923-d01d-4132-a5ca-cc36f8ef26fc\") " pod="calico-system/whisker-7d9959749-mvpcl" Sep 5 00:30:12.426883 kubelet[2773]: I0905 00:30:12.426586 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c647e923-d01d-4132-a5ca-cc36f8ef26fc-whisker-backend-key-pair\") pod \"whisker-7d9959749-mvpcl\" (UID: \"c647e923-d01d-4132-a5ca-cc36f8ef26fc\") " pod="calico-system/whisker-7d9959749-mvpcl" Sep 5 00:30:12.427789 containerd[1572]: time="2025-09-05T00:30:12.427645699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mg2ps,Uid:1bae6087-a995-43b4-89e8-f06b7acab6f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11\"" Sep 5 00:30:12.429137 kubelet[2773]: E0905 00:30:12.429059 2773 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:12.437019 containerd[1572]: time="2025-09-05T00:30:12.436941525Z" level=info msg="CreateContainer within sandbox \"be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:30:12.475593 containerd[1572]: time="2025-09-05T00:30:12.475529939Z" level=info msg="Container 00a86e7b8970e50e243161095206fe66ffa8965cce4e5dcda0d90657c1663cee: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:30:12.483219 containerd[1572]: time="2025-09-05T00:30:12.483125656Z" level=info msg="CreateContainer within sandbox \"be778cacb75e6b19d9ea279ba8dc539fecf631f5391ec3c35316d1476dbffd11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00a86e7b8970e50e243161095206fe66ffa8965cce4e5dcda0d90657c1663cee\"" Sep 5 00:30:12.484017 containerd[1572]: time="2025-09-05T00:30:12.483946236Z" level=info msg="StartContainer for \"00a86e7b8970e50e243161095206fe66ffa8965cce4e5dcda0d90657c1663cee\"" Sep 5 00:30:12.485322 containerd[1572]: time="2025-09-05T00:30:12.485284758Z" level=info msg="connecting to shim 00a86e7b8970e50e243161095206fe66ffa8965cce4e5dcda0d90657c1663cee" address="unix:///run/containerd/s/2a8cefc464c0a6f5ab3ad0e48f1d6b6683f29c808b2c6bb56d1c635c61ee813e" protocol=ttrpc version=3 Sep 5 00:30:12.530372 systemd[1]: Started cri-containerd-00a86e7b8970e50e243161095206fe66ffa8965cce4e5dcda0d90657c1663cee.scope - libcontainer container 00a86e7b8970e50e243161095206fe66ffa8965cce4e5dcda0d90657c1663cee. Sep 5 00:30:12.619734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2690057306.mount: Deactivated successfully. 
Sep 5 00:30:12.673799 containerd[1572]: time="2025-09-05T00:30:12.673740808Z" level=info msg="StartContainer for \"00a86e7b8970e50e243161095206fe66ffa8965cce4e5dcda0d90657c1663cee\" returns successfully" Sep 5 00:30:12.694755 containerd[1572]: time="2025-09-05T00:30:12.694675689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d9959749-mvpcl,Uid:c647e923-d01d-4132-a5ca-cc36f8ef26fc,Namespace:calico-system,Attempt:0,}" Sep 5 00:30:12.955716 systemd-networkd[1496]: cali19aca3d2186: Link UP Sep 5 00:30:12.957050 systemd-networkd[1496]: cali19aca3d2186: Gained carrier Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.772 [INFO][4233] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.808 [INFO][4233] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7d9959749--mvpcl-eth0 whisker-7d9959749- calico-system c647e923-d01d-4132-a5ca-cc36f8ef26fc 969 0 2025-09-05 00:30:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d9959749 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7d9959749-mvpcl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali19aca3d2186 [] [] }} ContainerID="3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" Namespace="calico-system" Pod="whisker-7d9959749-mvpcl" WorkloadEndpoint="localhost-k8s-whisker--7d9959749--mvpcl-" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.808 [INFO][4233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" Namespace="calico-system" Pod="whisker-7d9959749-mvpcl" WorkloadEndpoint="localhost-k8s-whisker--7d9959749--mvpcl-eth0" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 
00:30:12.881 [INFO][4255] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" HandleID="k8s-pod-network.3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" Workload="localhost-k8s-whisker--7d9959749--mvpcl-eth0" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.881 [INFO][4255] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" HandleID="k8s-pod-network.3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" Workload="localhost-k8s-whisker--7d9959749--mvpcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b1600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7d9959749-mvpcl", "timestamp":"2025-09-05 00:30:12.881119011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.881 [INFO][4255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.882 [INFO][4255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.882 [INFO][4255] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.893 [INFO][4255] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" host="localhost" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.900 [INFO][4255] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.907 [INFO][4255] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.910 [INFO][4255] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.915 [INFO][4255] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.916 [INFO][4255] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" host="localhost" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.920 [INFO][4255] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3 Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.934 [INFO][4255] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" host="localhost" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.944 [INFO][4255] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" host="localhost" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.944 [INFO][4255] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" host="localhost" Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.944 [INFO][4255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:30:12.986567 containerd[1572]: 2025-09-05 00:30:12.944 [INFO][4255] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" HandleID="k8s-pod-network.3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" Workload="localhost-k8s-whisker--7d9959749--mvpcl-eth0" Sep 5 00:30:12.987959 containerd[1572]: 2025-09-05 00:30:12.951 [INFO][4233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" Namespace="calico-system" Pod="whisker-7d9959749-mvpcl" WorkloadEndpoint="localhost-k8s-whisker--7d9959749--mvpcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d9959749--mvpcl-eth0", GenerateName:"whisker-7d9959749-", Namespace:"calico-system", SelfLink:"", UID:"c647e923-d01d-4132-a5ca-cc36f8ef26fc", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 30, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d9959749", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7d9959749-mvpcl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali19aca3d2186", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:12.987959 containerd[1572]: 2025-09-05 00:30:12.951 [INFO][4233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" Namespace="calico-system" Pod="whisker-7d9959749-mvpcl" WorkloadEndpoint="localhost-k8s-whisker--7d9959749--mvpcl-eth0" Sep 5 00:30:12.987959 containerd[1572]: 2025-09-05 00:30:12.951 [INFO][4233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19aca3d2186 ContainerID="3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" Namespace="calico-system" Pod="whisker-7d9959749-mvpcl" WorkloadEndpoint="localhost-k8s-whisker--7d9959749--mvpcl-eth0" Sep 5 00:30:12.987959 containerd[1572]: 2025-09-05 00:30:12.957 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" Namespace="calico-system" Pod="whisker-7d9959749-mvpcl" WorkloadEndpoint="localhost-k8s-whisker--7d9959749--mvpcl-eth0" Sep 5 00:30:12.987959 containerd[1572]: 2025-09-05 00:30:12.958 [INFO][4233] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" Namespace="calico-system" Pod="whisker-7d9959749-mvpcl" 
WorkloadEndpoint="localhost-k8s-whisker--7d9959749--mvpcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d9959749--mvpcl-eth0", GenerateName:"whisker-7d9959749-", Namespace:"calico-system", SelfLink:"", UID:"c647e923-d01d-4132-a5ca-cc36f8ef26fc", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 30, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d9959749", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3", Pod:"whisker-7d9959749-mvpcl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali19aca3d2186", MAC:"ce:75:0f:25:34:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:12.987959 containerd[1572]: 2025-09-05 00:30:12.980 [INFO][4233] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" Namespace="calico-system" Pod="whisker-7d9959749-mvpcl" WorkloadEndpoint="localhost-k8s-whisker--7d9959749--mvpcl-eth0" Sep 5 00:30:13.021672 containerd[1572]: time="2025-09-05T00:30:13.021593496Z" level=info msg="connecting to shim 
3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3" address="unix:///run/containerd/s/8c19c7050eec7a24bbca1de1c7f7a5cc6de4881819fc9b587642570317154125" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:30:13.055209 systemd[1]: Started cri-containerd-3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3.scope - libcontainer container 3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3. Sep 5 00:30:13.071357 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:30:13.104881 containerd[1572]: time="2025-09-05T00:30:13.104823417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d9959749-mvpcl,Uid:c647e923-d01d-4132-a5ca-cc36f8ef26fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3\"" Sep 5 00:30:13.107152 containerd[1572]: time="2025-09-05T00:30:13.107049234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 00:30:13.231623 kubelet[2773]: E0905 00:30:13.231314 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:13.261193 kubelet[2773]: I0905 00:30:13.261062 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mg2ps" podStartSLOduration=36.261042239 podStartE2EDuration="36.261042239s" podCreationTimestamp="2025-09-05 00:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:30:13.245220906 +0000 UTC m=+41.432309148" watchObservedRunningTime="2025-09-05 00:30:13.261042239 +0000 UTC m=+41.448130461" Sep 5 00:30:13.334355 systemd-networkd[1496]: calibd02291acf3: Gained IPv6LL Sep 5 00:30:13.348145 systemd-networkd[1496]: vxlan.calico: Link UP Sep 5 
00:30:13.348161 systemd-networkd[1496]: vxlan.calico: Gained carrier Sep 5 00:30:13.916722 containerd[1572]: time="2025-09-05T00:30:13.916417236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854b5fb597-sss76,Uid:a3ced8ed-a418-4283-94ac-c3d443998f43,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:30:13.916722 containerd[1572]: time="2025-09-05T00:30:13.916472219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548d8ddcf7-rxxwx,Uid:e7771c0c-5e2c-4ff9-a260-3beb4b4ae172,Namespace:calico-system,Attempt:0,}" Sep 5 00:30:13.918828 kubelet[2773]: I0905 00:30:13.918786 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0bed27a-b5ea-4248-90cd-67ba48cc98d5" path="/var/lib/kubelet/pods/f0bed27a-b5ea-4248-90cd-67ba48cc98d5/volumes" Sep 5 00:30:14.030813 systemd-networkd[1496]: cali042cdd5889e: Link UP Sep 5 00:30:14.032252 systemd-networkd[1496]: cali042cdd5889e: Gained carrier Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:13.960 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0 calico-kube-controllers-548d8ddcf7- calico-system e7771c0c-5e2c-4ff9-a260-3beb4b4ae172 880 0 2025-09-05 00:29:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:548d8ddcf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-548d8ddcf7-rxxwx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali042cdd5889e [] [] }} ContainerID="067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" Namespace="calico-system" Pod="calico-kube-controllers-548d8ddcf7-rxxwx" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:13.961 [INFO][4439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" Namespace="calico-system" Pod="calico-kube-controllers-548d8ddcf7-rxxwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:13.993 [INFO][4459] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" HandleID="k8s-pod-network.067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" Workload="localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:13.994 [INFO][4459] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" HandleID="k8s-pod-network.067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" Workload="localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026d980), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-548d8ddcf7-rxxwx", "timestamp":"2025-09-05 00:30:13.993688154 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:13.994 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:13.994 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:13.994 [INFO][4459] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.001 [INFO][4459] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" host="localhost" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.006 [INFO][4459] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.010 [INFO][4459] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.012 [INFO][4459] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.014 [INFO][4459] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.014 [INFO][4459] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" host="localhost" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.015 [INFO][4459] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551 Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.019 [INFO][4459] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" host="localhost" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.025 [INFO][4459] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" host="localhost" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.025 [INFO][4459] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" host="localhost" Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.025 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:30:14.045155 containerd[1572]: 2025-09-05 00:30:14.025 [INFO][4459] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" HandleID="k8s-pod-network.067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" Workload="localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0" Sep 5 00:30:14.046585 containerd[1572]: 2025-09-05 00:30:14.028 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" Namespace="calico-system" Pod="calico-kube-controllers-548d8ddcf7-rxxwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0", GenerateName:"calico-kube-controllers-548d8ddcf7-", Namespace:"calico-system", SelfLink:"", UID:"e7771c0c-5e2c-4ff9-a260-3beb4b4ae172", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548d8ddcf7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-548d8ddcf7-rxxwx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali042cdd5889e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:14.046585 containerd[1572]: 2025-09-05 00:30:14.028 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" Namespace="calico-system" Pod="calico-kube-controllers-548d8ddcf7-rxxwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0" Sep 5 00:30:14.046585 containerd[1572]: 2025-09-05 00:30:14.028 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali042cdd5889e ContainerID="067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" Namespace="calico-system" Pod="calico-kube-controllers-548d8ddcf7-rxxwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0" Sep 5 00:30:14.046585 containerd[1572]: 2025-09-05 00:30:14.032 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" Namespace="calico-system" Pod="calico-kube-controllers-548d8ddcf7-rxxwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0" Sep 5 00:30:14.046585 containerd[1572]: 2025-09-05 
00:30:14.033 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" Namespace="calico-system" Pod="calico-kube-controllers-548d8ddcf7-rxxwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0", GenerateName:"calico-kube-controllers-548d8ddcf7-", Namespace:"calico-system", SelfLink:"", UID:"e7771c0c-5e2c-4ff9-a260-3beb4b4ae172", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548d8ddcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551", Pod:"calico-kube-controllers-548d8ddcf7-rxxwx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali042cdd5889e", MAC:"8e:49:50:06:df:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:14.046585 containerd[1572]: 2025-09-05 
00:30:14.041 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" Namespace="calico-system" Pod="calico-kube-controllers-548d8ddcf7-rxxwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548d8ddcf7--rxxwx-eth0" Sep 5 00:30:14.074636 containerd[1572]: time="2025-09-05T00:30:14.074588906Z" level=info msg="connecting to shim 067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551" address="unix:///run/containerd/s/1d76ab91832df556aa86d43e6e691038cf83aca6bfc606a69b7f42953efc1536" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:30:14.106291 systemd[1]: Started cri-containerd-067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551.scope - libcontainer container 067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551. Sep 5 00:30:14.125817 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:30:14.141319 systemd-networkd[1496]: cali44ea479383c: Link UP Sep 5 00:30:14.141882 systemd-networkd[1496]: cali44ea479383c: Gained carrier Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:13.963 [INFO][4428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0 calico-apiserver-854b5fb597- calico-apiserver a3ced8ed-a418-4283-94ac-c3d443998f43 887 0 2025-09-05 00:29:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:854b5fb597 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-854b5fb597-sss76 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali44ea479383c [] [] }} 
ContainerID="927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-sss76" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--sss76-" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:13.963 [INFO][4428] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-sss76" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:13.998 [INFO][4458] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" HandleID="k8s-pod-network.927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" Workload="localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:13.998 [INFO][4458] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" HandleID="k8s-pod-network.927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" Workload="localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bba30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-854b5fb597-sss76", "timestamp":"2025-09-05 00:30:13.99818333 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:13.998 [INFO][4458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.025 [INFO][4458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.025 [INFO][4458] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.102 [INFO][4458] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" host="localhost" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.109 [INFO][4458] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.114 [INFO][4458] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.116 [INFO][4458] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.119 [INFO][4458] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.119 [INFO][4458] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" host="localhost" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.121 [INFO][4458] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9 Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.127 [INFO][4458] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" host="localhost" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.135 [INFO][4458] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" host="localhost" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.135 [INFO][4458] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" host="localhost" Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.135 [INFO][4458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:30:14.163528 containerd[1572]: 2025-09-05 00:30:14.135 [INFO][4458] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" HandleID="k8s-pod-network.927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" Workload="localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0" Sep 5 00:30:14.164148 containerd[1572]: 2025-09-05 00:30:14.139 [INFO][4428] cni-plugin/k8s.go 418: Populated endpoint ContainerID="927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-sss76" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0", GenerateName:"calico-apiserver-854b5fb597-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3ced8ed-a418-4283-94ac-c3d443998f43", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854b5fb597", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-854b5fb597-sss76", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44ea479383c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:14.164148 containerd[1572]: 2025-09-05 00:30:14.139 [INFO][4428] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-sss76" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0" Sep 5 00:30:14.164148 containerd[1572]: 2025-09-05 00:30:14.139 [INFO][4428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44ea479383c ContainerID="927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-sss76" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0" Sep 5 00:30:14.164148 containerd[1572]: 2025-09-05 00:30:14.142 [INFO][4428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-sss76" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0" Sep 5 00:30:14.164148 containerd[1572]: 2025-09-05 
00:30:14.142 [INFO][4428] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-sss76" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0", GenerateName:"calico-apiserver-854b5fb597-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3ced8ed-a418-4283-94ac-c3d443998f43", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854b5fb597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9", Pod:"calico-apiserver-854b5fb597-sss76", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44ea479383c", MAC:"c6:cf:68:db:ee:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:14.164148 containerd[1572]: 2025-09-05 00:30:14.155 [INFO][4428] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-sss76" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--sss76-eth0" Sep 5 00:30:14.171018 containerd[1572]: time="2025-09-05T00:30:14.170662153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548d8ddcf7-rxxwx,Uid:e7771c0c-5e2c-4ff9-a260-3beb4b4ae172,Namespace:calico-system,Attempt:0,} returns sandbox id \"067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551\"" Sep 5 00:30:14.195498 containerd[1572]: time="2025-09-05T00:30:14.195428150Z" level=info msg="connecting to shim 927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9" address="unix:///run/containerd/s/38acc860f02e8fba3b8cbd0b0675fadd8580c581f18e97909e83872798b7f4d2" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:30:14.222200 systemd[1]: Started cri-containerd-927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9.scope - libcontainer container 927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9. 
Sep 5 00:30:14.237596 kubelet[2773]: E0905 00:30:14.237535 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:14.238527 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:30:14.280129 containerd[1572]: time="2025-09-05T00:30:14.280053352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854b5fb597-sss76,Uid:a3ced8ed-a418-4283-94ac-c3d443998f43,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9\"" Sep 5 00:30:14.421294 systemd-networkd[1496]: vxlan.calico: Gained IPv6LL Sep 5 00:30:14.917282 containerd[1572]: time="2025-09-05T00:30:14.917221123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854b5fb597-xvvm4,Uid:510659b3-8cca-445b-950f-ac0a5af64459,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:30:14.928047 containerd[1572]: time="2025-09-05T00:30:14.927976427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nnvn4,Uid:803aa214-3ed6-41f3-80bd-4a8ea923c160,Namespace:calico-system,Attempt:0,}" Sep 5 00:30:14.997279 systemd-networkd[1496]: cali19aca3d2186: Gained IPv6LL Sep 5 00:30:15.112804 systemd-networkd[1496]: cali7d2a7952377: Link UP Sep 5 00:30:15.114246 systemd-networkd[1496]: cali7d2a7952377: Gained carrier Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.018 [INFO][4588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0 calico-apiserver-854b5fb597- calico-apiserver 510659b3-8cca-445b-950f-ac0a5af64459 886 0 2025-09-05 00:29:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:854b5fb597 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-854b5fb597-xvvm4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7d2a7952377 [] [] }} ContainerID="fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-xvvm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--xvvm4-" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.018 [INFO][4588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-xvvm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.052 [INFO][4622] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" HandleID="k8s-pod-network.fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" Workload="localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.052 [INFO][4622] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" HandleID="k8s-pod-network.fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" Workload="localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-854b5fb597-xvvm4", "timestamp":"2025-09-05 00:30:15.051981488 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.052 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.052 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.052 [INFO][4622] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.065 [INFO][4622] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" host="localhost" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.070 [INFO][4622] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.079 [INFO][4622] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.083 [INFO][4622] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.087 [INFO][4622] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.087 [INFO][4622] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" host="localhost" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.089 [INFO][4622] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114 Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.096 [INFO][4622] ipam/ipam.go 
1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" host="localhost" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.102 [INFO][4622] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" host="localhost" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.102 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" host="localhost" Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.102 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:30:15.134156 containerd[1572]: 2025-09-05 00:30:15.103 [INFO][4622] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" HandleID="k8s-pod-network.fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" Workload="localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0" Sep 5 00:30:15.135758 containerd[1572]: 2025-09-05 00:30:15.106 [INFO][4588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-xvvm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0", GenerateName:"calico-apiserver-854b5fb597-", Namespace:"calico-apiserver", SelfLink:"", UID:"510659b3-8cca-445b-950f-ac0a5af64459", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, 
time.September, 5, 0, 29, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854b5fb597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-854b5fb597-xvvm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d2a7952377", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:15.135758 containerd[1572]: 2025-09-05 00:30:15.107 [INFO][4588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-xvvm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0" Sep 5 00:30:15.135758 containerd[1572]: 2025-09-05 00:30:15.107 [INFO][4588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d2a7952377 ContainerID="fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-xvvm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0" Sep 5 00:30:15.135758 containerd[1572]: 2025-09-05 00:30:15.116 [INFO][4588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-xvvm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0" Sep 5 00:30:15.135758 containerd[1572]: 2025-09-05 00:30:15.119 [INFO][4588] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-xvvm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0", GenerateName:"calico-apiserver-854b5fb597-", Namespace:"calico-apiserver", SelfLink:"", UID:"510659b3-8cca-445b-950f-ac0a5af64459", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854b5fb597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114", Pod:"calico-apiserver-854b5fb597-xvvm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d2a7952377", MAC:"0e:18:2f:b6:1d:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:15.135758 containerd[1572]: 2025-09-05 00:30:15.131 [INFO][4588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" Namespace="calico-apiserver" Pod="calico-apiserver-854b5fb597-xvvm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--854b5fb597--xvvm4-eth0" Sep 5 00:30:15.148378 containerd[1572]: time="2025-09-05T00:30:15.148313241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:15.149203 containerd[1572]: time="2025-09-05T00:30:15.149121668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 5 00:30:15.151688 containerd[1572]: time="2025-09-05T00:30:15.151660152Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:15.155336 containerd[1572]: time="2025-09-05T00:30:15.155268503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:15.156663 containerd[1572]: time="2025-09-05T00:30:15.156632853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.049538554s" Sep 5 00:30:15.156788 
containerd[1572]: time="2025-09-05T00:30:15.156765191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 5 00:30:15.158949 containerd[1572]: time="2025-09-05T00:30:15.158905568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 00:30:15.165652 containerd[1572]: time="2025-09-05T00:30:15.165577969Z" level=info msg="CreateContainer within sandbox \"3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:30:15.170793 containerd[1572]: time="2025-09-05T00:30:15.169120477Z" level=info msg="connecting to shim fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114" address="unix:///run/containerd/s/bc0a154c03205c38f42941f71b122ebdd56e1cda8c5bf43e4848387e169d45da" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:30:15.211442 systemd[1]: Started cri-containerd-fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114.scope - libcontainer container fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114. 
Sep 5 00:30:15.229783 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:30:15.241399 kubelet[2773]: E0905 00:30:15.241280 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:15.359424 containerd[1572]: time="2025-09-05T00:30:15.359369934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854b5fb597-xvvm4,Uid:510659b3-8cca-445b-950f-ac0a5af64459,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114\"" Sep 5 00:30:15.369822 containerd[1572]: time="2025-09-05T00:30:15.369735385Z" level=info msg="Container 849f30f84ea6521f1291e17282897daa157604194901e87f22bc7b89947e5e5a: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:30:15.377792 systemd-networkd[1496]: calie18219c6730: Link UP Sep 5 00:30:15.378626 systemd-networkd[1496]: calie18219c6730: Gained carrier Sep 5 00:30:15.386655 containerd[1572]: time="2025-09-05T00:30:15.386599933Z" level=info msg="CreateContainer within sandbox \"3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"849f30f84ea6521f1291e17282897daa157604194901e87f22bc7b89947e5e5a\"" Sep 5 00:30:15.387774 containerd[1572]: time="2025-09-05T00:30:15.387722379Z" level=info msg="StartContainer for \"849f30f84ea6521f1291e17282897daa157604194901e87f22bc7b89947e5e5a\"" Sep 5 00:30:15.390076 containerd[1572]: time="2025-09-05T00:30:15.389736670Z" level=info msg="connecting to shim 849f30f84ea6521f1291e17282897daa157604194901e87f22bc7b89947e5e5a" address="unix:///run/containerd/s/8c19c7050eec7a24bbca1de1c7f7a5cc6de4881819fc9b587642570317154125" protocol=ttrpc version=3 Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.031 [INFO][4594] cni-plugin/plugin.go 
340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--nnvn4-eth0 goldmane-54d579b49d- calico-system 803aa214-3ed6-41f3-80bd-4a8ea923c160 884 0 2025-09-05 00:29:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-nnvn4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie18219c6730 [] [] }} ContainerID="4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" Namespace="calico-system" Pod="goldmane-54d579b49d-nnvn4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--nnvn4-" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.032 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" Namespace="calico-system" Pod="goldmane-54d579b49d-nnvn4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--nnvn4-eth0" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.084 [INFO][4630] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" HandleID="k8s-pod-network.4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" Workload="localhost-k8s-goldmane--54d579b49d--nnvn4-eth0" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.084 [INFO][4630] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" HandleID="k8s-pod-network.4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" Workload="localhost-k8s-goldmane--54d579b49d--nnvn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001303b0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"goldmane-54d579b49d-nnvn4", "timestamp":"2025-09-05 00:30:15.084408158 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.084 [INFO][4630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.102 [INFO][4630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.103 [INFO][4630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.167 [INFO][4630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" host="localhost" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.177 [INFO][4630] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.186 [INFO][4630] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.189 [INFO][4630] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.192 [INFO][4630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.192 [INFO][4630] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" host="localhost" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.194 [INFO][4630] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.354 [INFO][4630] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" host="localhost" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.367 [INFO][4630] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" host="localhost" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.367 [INFO][4630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" host="localhost" Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.367 [INFO][4630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:30:15.395684 containerd[1572]: 2025-09-05 00:30:15.367 [INFO][4630] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" HandleID="k8s-pod-network.4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" Workload="localhost-k8s-goldmane--54d579b49d--nnvn4-eth0" Sep 5 00:30:15.396923 containerd[1572]: 2025-09-05 00:30:15.372 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" Namespace="calico-system" Pod="goldmane-54d579b49d-nnvn4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--nnvn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--nnvn4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"803aa214-3ed6-41f3-80bd-4a8ea923c160", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-nnvn4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie18219c6730", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:15.396923 containerd[1572]: 2025-09-05 00:30:15.372 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" Namespace="calico-system" Pod="goldmane-54d579b49d-nnvn4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--nnvn4-eth0" Sep 5 00:30:15.396923 containerd[1572]: 2025-09-05 00:30:15.372 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie18219c6730 ContainerID="4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" Namespace="calico-system" Pod="goldmane-54d579b49d-nnvn4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--nnvn4-eth0" Sep 5 00:30:15.396923 containerd[1572]: 2025-09-05 00:30:15.379 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" Namespace="calico-system" Pod="goldmane-54d579b49d-nnvn4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--nnvn4-eth0" Sep 5 00:30:15.396923 containerd[1572]: 2025-09-05 00:30:15.379 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" Namespace="calico-system" Pod="goldmane-54d579b49d-nnvn4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--nnvn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--nnvn4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"803aa214-3ed6-41f3-80bd-4a8ea923c160", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 48, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac", Pod:"goldmane-54d579b49d-nnvn4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie18219c6730", MAC:"7e:7e:98:21:85:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:15.396923 containerd[1572]: 2025-09-05 00:30:15.391 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" Namespace="calico-system" Pod="goldmane-54d579b49d-nnvn4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--nnvn4-eth0" Sep 5 00:30:15.423295 systemd[1]: Started cri-containerd-849f30f84ea6521f1291e17282897daa157604194901e87f22bc7b89947e5e5a.scope - libcontainer container 849f30f84ea6521f1291e17282897daa157604194901e87f22bc7b89947e5e5a. 
Sep 5 00:30:15.431821 containerd[1572]: time="2025-09-05T00:30:15.431193716Z" level=info msg="connecting to shim 4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac" address="unix:///run/containerd/s/d8a08381d94f48f5280474508b570c72f17faf454b0354c1d29d716279967ec2" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:30:15.464217 systemd[1]: Started cri-containerd-4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac.scope - libcontainer container 4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac. Sep 5 00:30:15.480395 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:30:15.487476 containerd[1572]: time="2025-09-05T00:30:15.487416112Z" level=info msg="StartContainer for \"849f30f84ea6521f1291e17282897daa157604194901e87f22bc7b89947e5e5a\" returns successfully" Sep 5 00:30:15.517294 containerd[1572]: time="2025-09-05T00:30:15.517252422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nnvn4,Uid:803aa214-3ed6-41f3-80bd-4a8ea923c160,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac\"" Sep 5 00:30:15.701333 systemd-networkd[1496]: cali042cdd5889e: Gained IPv6LL Sep 5 00:30:15.917440 containerd[1572]: time="2025-09-05T00:30:15.917357568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5g8f,Uid:1e677ca4-ae66-4c9c-848d-15fc5e784c81,Namespace:calico-system,Attempt:0,}" Sep 5 00:30:16.149315 systemd-networkd[1496]: cali44ea479383c: Gained IPv6LL Sep 5 00:30:16.272071 systemd-networkd[1496]: cali6fc0daca26f: Link UP Sep 5 00:30:16.272856 systemd-networkd[1496]: cali6fc0daca26f: Gained carrier Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.192 [INFO][4786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--g5g8f-eth0 csi-node-driver- 
calico-system 1e677ca4-ae66-4c9c-848d-15fc5e784c81 771 0 2025-09-05 00:29:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-g5g8f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6fc0daca26f [] [] }} ContainerID="f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" Namespace="calico-system" Pod="csi-node-driver-g5g8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--g5g8f-" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.192 [INFO][4786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" Namespace="calico-system" Pod="csi-node-driver-g5g8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--g5g8f-eth0" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.221 [INFO][4801] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" HandleID="k8s-pod-network.f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" Workload="localhost-k8s-csi--node--driver--g5g8f-eth0" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.221 [INFO][4801] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" HandleID="k8s-pod-network.f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" Workload="localhost-k8s-csi--node--driver--g5g8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7280), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-g5g8f", "timestamp":"2025-09-05 00:30:16.221778729 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.222 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.222 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.222 [INFO][4801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.229 [INFO][4801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" host="localhost" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.233 [INFO][4801] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.238 [INFO][4801] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.240 [INFO][4801] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.244 [INFO][4801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.244 [INFO][4801] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" host="localhost" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.247 [INFO][4801] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5 Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.252 [INFO][4801] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" host="localhost" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.261 [INFO][4801] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" host="localhost" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.261 [INFO][4801] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" host="localhost" Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.262 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:30:16.292148 containerd[1572]: 2025-09-05 00:30:16.262 [INFO][4801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" HandleID="k8s-pod-network.f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" Workload="localhost-k8s-csi--node--driver--g5g8f-eth0" Sep 5 00:30:16.293299 containerd[1572]: 2025-09-05 00:30:16.265 [INFO][4786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" Namespace="calico-system" Pod="csi-node-driver-g5g8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--g5g8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g5g8f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e677ca4-ae66-4c9c-848d-15fc5e784c81", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-g5g8f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6fc0daca26f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:16.293299 containerd[1572]: 2025-09-05 00:30:16.266 [INFO][4786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" Namespace="calico-system" Pod="csi-node-driver-g5g8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--g5g8f-eth0" Sep 5 00:30:16.293299 containerd[1572]: 2025-09-05 00:30:16.266 [INFO][4786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fc0daca26f ContainerID="f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" Namespace="calico-system" Pod="csi-node-driver-g5g8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--g5g8f-eth0" Sep 5 00:30:16.293299 containerd[1572]: 2025-09-05 00:30:16.273 [INFO][4786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" Namespace="calico-system" Pod="csi-node-driver-g5g8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--g5g8f-eth0" Sep 5 00:30:16.293299 containerd[1572]: 2025-09-05 00:30:16.274 [INFO][4786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" Namespace="calico-system" Pod="csi-node-driver-g5g8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--g5g8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g5g8f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e677ca4-ae66-4c9c-848d-15fc5e784c81", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 49, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5", Pod:"csi-node-driver-g5g8f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6fc0daca26f", MAC:"62:3f:b3:d9:e9:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:16.293299 containerd[1572]: 2025-09-05 00:30:16.287 [INFO][4786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" Namespace="calico-system" Pod="csi-node-driver-g5g8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--g5g8f-eth0" Sep 5 00:30:16.323607 containerd[1572]: time="2025-09-05T00:30:16.323542828Z" level=info msg="connecting to shim f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5" address="unix:///run/containerd/s/d36effee32f1b7578ae6f0fad343c46475f9e7e8ca7a62121883d693dd8598ac" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:30:16.364353 systemd[1]: Started cri-containerd-f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5.scope - libcontainer container 
f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5. Sep 5 00:30:16.379102 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:30:16.393671 containerd[1572]: time="2025-09-05T00:30:16.393580714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5g8f,Uid:1e677ca4-ae66-4c9c-848d-15fc5e784c81,Namespace:calico-system,Attempt:0,} returns sandbox id \"f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5\"" Sep 5 00:30:16.469267 systemd-networkd[1496]: cali7d2a7952377: Gained IPv6LL Sep 5 00:30:16.981505 systemd-networkd[1496]: calie18219c6730: Gained IPv6LL Sep 5 00:30:17.431253 systemd-networkd[1496]: cali6fc0daca26f: Gained IPv6LL Sep 5 00:30:18.446525 containerd[1572]: time="2025-09-05T00:30:18.446446133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:18.447446 containerd[1572]: time="2025-09-05T00:30:18.447281360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 5 00:30:18.449812 containerd[1572]: time="2025-09-05T00:30:18.448646181Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:18.451044 containerd[1572]: time="2025-09-05T00:30:18.450977746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:18.451842 containerd[1572]: time="2025-09-05T00:30:18.451777326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo 
tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.292695407s" Sep 5 00:30:18.451842 containerd[1572]: time="2025-09-05T00:30:18.451828202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 5 00:30:18.454239 containerd[1572]: time="2025-09-05T00:30:18.454107939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:30:18.469906 containerd[1572]: time="2025-09-05T00:30:18.469858663Z" level=info msg="CreateContainer within sandbox \"067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 00:30:18.479919 containerd[1572]: time="2025-09-05T00:30:18.479882201Z" level=info msg="Container 43c03a541c9bb9ebe72678e52b83cf15ff16775b0961fdaa4bc1bfbd58c548f9: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:30:18.489439 containerd[1572]: time="2025-09-05T00:30:18.489402143Z" level=info msg="CreateContainer within sandbox \"067310bf747a2249ab2150faf1a95a5006fb4e6ab5d23c0cdf904aa81814f551\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"43c03a541c9bb9ebe72678e52b83cf15ff16775b0961fdaa4bc1bfbd58c548f9\"" Sep 5 00:30:18.490063 containerd[1572]: time="2025-09-05T00:30:18.490020774Z" level=info msg="StartContainer for \"43c03a541c9bb9ebe72678e52b83cf15ff16775b0961fdaa4bc1bfbd58c548f9\"" Sep 5 00:30:18.491379 containerd[1572]: time="2025-09-05T00:30:18.491325041Z" level=info msg="connecting to shim 43c03a541c9bb9ebe72678e52b83cf15ff16775b0961fdaa4bc1bfbd58c548f9" address="unix:///run/containerd/s/1d76ab91832df556aa86d43e6e691038cf83aca6bfc606a69b7f42953efc1536" protocol=ttrpc version=3 Sep 5 00:30:18.555362 systemd[1]: Started 
cri-containerd-43c03a541c9bb9ebe72678e52b83cf15ff16775b0961fdaa4bc1bfbd58c548f9.scope - libcontainer container 43c03a541c9bb9ebe72678e52b83cf15ff16775b0961fdaa4bc1bfbd58c548f9. Sep 5 00:30:18.611255 containerd[1572]: time="2025-09-05T00:30:18.611202885Z" level=info msg="StartContainer for \"43c03a541c9bb9ebe72678e52b83cf15ff16775b0961fdaa4bc1bfbd58c548f9\" returns successfully" Sep 5 00:30:19.275416 kubelet[2773]: I0905 00:30:19.274997 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-548d8ddcf7-rxxwx" podStartSLOduration=25.994084039 podStartE2EDuration="30.274974751s" podCreationTimestamp="2025-09-05 00:29:49 +0000 UTC" firstStartedPulling="2025-09-05 00:30:14.172116442 +0000 UTC m=+42.359204664" lastFinishedPulling="2025-09-05 00:30:18.453007084 +0000 UTC m=+46.640095376" observedRunningTime="2025-09-05 00:30:19.273695751 +0000 UTC m=+47.460783963" watchObservedRunningTime="2025-09-05 00:30:19.274974751 +0000 UTC m=+47.462062963" Sep 5 00:30:19.311020 containerd[1572]: time="2025-09-05T00:30:19.310960401Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43c03a541c9bb9ebe72678e52b83cf15ff16775b0961fdaa4bc1bfbd58c548f9\" id:\"e04b1f711054f187b2f14f23cf3e97cd771516bb30bb631125ef2109e49fcacd\" pid:4933 exited_at:{seconds:1757032219 nanos:310602029}" Sep 5 00:30:21.923403 containerd[1572]: time="2025-09-05T00:30:21.923332135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:21.924287 containerd[1572]: time="2025-09-05T00:30:21.924237083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 5 00:30:21.925481 containerd[1572]: time="2025-09-05T00:30:21.925439890Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:21.928269 containerd[1572]: time="2025-09-05T00:30:21.928083836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:21.928884 containerd[1572]: time="2025-09-05T00:30:21.928825176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.474591871s" Sep 5 00:30:21.928884 containerd[1572]: time="2025-09-05T00:30:21.928865344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:30:21.930467 containerd[1572]: time="2025-09-05T00:30:21.930090193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:30:21.934812 containerd[1572]: time="2025-09-05T00:30:21.934770004Z" level=info msg="CreateContainer within sandbox \"927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:30:21.945170 containerd[1572]: time="2025-09-05T00:30:21.945120531Z" level=info msg="Container 5029d98934a50983feac928239a9a1babc93e38e99ad8d31e9d57d4b7a26dab8: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:30:21.961548 containerd[1572]: time="2025-09-05T00:30:21.961478537Z" level=info msg="CreateContainer within sandbox \"927dcc115576d25b6f0c204dad8ea588d059631ade1eb3017d3d3ee98cb0a1c9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"5029d98934a50983feac928239a9a1babc93e38e99ad8d31e9d57d4b7a26dab8\"" Sep 5 00:30:21.962194 containerd[1572]: time="2025-09-05T00:30:21.962154330Z" level=info msg="StartContainer for \"5029d98934a50983feac928239a9a1babc93e38e99ad8d31e9d57d4b7a26dab8\"" Sep 5 00:30:21.963502 containerd[1572]: time="2025-09-05T00:30:21.963474043Z" level=info msg="connecting to shim 5029d98934a50983feac928239a9a1babc93e38e99ad8d31e9d57d4b7a26dab8" address="unix:///run/containerd/s/38acc860f02e8fba3b8cbd0b0675fadd8580c581f18e97909e83872798b7f4d2" protocol=ttrpc version=3 Sep 5 00:30:21.995251 systemd[1]: Started cri-containerd-5029d98934a50983feac928239a9a1babc93e38e99ad8d31e9d57d4b7a26dab8.scope - libcontainer container 5029d98934a50983feac928239a9a1babc93e38e99ad8d31e9d57d4b7a26dab8. Sep 5 00:30:22.045937 containerd[1572]: time="2025-09-05T00:30:22.045888023Z" level=info msg="StartContainer for \"5029d98934a50983feac928239a9a1babc93e38e99ad8d31e9d57d4b7a26dab8\" returns successfully" Sep 5 00:30:22.288209 kubelet[2773]: I0905 00:30:22.287801 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-854b5fb597-sss76" podStartSLOduration=28.639325916 podStartE2EDuration="36.287780599s" podCreationTimestamp="2025-09-05 00:29:46 +0000 UTC" firstStartedPulling="2025-09-05 00:30:14.281478285 +0000 UTC m=+42.468566507" lastFinishedPulling="2025-09-05 00:30:21.929932968 +0000 UTC m=+50.117021190" observedRunningTime="2025-09-05 00:30:22.287520655 +0000 UTC m=+50.474608897" watchObservedRunningTime="2025-09-05 00:30:22.287780599 +0000 UTC m=+50.474868831" Sep 5 00:30:22.306109 containerd[1572]: time="2025-09-05T00:30:22.306052189Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:22.307182 containerd[1572]: time="2025-09-05T00:30:22.307150050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active 
requests=0, bytes read=77" Sep 5 00:30:22.309260 containerd[1572]: time="2025-09-05T00:30:22.309207703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 373.845899ms" Sep 5 00:30:22.309260 containerd[1572]: time="2025-09-05T00:30:22.309238383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:30:22.312234 containerd[1572]: time="2025-09-05T00:30:22.312197886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 00:30:22.317274 containerd[1572]: time="2025-09-05T00:30:22.316792913Z" level=info msg="CreateContainer within sandbox \"fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:30:22.330303 containerd[1572]: time="2025-09-05T00:30:22.330254267Z" level=info msg="Container d8381897d26ab677bdbdb2bae8b574bbdacb9cfd4f47e9aa14f463320a015060: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:30:22.342197 containerd[1572]: time="2025-09-05T00:30:22.342153300Z" level=info msg="CreateContainer within sandbox \"fad812f305cc81c805e68258f52cdaac86f3c27842e7354d42b06add6db17114\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d8381897d26ab677bdbdb2bae8b574bbdacb9cfd4f47e9aa14f463320a015060\"" Sep 5 00:30:22.342795 containerd[1572]: time="2025-09-05T00:30:22.342761480Z" level=info msg="StartContainer for \"d8381897d26ab677bdbdb2bae8b574bbdacb9cfd4f47e9aa14f463320a015060\"" Sep 5 00:30:22.344377 containerd[1572]: time="2025-09-05T00:30:22.344311949Z" level=info msg="connecting to 
shim d8381897d26ab677bdbdb2bae8b574bbdacb9cfd4f47e9aa14f463320a015060" address="unix:///run/containerd/s/bc0a154c03205c38f42941f71b122ebdd56e1cda8c5bf43e4848387e169d45da" protocol=ttrpc version=3 Sep 5 00:30:22.377300 systemd[1]: Started cri-containerd-d8381897d26ab677bdbdb2bae8b574bbdacb9cfd4f47e9aa14f463320a015060.scope - libcontainer container d8381897d26ab677bdbdb2bae8b574bbdacb9cfd4f47e9aa14f463320a015060. Sep 5 00:30:22.621470 containerd[1572]: time="2025-09-05T00:30:22.621404749Z" level=info msg="StartContainer for \"d8381897d26ab677bdbdb2bae8b574bbdacb9cfd4f47e9aa14f463320a015060\" returns successfully" Sep 5 00:30:23.354453 kubelet[2773]: I0905 00:30:23.354379 2773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:30:23.916655 kubelet[2773]: E0905 00:30:23.916607 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:23.917174 containerd[1572]: time="2025-09-05T00:30:23.917126989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q96lv,Uid:5b699e17-324d-48df-a118-37822782a2a2,Namespace:kube-system,Attempt:0,}" Sep 5 00:30:24.006484 systemd[1]: Started sshd@8-10.0.0.38:22-10.0.0.1:53718.service - OpenSSH per-connection server daemon (10.0.0.1:53718). Sep 5 00:30:24.280732 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 53718 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:30:24.283021 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:30:24.300861 systemd-logind[1540]: New session 8 of user core. Sep 5 00:30:24.314284 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 5 00:30:24.591738 sshd[5065]: Connection closed by 10.0.0.1 port 53718 Sep 5 00:30:24.592705 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Sep 5 00:30:24.599142 systemd[1]: sshd@8-10.0.0.38:22-10.0.0.1:53718.service: Deactivated successfully. Sep 5 00:30:24.603235 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:30:24.606690 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:30:24.608542 systemd-logind[1540]: Removed session 8. Sep 5 00:30:24.635411 systemd-networkd[1496]: calif501e4f66e7: Link UP Sep 5 00:30:24.637283 systemd-networkd[1496]: calif501e4f66e7: Gained carrier Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:23.993 [INFO][5039] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--q96lv-eth0 coredns-674b8bbfcf- kube-system 5b699e17-324d-48df-a118-37822782a2a2 885 0 2025-09-05 00:29:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-q96lv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif501e4f66e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-q96lv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q96lv-" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:23.994 [INFO][5039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-q96lv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q96lv-eth0" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.052 [INFO][5055] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" HandleID="k8s-pod-network.d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" Workload="localhost-k8s-coredns--674b8bbfcf--q96lv-eth0" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.052 [INFO][5055] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" HandleID="k8s-pod-network.d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" Workload="localhost-k8s-coredns--674b8bbfcf--q96lv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001395f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-q96lv", "timestamp":"2025-09-05 00:30:24.052305323 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.052 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.052 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.052 [INFO][5055] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.488 [INFO][5055] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" host="localhost" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.548 [INFO][5055] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.594 [INFO][5055] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.597 [INFO][5055] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.601 [INFO][5055] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.601 [INFO][5055] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" host="localhost" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.606 [INFO][5055] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6 Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.612 [INFO][5055] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" host="localhost" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.621 [INFO][5055] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" host="localhost" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.621 [INFO][5055] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" host="localhost" Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.621 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:30:24.823699 containerd[1572]: 2025-09-05 00:30:24.622 [INFO][5055] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" HandleID="k8s-pod-network.d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" Workload="localhost-k8s-coredns--674b8bbfcf--q96lv-eth0" Sep 5 00:30:24.824705 containerd[1572]: 2025-09-05 00:30:24.626 [INFO][5039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-q96lv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q96lv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--q96lv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5b699e17-324d-48df-a118-37822782a2a2", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-q96lv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif501e4f66e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:24.824705 containerd[1572]: 2025-09-05 00:30:24.628 [INFO][5039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-q96lv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q96lv-eth0" Sep 5 00:30:24.824705 containerd[1572]: 2025-09-05 00:30:24.628 [INFO][5039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif501e4f66e7 ContainerID="d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-q96lv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q96lv-eth0" Sep 5 00:30:24.824705 containerd[1572]: 2025-09-05 00:30:24.636 [INFO][5039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-q96lv" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q96lv-eth0" Sep 5 00:30:24.824705 containerd[1572]: 2025-09-05 00:30:24.637 [INFO][5039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-q96lv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q96lv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--q96lv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5b699e17-324d-48df-a118-37822782a2a2", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6", Pod:"coredns-674b8bbfcf-q96lv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif501e4f66e7", MAC:"16:51:fd:f0:af:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:30:24.824705 containerd[1572]: 2025-09-05 00:30:24.818 [INFO][5039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-q96lv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q96lv-eth0" Sep 5 00:30:24.833065 kubelet[2773]: I0905 00:30:24.832304 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-854b5fb597-xvvm4" podStartSLOduration=31.882756792 podStartE2EDuration="38.832277426s" podCreationTimestamp="2025-09-05 00:29:46 +0000 UTC" firstStartedPulling="2025-09-05 00:30:15.360462835 +0000 UTC m=+43.547551057" lastFinishedPulling="2025-09-05 00:30:22.309983469 +0000 UTC m=+50.497071691" observedRunningTime="2025-09-05 00:30:23.494715335 +0000 UTC m=+51.681803567" watchObservedRunningTime="2025-09-05 00:30:24.832277426 +0000 UTC m=+53.019365648" Sep 5 00:30:24.880962 containerd[1572]: time="2025-09-05T00:30:24.880877359Z" level=info msg="connecting to shim d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6" address="unix:///run/containerd/s/8a2d71905c8a4511212d187aebb259a8bc8fc335d546ed92d828b4ccdf2854dd" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:30:24.917360 systemd[1]: Started cri-containerd-d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6.scope - libcontainer container d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6. 
Sep 5 00:30:24.938445 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:30:24.986863 containerd[1572]: time="2025-09-05T00:30:24.986788212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q96lv,Uid:5b699e17-324d-48df-a118-37822782a2a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6\"" Sep 5 00:30:24.988390 kubelet[2773]: E0905 00:30:24.988335 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:24.993461 containerd[1572]: time="2025-09-05T00:30:24.993400120Z" level=info msg="CreateContainer within sandbox \"d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:30:25.007424 containerd[1572]: time="2025-09-05T00:30:25.007361114Z" level=info msg="Container 066f683e6faa1b61aac7372666cfd1723f8ac272e25250d7c7094259cadbaad5: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:30:25.012173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3442376278.mount: Deactivated successfully. 
Sep 5 00:30:25.020820 containerd[1572]: time="2025-09-05T00:30:25.020765627Z" level=info msg="CreateContainer within sandbox \"d5f015e79f392d7d18da5dd6c6be41f479ce2743883e152f365237ec0763f1d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"066f683e6faa1b61aac7372666cfd1723f8ac272e25250d7c7094259cadbaad5\"" Sep 5 00:30:25.021983 containerd[1572]: time="2025-09-05T00:30:25.021957303Z" level=info msg="StartContainer for \"066f683e6faa1b61aac7372666cfd1723f8ac272e25250d7c7094259cadbaad5\"" Sep 5 00:30:25.025058 containerd[1572]: time="2025-09-05T00:30:25.024140639Z" level=info msg="connecting to shim 066f683e6faa1b61aac7372666cfd1723f8ac272e25250d7c7094259cadbaad5" address="unix:///run/containerd/s/8a2d71905c8a4511212d187aebb259a8bc8fc335d546ed92d828b4ccdf2854dd" protocol=ttrpc version=3 Sep 5 00:30:25.048549 systemd[1]: Started cri-containerd-066f683e6faa1b61aac7372666cfd1723f8ac272e25250d7c7094259cadbaad5.scope - libcontainer container 066f683e6faa1b61aac7372666cfd1723f8ac272e25250d7c7094259cadbaad5. 
Sep 5 00:30:25.098474 containerd[1572]: time="2025-09-05T00:30:25.098409321Z" level=info msg="StartContainer for \"066f683e6faa1b61aac7372666cfd1723f8ac272e25250d7c7094259cadbaad5\" returns successfully" Sep 5 00:30:25.282057 kubelet[2773]: E0905 00:30:25.281897 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:25.417147 kubelet[2773]: I0905 00:30:25.416961 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q96lv" podStartSLOduration=47.416941506 podStartE2EDuration="47.416941506s" podCreationTimestamp="2025-09-05 00:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:30:25.416419556 +0000 UTC m=+53.603507778" watchObservedRunningTime="2025-09-05 00:30:25.416941506 +0000 UTC m=+53.604029738" Sep 5 00:30:26.291935 kubelet[2773]: E0905 00:30:26.291642 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:26.331293 systemd-networkd[1496]: calif501e4f66e7: Gained IPv6LL Sep 5 00:30:27.255537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3584578202.mount: Deactivated successfully. 
Sep 5 00:30:27.293548 kubelet[2773]: E0905 00:30:27.293489 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:27.402645 containerd[1572]: time="2025-09-05T00:30:27.402572597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:27.403758 containerd[1572]: time="2025-09-05T00:30:27.403689585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 5 00:30:27.405114 containerd[1572]: time="2025-09-05T00:30:27.405047689Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:27.407955 containerd[1572]: time="2025-09-05T00:30:27.407893808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:27.408766 containerd[1572]: time="2025-09-05T00:30:27.408727910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.09649668s" Sep 5 00:30:27.408766 containerd[1572]: time="2025-09-05T00:30:27.408765633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 5 00:30:27.410407 containerd[1572]: 
time="2025-09-05T00:30:27.410355795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 00:30:27.415271 containerd[1572]: time="2025-09-05T00:30:27.415205636Z" level=info msg="CreateContainer within sandbox \"3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 00:30:27.424684 containerd[1572]: time="2025-09-05T00:30:27.424647149Z" level=info msg="Container 6625982a165a8858564189090b98b81af86c65dab20bed80c7391365d806bf87: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:30:27.436195 containerd[1572]: time="2025-09-05T00:30:27.436148452Z" level=info msg="CreateContainer within sandbox \"3a7e437b22de85e13418e4629f0f813f16cf8e7ccc4f1faa431952b348680db3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6625982a165a8858564189090b98b81af86c65dab20bed80c7391365d806bf87\"" Sep 5 00:30:27.437119 containerd[1572]: time="2025-09-05T00:30:27.437084430Z" level=info msg="StartContainer for \"6625982a165a8858564189090b98b81af86c65dab20bed80c7391365d806bf87\"" Sep 5 00:30:27.438472 containerd[1572]: time="2025-09-05T00:30:27.438425772Z" level=info msg="connecting to shim 6625982a165a8858564189090b98b81af86c65dab20bed80c7391365d806bf87" address="unix:///run/containerd/s/8c19c7050eec7a24bbca1de1c7f7a5cc6de4881819fc9b587642570317154125" protocol=ttrpc version=3 Sep 5 00:30:27.479437 systemd[1]: Started cri-containerd-6625982a165a8858564189090b98b81af86c65dab20bed80c7391365d806bf87.scope - libcontainer container 6625982a165a8858564189090b98b81af86c65dab20bed80c7391365d806bf87. 
Sep 5 00:30:27.544369 containerd[1572]: time="2025-09-05T00:30:27.544218874Z" level=info msg="StartContainer for \"6625982a165a8858564189090b98b81af86c65dab20bed80c7391365d806bf87\" returns successfully" Sep 5 00:30:28.298005 kubelet[2773]: E0905 00:30:28.297938 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:30:29.617337 systemd[1]: Started sshd@9-10.0.0.38:22-10.0.0.1:53720.service - OpenSSH per-connection server daemon (10.0.0.1:53720). Sep 5 00:30:29.720157 sshd[5232]: Accepted publickey for core from 10.0.0.1 port 53720 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:30:29.722960 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:30:29.729425 systemd-logind[1540]: New session 9 of user core. Sep 5 00:30:29.740338 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:30:29.911455 sshd[5236]: Connection closed by 10.0.0.1 port 53720 Sep 5 00:30:29.912258 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Sep 5 00:30:29.917642 systemd[1]: sshd@9-10.0.0.38:22-10.0.0.1:53720.service: Deactivated successfully. Sep 5 00:30:29.920774 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:30:29.922617 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:30:29.924865 systemd-logind[1540]: Removed session 9. Sep 5 00:30:30.152788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2503443388.mount: Deactivated successfully. 
Sep 5 00:30:31.613346 containerd[1572]: time="2025-09-05T00:30:31.613264598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:31.615074 containerd[1572]: time="2025-09-05T00:30:31.614450502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 5 00:30:31.616404 containerd[1572]: time="2025-09-05T00:30:31.616347987Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:31.619942 containerd[1572]: time="2025-09-05T00:30:31.619869000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:31.621341 containerd[1572]: time="2025-09-05T00:30:31.620715350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.210312695s" Sep 5 00:30:31.621341 containerd[1572]: time="2025-09-05T00:30:31.620757932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 5 00:30:31.622659 containerd[1572]: time="2025-09-05T00:30:31.622615230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 00:30:31.631593 containerd[1572]: time="2025-09-05T00:30:31.631540223Z" level=info msg="CreateContainer within sandbox \"4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac\" for 
container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 00:30:31.644080 containerd[1572]: time="2025-09-05T00:30:31.643436649Z" level=info msg="Container 49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:30:31.655074 containerd[1572]: time="2025-09-05T00:30:31.654995124Z" level=info msg="CreateContainer within sandbox \"4ca3f17d8ea8983296b6bcf8ee2d00f3054768fd9ca01a110320fe1dc08f31ac\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3\"" Sep 5 00:30:31.655673 containerd[1572]: time="2025-09-05T00:30:31.655628424Z" level=info msg="StartContainer for \"49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3\"" Sep 5 00:30:31.657053 containerd[1572]: time="2025-09-05T00:30:31.656847993Z" level=info msg="connecting to shim 49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3" address="unix:///run/containerd/s/d8a08381d94f48f5280474508b570c72f17faf454b0354c1d29d716279967ec2" protocol=ttrpc version=3 Sep 5 00:30:31.702362 systemd[1]: Started cri-containerd-49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3.scope - libcontainer container 49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3. 
Sep 5 00:30:31.767263 containerd[1572]: time="2025-09-05T00:30:31.767202619Z" level=info msg="StartContainer for \"49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3\" returns successfully" Sep 5 00:30:32.337429 kubelet[2773]: I0905 00:30:32.337327 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-nnvn4" podStartSLOduration=28.234662451 podStartE2EDuration="44.337298099s" podCreationTimestamp="2025-09-05 00:29:48 +0000 UTC" firstStartedPulling="2025-09-05 00:30:15.519115167 +0000 UTC m=+43.706203389" lastFinishedPulling="2025-09-05 00:30:31.621750785 +0000 UTC m=+59.808839037" observedRunningTime="2025-09-05 00:30:32.337270987 +0000 UTC m=+60.524359219" watchObservedRunningTime="2025-09-05 00:30:32.337298099 +0000 UTC m=+60.524386331" Sep 5 00:30:32.338596 kubelet[2773]: I0905 00:30:32.337539 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7d9959749-mvpcl" podStartSLOduration=6.034098202 podStartE2EDuration="20.337531318s" podCreationTimestamp="2025-09-05 00:30:12 +0000 UTC" firstStartedPulling="2025-09-05 00:30:13.106693466 +0000 UTC m=+41.293781688" lastFinishedPulling="2025-09-05 00:30:27.410126582 +0000 UTC m=+55.597214804" observedRunningTime="2025-09-05 00:30:28.310318339 +0000 UTC m=+56.497406571" watchObservedRunningTime="2025-09-05 00:30:32.337531318 +0000 UTC m=+60.524619541" Sep 5 00:30:32.430830 containerd[1572]: time="2025-09-05T00:30:32.430733758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3\" id:\"00cd964b5ab10b6bd6c77b93d65fa8dc20aea61acbc004440d386a01d9d0937f\" pid:5318 exit_status:1 exited_at:{seconds:1757032232 nanos:430085690}" Sep 5 00:30:33.403642 containerd[1572]: time="2025-09-05T00:30:33.403566266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 5 00:30:33.406346 containerd[1572]: time="2025-09-05T00:30:33.406285548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 5 00:30:33.408179 containerd[1572]: time="2025-09-05T00:30:33.408144614Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:33.411207 containerd[1572]: time="2025-09-05T00:30:33.411143633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:33.412360 containerd[1572]: time="2025-09-05T00:30:33.412320276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.789667333s" Sep 5 00:30:33.412484 containerd[1572]: time="2025-09-05T00:30:33.412465666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 5 00:30:33.420750 containerd[1572]: time="2025-09-05T00:30:33.420697721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3\" id:\"f9d11b2f3f859e85cbdbc0c8a8f9a4e3875c52d5c0be246baaae450cc08906d4\" pid:5347 exit_status:1 exited_at:{seconds:1757032233 nanos:420347988}" Sep 5 00:30:33.422171 containerd[1572]: time="2025-09-05T00:30:33.422141188Z" level=info msg="CreateContainer within sandbox \"f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 00:30:33.433983 containerd[1572]: time="2025-09-05T00:30:33.432856871Z" level=info msg="Container 85f76a04692abf3973939260ff96b0caf417501dc484190c07d460d5ba39c1aa: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:30:33.443573 containerd[1572]: time="2025-09-05T00:30:33.443530543Z" level=info msg="CreateContainer within sandbox \"f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"85f76a04692abf3973939260ff96b0caf417501dc484190c07d460d5ba39c1aa\"" Sep 5 00:30:33.444223 containerd[1572]: time="2025-09-05T00:30:33.444188148Z" level=info msg="StartContainer for \"85f76a04692abf3973939260ff96b0caf417501dc484190c07d460d5ba39c1aa\"" Sep 5 00:30:33.445896 containerd[1572]: time="2025-09-05T00:30:33.445858802Z" level=info msg="connecting to shim 85f76a04692abf3973939260ff96b0caf417501dc484190c07d460d5ba39c1aa" address="unix:///run/containerd/s/d36effee32f1b7578ae6f0fad343c46475f9e7e8ca7a62121883d693dd8598ac" protocol=ttrpc version=3 Sep 5 00:30:33.468285 systemd[1]: Started cri-containerd-85f76a04692abf3973939260ff96b0caf417501dc484190c07d460d5ba39c1aa.scope - libcontainer container 85f76a04692abf3973939260ff96b0caf417501dc484190c07d460d5ba39c1aa. Sep 5 00:30:33.521168 containerd[1572]: time="2025-09-05T00:30:33.521106671Z" level=info msg="StartContainer for \"85f76a04692abf3973939260ff96b0caf417501dc484190c07d460d5ba39c1aa\" returns successfully" Sep 5 00:30:33.523147 containerd[1572]: time="2025-09-05T00:30:33.523093252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 00:30:34.927701 systemd[1]: Started sshd@10-10.0.0.38:22-10.0.0.1:47914.service - OpenSSH per-connection server daemon (10.0.0.1:47914). 
Sep 5 00:30:35.096459 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 47914 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:30:35.098790 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:30:35.103951 systemd-logind[1540]: New session 10 of user core. Sep 5 00:30:35.114176 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:30:35.275548 sshd[5402]: Connection closed by 10.0.0.1 port 47914 Sep 5 00:30:35.275976 sshd-session[5399]: pam_unix(sshd:session): session closed for user core Sep 5 00:30:35.281889 systemd[1]: sshd@10-10.0.0.38:22-10.0.0.1:47914.service: Deactivated successfully. Sep 5 00:30:35.284744 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:30:35.285645 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:30:35.287408 systemd-logind[1540]: Removed session 10. Sep 5 00:30:36.039068 containerd[1572]: time="2025-09-05T00:30:36.038976369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:36.040263 containerd[1572]: time="2025-09-05T00:30:36.040221790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 5 00:30:36.044580 containerd[1572]: time="2025-09-05T00:30:36.044483683Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:36.047016 containerd[1572]: time="2025-09-05T00:30:36.046964486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:30:36.047853 containerd[1572]: 
time="2025-09-05T00:30:36.047801984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.524629499s" Sep 5 00:30:36.047853 containerd[1572]: time="2025-09-05T00:30:36.047838444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 5 00:30:36.053442 containerd[1572]: time="2025-09-05T00:30:36.053396104Z" level=info msg="CreateContainer within sandbox \"f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 00:30:36.064470 containerd[1572]: time="2025-09-05T00:30:36.064349584Z" level=info msg="Container b241dfad9aec7ed864028e7e291025e6f678de475efb4f7605ea92ecabedb55f: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:30:36.075661 containerd[1572]: time="2025-09-05T00:30:36.075601516Z" level=info msg="CreateContainer within sandbox \"f687111b2bbb6f56d9bdbdeea6e4a532713a53e7b638c1a9f6696d8448d551c5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b241dfad9aec7ed864028e7e291025e6f678de475efb4f7605ea92ecabedb55f\"" Sep 5 00:30:36.076296 containerd[1572]: time="2025-09-05T00:30:36.076265562Z" level=info msg="StartContainer for \"b241dfad9aec7ed864028e7e291025e6f678de475efb4f7605ea92ecabedb55f\"" Sep 5 00:30:36.077773 containerd[1572]: time="2025-09-05T00:30:36.077739031Z" level=info msg="connecting to shim b241dfad9aec7ed864028e7e291025e6f678de475efb4f7605ea92ecabedb55f" 
address="unix:///run/containerd/s/d36effee32f1b7578ae6f0fad343c46475f9e7e8ca7a62121883d693dd8598ac" protocol=ttrpc version=3 Sep 5 00:30:36.102297 systemd[1]: Started cri-containerd-b241dfad9aec7ed864028e7e291025e6f678de475efb4f7605ea92ecabedb55f.scope - libcontainer container b241dfad9aec7ed864028e7e291025e6f678de475efb4f7605ea92ecabedb55f. Sep 5 00:30:36.153417 containerd[1572]: time="2025-09-05T00:30:36.153356240Z" level=info msg="StartContainer for \"b241dfad9aec7ed864028e7e291025e6f678de475efb4f7605ea92ecabedb55f\" returns successfully" Sep 5 00:30:36.409241 kubelet[2773]: I0905 00:30:36.408866 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-g5g8f" podStartSLOduration=27.755198341 podStartE2EDuration="47.408842142s" podCreationTimestamp="2025-09-05 00:29:49 +0000 UTC" firstStartedPulling="2025-09-05 00:30:16.394957207 +0000 UTC m=+44.582045429" lastFinishedPulling="2025-09-05 00:30:36.048601008 +0000 UTC m=+64.235689230" observedRunningTime="2025-09-05 00:30:36.407941052 +0000 UTC m=+64.595029304" watchObservedRunningTime="2025-09-05 00:30:36.408842142 +0000 UTC m=+64.595930364" Sep 5 00:30:37.080475 kubelet[2773]: I0905 00:30:37.080415 2773 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 00:30:37.081918 kubelet[2773]: I0905 00:30:37.081874 2773 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 00:30:40.291548 systemd[1]: Started sshd@11-10.0.0.38:22-10.0.0.1:38082.service - OpenSSH per-connection server daemon (10.0.0.1:38082). 
Sep 5 00:30:40.351513 sshd[5457]: Accepted publickey for core from 10.0.0.1 port 38082 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:30:40.354765 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:30:40.360872 systemd-logind[1540]: New session 11 of user core. Sep 5 00:30:40.369417 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:30:40.528535 sshd[5460]: Connection closed by 10.0.0.1 port 38082 Sep 5 00:30:40.529055 sshd-session[5457]: pam_unix(sshd:session): session closed for user core Sep 5 00:30:40.539684 systemd[1]: sshd@11-10.0.0.38:22-10.0.0.1:38082.service: Deactivated successfully. Sep 5 00:30:40.542018 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:30:40.542892 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit. Sep 5 00:30:40.546819 systemd[1]: Started sshd@12-10.0.0.38:22-10.0.0.1:38094.service - OpenSSH per-connection server daemon (10.0.0.1:38094). Sep 5 00:30:40.547872 systemd-logind[1540]: Removed session 11. Sep 5 00:30:40.624910 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 38094 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo Sep 5 00:30:40.627218 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:30:40.633626 systemd-logind[1540]: New session 12 of user core. Sep 5 00:30:40.642479 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:30:40.821265 sshd[5478]: Connection closed by 10.0.0.1 port 38094 Sep 5 00:30:40.822354 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Sep 5 00:30:40.838340 systemd[1]: sshd@12-10.0.0.38:22-10.0.0.1:38094.service: Deactivated successfully. Sep 5 00:30:40.842217 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:30:40.844652 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit. 
Sep 5 00:30:40.850276 systemd[1]: Started sshd@13-10.0.0.38:22-10.0.0.1:38106.service - OpenSSH per-connection server daemon (10.0.0.1:38106).
Sep 5 00:30:40.853533 systemd-logind[1540]: Removed session 12.
Sep 5 00:30:40.906996 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 38106 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:30:40.909313 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:30:40.915568 systemd-logind[1540]: New session 13 of user core.
Sep 5 00:30:40.930252 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 5 00:30:41.064372 sshd[5493]: Connection closed by 10.0.0.1 port 38106
Sep 5 00:30:41.064872 sshd-session[5490]: pam_unix(sshd:session): session closed for user core
Sep 5 00:30:41.071178 systemd[1]: sshd@13-10.0.0.38:22-10.0.0.1:38106.service: Deactivated successfully.
Sep 5 00:30:41.074266 systemd[1]: session-13.scope: Deactivated successfully.
Sep 5 00:30:41.075540 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit.
Sep 5 00:30:41.076997 systemd-logind[1540]: Removed session 13.
Sep 5 00:30:42.335482 containerd[1572]: time="2025-09-05T00:30:42.335426533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78744935d218bec9ce5e07ad189c03ce56974f61dce163d0f169da48825546a7\" id:\"ab47e2a63af21eb6dd53e1c33813b492c13b0369026b17d7702db1f1cbd9aaaf\" pid:5516 exited_at:{seconds:1757032242 nanos:334631412}"
Sep 5 00:30:46.081574 systemd[1]: Started sshd@14-10.0.0.38:22-10.0.0.1:38116.service - OpenSSH per-connection server daemon (10.0.0.1:38116).
Sep 5 00:30:46.150016 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 38116 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:30:46.152479 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:30:46.157925 systemd-logind[1540]: New session 14 of user core.
Sep 5 00:30:46.173214 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 5 00:30:46.318290 sshd[5540]: Connection closed by 10.0.0.1 port 38116
Sep 5 00:30:46.318707 sshd-session[5537]: pam_unix(sshd:session): session closed for user core
Sep 5 00:30:46.325210 systemd[1]: sshd@14-10.0.0.38:22-10.0.0.1:38116.service: Deactivated successfully.
Sep 5 00:30:46.327805 systemd[1]: session-14.scope: Deactivated successfully.
Sep 5 00:30:46.328809 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit.
Sep 5 00:30:46.330892 systemd-logind[1540]: Removed session 14.
Sep 5 00:30:46.916389 kubelet[2773]: E0905 00:30:46.916310 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:30:48.877775 kubelet[2773]: I0905 00:30:48.877710 2773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:30:49.321200 containerd[1572]: time="2025-09-05T00:30:49.320973215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43c03a541c9bb9ebe72678e52b83cf15ff16775b0961fdaa4bc1bfbd58c548f9\" id:\"e70f0de8a83e5968ff1f5e9393990e3e3f95305b1360adc4e83540145c213920\" pid:5570 exited_at:{seconds:1757032249 nanos:320522725}"
Sep 5 00:30:49.921296 kubelet[2773]: E0905 00:30:49.921225 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:30:51.335395 systemd[1]: Started sshd@15-10.0.0.38:22-10.0.0.1:35130.service - OpenSSH per-connection server daemon (10.0.0.1:35130).
Sep 5 00:30:51.424639 sshd[5581]: Accepted publickey for core from 10.0.0.1 port 35130 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:30:51.427210 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:30:51.433011 systemd-logind[1540]: New session 15 of user core.
Sep 5 00:30:51.440280 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 5 00:30:51.631878 sshd[5584]: Connection closed by 10.0.0.1 port 35130
Sep 5 00:30:51.632349 sshd-session[5581]: pam_unix(sshd:session): session closed for user core
Sep 5 00:30:51.637821 systemd[1]: sshd@15-10.0.0.38:22-10.0.0.1:35130.service: Deactivated successfully.
Sep 5 00:30:51.640392 systemd[1]: session-15.scope: Deactivated successfully.
Sep 5 00:30:51.641295 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit.
Sep 5 00:30:51.643129 systemd-logind[1540]: Removed session 15.
Sep 5 00:30:53.916543 kubelet[2773]: E0905 00:30:53.916479 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:30:56.649300 systemd[1]: Started sshd@16-10.0.0.38:22-10.0.0.1:35140.service - OpenSSH per-connection server daemon (10.0.0.1:35140).
Sep 5 00:30:56.704210 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 35140 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:30:56.706011 sshd-session[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:30:56.711317 systemd-logind[1540]: New session 16 of user core.
Sep 5 00:30:56.731396 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 00:30:56.878070 sshd[5606]: Connection closed by 10.0.0.1 port 35140
Sep 5 00:30:56.879384 sshd-session[5603]: pam_unix(sshd:session): session closed for user core
Sep 5 00:30:56.884926 systemd[1]: sshd@16-10.0.0.38:22-10.0.0.1:35140.service: Deactivated successfully.
Sep 5 00:30:56.888108 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 00:30:56.890188 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit.
Sep 5 00:30:56.893623 systemd-logind[1540]: Removed session 16.
Sep 5 00:31:01.897005 systemd[1]: Started sshd@17-10.0.0.38:22-10.0.0.1:38926.service - OpenSSH per-connection server daemon (10.0.0.1:38926).
Sep 5 00:31:01.973278 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 38926 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:31:01.975507 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:31:01.981407 systemd-logind[1540]: New session 17 of user core.
Sep 5 00:31:01.989301 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 00:31:02.169790 sshd[5623]: Connection closed by 10.0.0.1 port 38926
Sep 5 00:31:02.170086 sshd-session[5620]: pam_unix(sshd:session): session closed for user core
Sep 5 00:31:02.174721 systemd[1]: sshd@17-10.0.0.38:22-10.0.0.1:38926.service: Deactivated successfully.
Sep 5 00:31:02.177184 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 00:31:02.179079 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit.
Sep 5 00:31:02.180694 systemd-logind[1540]: Removed session 17.
Sep 5 00:31:03.428537 containerd[1572]: time="2025-09-05T00:31:03.428483633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3\" id:\"985e067419baf2042489d4019c6b840fa1543f2048408d5d655910e14aae0a5a\" pid:5647 exited_at:{seconds:1757032263 nanos:427913230}"
Sep 5 00:31:07.187481 systemd[1]: Started sshd@18-10.0.0.38:22-10.0.0.1:38930.service - OpenSSH per-connection server daemon (10.0.0.1:38930).
Sep 5 00:31:07.247581 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 38930 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:31:07.249588 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:31:07.254974 systemd-logind[1540]: New session 18 of user core.
Sep 5 00:31:07.266392 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 00:31:07.420793 sshd[5667]: Connection closed by 10.0.0.1 port 38930
Sep 5 00:31:07.421428 sshd-session[5664]: pam_unix(sshd:session): session closed for user core
Sep 5 00:31:07.436016 systemd[1]: sshd@18-10.0.0.38:22-10.0.0.1:38930.service: Deactivated successfully.
Sep 5 00:31:07.439008 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 00:31:07.441364 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit.
Sep 5 00:31:07.445800 systemd[1]: Started sshd@19-10.0.0.38:22-10.0.0.1:38938.service - OpenSSH per-connection server daemon (10.0.0.1:38938).
Sep 5 00:31:07.447703 systemd-logind[1540]: Removed session 18.
Sep 5 00:31:07.522855 sshd[5680]: Accepted publickey for core from 10.0.0.1 port 38938 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:31:07.524874 sshd-session[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:31:07.530577 systemd-logind[1540]: New session 19 of user core.
Sep 5 00:31:07.541352 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 00:31:09.087952 sshd[5683]: Connection closed by 10.0.0.1 port 38938
Sep 5 00:31:09.090114 sshd-session[5680]: pam_unix(sshd:session): session closed for user core
Sep 5 00:31:09.098671 systemd[1]: sshd@19-10.0.0.38:22-10.0.0.1:38938.service: Deactivated successfully.
Sep 5 00:31:09.101234 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 00:31:09.102376 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit.
Sep 5 00:31:09.106905 systemd[1]: Started sshd@20-10.0.0.38:22-10.0.0.1:38952.service - OpenSSH per-connection server daemon (10.0.0.1:38952).
Sep 5 00:31:09.107758 systemd-logind[1540]: Removed session 19.
Sep 5 00:31:09.185425 sshd[5697]: Accepted publickey for core from 10.0.0.1 port 38952 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:31:09.187330 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:31:09.192154 systemd-logind[1540]: New session 20 of user core.
Sep 5 00:31:09.202151 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 00:31:10.166267 sshd[5700]: Connection closed by 10.0.0.1 port 38952
Sep 5 00:31:10.166701 sshd-session[5697]: pam_unix(sshd:session): session closed for user core
Sep 5 00:31:10.180389 systemd[1]: sshd@20-10.0.0.38:22-10.0.0.1:38952.service: Deactivated successfully.
Sep 5 00:31:10.182825 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 00:31:10.183782 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit.
Sep 5 00:31:10.188297 systemd[1]: Started sshd@21-10.0.0.38:22-10.0.0.1:57522.service - OpenSSH per-connection server daemon (10.0.0.1:57522).
Sep 5 00:31:10.189504 systemd-logind[1540]: Removed session 20.
Sep 5 00:31:10.251551 sshd[5741]: Accepted publickey for core from 10.0.0.1 port 57522 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:31:10.253197 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:31:10.257751 systemd-logind[1540]: New session 21 of user core.
Sep 5 00:31:10.264213 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 5 00:31:11.406743 sshd[5744]: Connection closed by 10.0.0.1 port 57522
Sep 5 00:31:11.406303 sshd-session[5741]: pam_unix(sshd:session): session closed for user core
Sep 5 00:31:11.420070 systemd[1]: sshd@21-10.0.0.38:22-10.0.0.1:57522.service: Deactivated successfully.
Sep 5 00:31:11.422913 systemd[1]: session-21.scope: Deactivated successfully.
Sep 5 00:31:11.423993 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit.
Sep 5 00:31:11.427817 systemd[1]: Started sshd@22-10.0.0.38:22-10.0.0.1:57528.service - OpenSSH per-connection server daemon (10.0.0.1:57528).
Sep 5 00:31:11.429353 systemd-logind[1540]: Removed session 21.
Sep 5 00:31:11.482895 sshd[5756]: Accepted publickey for core from 10.0.0.1 port 57528 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:31:11.484353 sshd-session[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:31:11.489184 systemd-logind[1540]: New session 22 of user core.
Sep 5 00:31:11.498187 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 5 00:31:11.646178 sshd[5759]: Connection closed by 10.0.0.1 port 57528
Sep 5 00:31:11.646539 sshd-session[5756]: pam_unix(sshd:session): session closed for user core
Sep 5 00:31:11.652471 systemd[1]: sshd@22-10.0.0.38:22-10.0.0.1:57528.service: Deactivated successfully.
Sep 5 00:31:11.655459 systemd[1]: session-22.scope: Deactivated successfully.
Sep 5 00:31:11.656470 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit.
Sep 5 00:31:11.658181 systemd-logind[1540]: Removed session 22.
Sep 5 00:31:11.917052 kubelet[2773]: E0905 00:31:11.916874 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:12.395605 containerd[1572]: time="2025-09-05T00:31:12.395531541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78744935d218bec9ce5e07ad189c03ce56974f61dce163d0f169da48825546a7\" id:\"12a75beb100a720a028fa13439546ed8c2b4ec62ca2a8a814e1b45355320c516\" pid:5784 exited_at:{seconds:1757032272 nanos:395018690}"
Sep 5 00:31:16.667300 systemd[1]: Started sshd@23-10.0.0.38:22-10.0.0.1:57542.service - OpenSSH per-connection server daemon (10.0.0.1:57542).
Sep 5 00:31:16.720177 sshd[5815]: Accepted publickey for core from 10.0.0.1 port 57542 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:31:16.722602 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:31:16.727604 systemd-logind[1540]: New session 23 of user core.
Sep 5 00:31:16.738273 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 5 00:31:16.740949 containerd[1572]: time="2025-09-05T00:31:16.740909281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43c03a541c9bb9ebe72678e52b83cf15ff16775b0961fdaa4bc1bfbd58c548f9\" id:\"962cad9ec2487752b799b7b9e35db36579cdb3a6738d2de1d90e9117ef1943e6\" pid:5808 exited_at:{seconds:1757032276 nanos:740542997}"
Sep 5 00:31:16.857674 sshd[5822]: Connection closed by 10.0.0.1 port 57542
Sep 5 00:31:16.858083 sshd-session[5815]: pam_unix(sshd:session): session closed for user core
Sep 5 00:31:16.862932 systemd[1]: sshd@23-10.0.0.38:22-10.0.0.1:57542.service: Deactivated successfully.
Sep 5 00:31:16.865215 systemd[1]: session-23.scope: Deactivated successfully.
Sep 5 00:31:16.865955 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit.
Sep 5 00:31:16.867125 systemd-logind[1540]: Removed session 23.
Sep 5 00:31:19.311327 containerd[1572]: time="2025-09-05T00:31:19.311256270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43c03a541c9bb9ebe72678e52b83cf15ff16775b0961fdaa4bc1bfbd58c548f9\" id:\"32b465ad120742d679d854808323dedc52a8f6f4717a9f379dcbde44ae06d165\" pid:5846 exited_at:{seconds:1757032279 nanos:310454833}"
Sep 5 00:31:21.873358 systemd[1]: Started sshd@24-10.0.0.38:22-10.0.0.1:51028.service - OpenSSH per-connection server daemon (10.0.0.1:51028).
Sep 5 00:31:21.951371 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 51028 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:31:21.955733 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:31:21.965217 systemd-logind[1540]: New session 24 of user core.
Sep 5 00:31:21.972245 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 5 00:31:22.180088 sshd[5861]: Connection closed by 10.0.0.1 port 51028
Sep 5 00:31:22.180449 sshd-session[5858]: pam_unix(sshd:session): session closed for user core
Sep 5 00:31:22.186260 systemd[1]: sshd@24-10.0.0.38:22-10.0.0.1:51028.service: Deactivated successfully.
Sep 5 00:31:22.188951 systemd[1]: session-24.scope: Deactivated successfully.
Sep 5 00:31:22.191285 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit.
Sep 5 00:31:22.192974 systemd-logind[1540]: Removed session 24.
Sep 5 00:31:22.915738 kubelet[2773]: E0905 00:31:22.915681 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:24.147535 containerd[1572]: time="2025-09-05T00:31:24.147466087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3\" id:\"7d233c773c050982496e887db3e3c254a148bbc48acd10e5f23648614ef9857d\" pid:5886 exited_at:{seconds:1757032284 nanos:146952145}"
Sep 5 00:31:27.192414 systemd[1]: Started sshd@25-10.0.0.38:22-10.0.0.1:51044.service - OpenSSH per-connection server daemon (10.0.0.1:51044).
Sep 5 00:31:27.265622 sshd[5901]: Accepted publickey for core from 10.0.0.1 port 51044 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:31:27.267205 sshd-session[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:31:27.271827 systemd-logind[1540]: New session 25 of user core.
Sep 5 00:31:27.286153 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 5 00:31:27.510158 sshd[5904]: Connection closed by 10.0.0.1 port 51044
Sep 5 00:31:27.511250 sshd-session[5901]: pam_unix(sshd:session): session closed for user core
Sep 5 00:31:27.516203 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit.
Sep 5 00:31:27.518541 systemd[1]: sshd@25-10.0.0.38:22-10.0.0.1:51044.service: Deactivated successfully.
Sep 5 00:31:27.521526 systemd[1]: session-25.scope: Deactivated successfully.
Sep 5 00:31:27.525726 systemd-logind[1540]: Removed session 25.
Sep 5 00:31:32.523718 systemd[1]: Started sshd@26-10.0.0.38:22-10.0.0.1:54584.service - OpenSSH per-connection server daemon (10.0.0.1:54584).
Sep 5 00:31:32.590358 sshd[5919]: Accepted publickey for core from 10.0.0.1 port 54584 ssh2: RSA SHA256:FTPBIqxhuV7uWZ1wDxThX13wKyZS1sKik/rNT688yZo
Sep 5 00:31:32.592845 sshd-session[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:31:32.598476 systemd-logind[1540]: New session 26 of user core.
Sep 5 00:31:32.608250 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 5 00:31:32.824316 sshd[5922]: Connection closed by 10.0.0.1 port 54584
Sep 5 00:31:32.829338 sshd-session[5919]: pam_unix(sshd:session): session closed for user core
Sep 5 00:31:32.838255 systemd-logind[1540]: Session 26 logged out. Waiting for processes to exit.
Sep 5 00:31:32.840836 systemd[1]: sshd@26-10.0.0.38:22-10.0.0.1:54584.service: Deactivated successfully.
Sep 5 00:31:32.844326 systemd[1]: session-26.scope: Deactivated successfully.
Sep 5 00:31:32.847495 systemd-logind[1540]: Removed session 26.
Sep 5 00:31:33.452662 containerd[1572]: time="2025-09-05T00:31:33.452606019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49fe674055357d4affaff4a673f3d8616f6caaef38c842ac08f155ef659a8bb3\" id:\"0d8fbca73facd728036ebbebfb770eafbf844a1c7c01148d6ee1262494efea78\" pid:5948 exited_at:{seconds:1757032293 nanos:451317886}"
Sep 5 00:31:33.916559 kubelet[2773]: E0905 00:31:33.916509 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"