Sep 5 00:46:03.783962 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:16:03 -00 2025
Sep 5 00:46:03.783992 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8098f8b005e7ec91dd20cd6ed926d3f56a1236d6886e322045b268199230ff25
Sep 5 00:46:03.784003 kernel: BIOS-provided physical RAM map:
Sep 5 00:46:03.784013 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 5 00:46:03.784026 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 5 00:46:03.784036 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 5 00:46:03.784057 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 5 00:46:03.784070 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 5 00:46:03.784078 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 5 00:46:03.784087 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 5 00:46:03.784096 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 00:46:03.784105 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 5 00:46:03.784114 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 00:46:03.784123 kernel: NX (Execute Disable) protection: active
Sep 5 00:46:03.784137 kernel: APIC: Static calls initialized
Sep 5 00:46:03.784146 kernel: SMBIOS 2.8 present.
Sep 5 00:46:03.784156 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 5 00:46:03.784166 kernel: DMI: Memory slots populated: 1/1
Sep 5 00:46:03.784176 kernel: Hypervisor detected: KVM
Sep 5 00:46:03.784185 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 00:46:03.784195 kernel: kvm-clock: using sched offset of 3302791513 cycles
Sep 5 00:46:03.784205 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 00:46:03.784215 kernel: tsc: Detected 2794.748 MHz processor
Sep 5 00:46:03.784228 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 00:46:03.784239 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 00:46:03.784248 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 5 00:46:03.784258 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 5 00:46:03.784268 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 00:46:03.784278 kernel: Using GB pages for direct mapping
Sep 5 00:46:03.784288 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:46:03.784297 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 5 00:46:03.784307 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:46:03.784319 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:46:03.784328 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:46:03.784337 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 5 00:46:03.784346 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:46:03.784356 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:46:03.784365 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:46:03.784375 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:46:03.784385 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 5 00:46:03.784401 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 5 00:46:03.784411 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 5 00:46:03.784422 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 5 00:46:03.784432 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 5 00:46:03.784443 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 5 00:46:03.784453 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 5 00:46:03.784465 kernel: No NUMA configuration found
Sep 5 00:46:03.784476 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 5 00:46:03.784486 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 5 00:46:03.784496 kernel: Zone ranges:
Sep 5 00:46:03.784506 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 00:46:03.784516 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 5 00:46:03.784527 kernel: Normal empty
Sep 5 00:46:03.784537 kernel: Device empty
Sep 5 00:46:03.784547 kernel: Movable zone start for each node
Sep 5 00:46:03.784560 kernel: Early memory node ranges
Sep 5 00:46:03.784570 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 5 00:46:03.784580 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 5 00:46:03.784591 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 5 00:46:03.784601 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:46:03.784611 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 5 00:46:03.784622 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 5 00:46:03.784632 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 00:46:03.784642 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 00:46:03.784670 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 00:46:03.784683 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 00:46:03.784693 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 00:46:03.784703 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 00:46:03.784712 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 00:46:03.784722 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 00:46:03.784732 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 00:46:03.784742 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 00:46:03.784753 kernel: TSC deadline timer available
Sep 5 00:46:03.784763 kernel: CPU topo: Max. logical packages: 1
Sep 5 00:46:03.784776 kernel: CPU topo: Max. logical dies: 1
Sep 5 00:46:03.784786 kernel: CPU topo: Max. dies per package: 1
Sep 5 00:46:03.784797 kernel: CPU topo: Max. threads per core: 1
Sep 5 00:46:03.784807 kernel: CPU topo: Num. cores per package: 4
Sep 5 00:46:03.784817 kernel: CPU topo: Num. threads per package: 4
Sep 5 00:46:03.784827 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 5 00:46:03.784838 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 00:46:03.784848 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 00:46:03.784859 kernel: kvm-guest: setup PV sched yield
Sep 5 00:46:03.784869 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 5 00:46:03.784881 kernel: Booting paravirtualized kernel on KVM
Sep 5 00:46:03.784892 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 00:46:03.784903 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 00:46:03.784913 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 5 00:46:03.784924 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 5 00:46:03.784934 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 00:46:03.784944 kernel: kvm-guest: PV spinlocks enabled
Sep 5 00:46:03.784955 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 00:46:03.784966 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8098f8b005e7ec91dd20cd6ed926d3f56a1236d6886e322045b268199230ff25
Sep 5 00:46:03.784980 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:46:03.784990 kernel: random: crng init done
Sep 5 00:46:03.785000 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:46:03.785011 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:46:03.785021 kernel: Fallback order for Node 0: 0
Sep 5 00:46:03.785032 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 5 00:46:03.785051 kernel: Policy zone: DMA32
Sep 5 00:46:03.785062 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:46:03.785075 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:46:03.785085 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 5 00:46:03.785096 kernel: ftrace: allocated 157 pages with 5 groups
Sep 5 00:46:03.785106 kernel: Dynamic Preempt: voluntary
Sep 5 00:46:03.785116 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:46:03.785127 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:46:03.785138 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:46:03.785149 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:46:03.785159 kernel: Rude variant of Tasks RCU enabled.
Sep 5 00:46:03.785172 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:46:03.785182 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:46:03.785193 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:46:03.785204 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:46:03.785214 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:46:03.785225 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:46:03.785235 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 00:46:03.785246 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:46:03.785267 kernel: Console: colour VGA+ 80x25
Sep 5 00:46:03.785278 kernel: printk: legacy console [ttyS0] enabled
Sep 5 00:46:03.785289 kernel: ACPI: Core revision 20240827
Sep 5 00:46:03.785300 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 00:46:03.785312 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 00:46:03.785323 kernel: x2apic enabled
Sep 5 00:46:03.785334 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 00:46:03.785345 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 00:46:03.785356 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 00:46:03.785369 kernel: kvm-guest: setup PV IPIs
Sep 5 00:46:03.785380 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 00:46:03.785391 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 00:46:03.785403 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 5 00:46:03.785414 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 00:46:03.785425 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 00:46:03.785436 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 00:46:03.785447 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 00:46:03.785460 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 00:46:03.785471 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 00:46:03.785482 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 00:46:03.785493 kernel: active return thunk: retbleed_return_thunk
Sep 5 00:46:03.785503 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 00:46:03.785514 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 00:46:03.785526 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 00:46:03.785537 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 00:46:03.785548 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 00:46:03.785562 kernel: active return thunk: srso_return_thunk
Sep 5 00:46:03.785573 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 00:46:03.785584 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 00:46:03.785595 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 00:46:03.785606 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 00:46:03.785617 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 00:46:03.785628 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 00:46:03.785639 kernel: Freeing SMP alternatives memory: 32K
Sep 5 00:46:03.785663 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:46:03.785693 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 5 00:46:03.785704 kernel: landlock: Up and running.
Sep 5 00:46:03.785715 kernel: SELinux: Initializing.
Sep 5 00:46:03.785726 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:46:03.785738 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:46:03.785749 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 00:46:03.785760 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 00:46:03.785771 kernel: ... version: 0
Sep 5 00:46:03.785781 kernel: ... bit width: 48
Sep 5 00:46:03.785795 kernel: ... generic registers: 6
Sep 5 00:46:03.785806 kernel: ... value mask: 0000ffffffffffff
Sep 5 00:46:03.785817 kernel: ... max period: 00007fffffffffff
Sep 5 00:46:03.785827 kernel: ... fixed-purpose events: 0
Sep 5 00:46:03.785838 kernel: ... event mask: 000000000000003f
Sep 5 00:46:03.785849 kernel: signal: max sigframe size: 1776
Sep 5 00:46:03.785860 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:46:03.785871 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:46:03.785882 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 5 00:46:03.785895 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:46:03.785906 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 00:46:03.785917 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 00:46:03.785928 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:46:03.785939 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 5 00:46:03.785950 kernel: Memory: 2430968K/2571752K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 134856K reserved, 0K cma-reserved)
Sep 5 00:46:03.785961 kernel: devtmpfs: initialized
Sep 5 00:46:03.785972 kernel: x86/mm: Memory block size: 128MB
Sep 5 00:46:03.785986 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:46:03.786001 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:46:03.786012 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:46:03.786023 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:46:03.786034 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:46:03.786053 kernel: audit: type=2000 audit(1757033160.671:1): state=initialized audit_enabled=0 res=1
Sep 5 00:46:03.786064 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:46:03.786075 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 00:46:03.786086 kernel: cpuidle: using governor menu
Sep 5 00:46:03.786097 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:46:03.786110 kernel: dca service started, version 1.12.1
Sep 5 00:46:03.786121 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 5 00:46:03.786132 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 5 00:46:03.786143 kernel: PCI: Using configuration type 1 for base access
Sep 5 00:46:03.786154 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 00:46:03.786165 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:46:03.786176 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:46:03.786187 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:46:03.786198 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:46:03.786211 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:46:03.786222 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:46:03.786233 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:46:03.786244 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:46:03.786254 kernel: ACPI: Interpreter enabled
Sep 5 00:46:03.786265 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 00:46:03.786276 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 00:46:03.786287 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 00:46:03.786298 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 00:46:03.786311 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 00:46:03.786322 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:46:03.786531 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:46:03.786692 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 00:46:03.786839 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 00:46:03.786854 kernel: PCI host bridge to bus 0000:00
Sep 5 00:46:03.787008 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 00:46:03.787158 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 00:46:03.787296 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 00:46:03.787429 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 5 00:46:03.787593 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 5 00:46:03.787757 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 5 00:46:03.787892 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:46:03.788066 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 5 00:46:03.788231 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 5 00:46:03.788378 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 5 00:46:03.788524 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 5 00:46:03.788706 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 5 00:46:03.788856 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 00:46:03.789021 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 5 00:46:03.789186 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 5 00:46:03.789335 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 5 00:46:03.789483 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 5 00:46:03.789640 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 5 00:46:03.789816 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 5 00:46:03.789964 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 5 00:46:03.790120 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 5 00:46:03.790283 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 5 00:46:03.790430 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 5 00:46:03.790577 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 5 00:46:03.790769 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 5 00:46:03.790919 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 5 00:46:03.791089 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 5 00:46:03.791238 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 00:46:03.791398 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 5 00:46:03.791547 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 5 00:46:03.791714 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 5 00:46:03.791872 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 5 00:46:03.792020 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 5 00:46:03.792035 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 00:46:03.792059 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 00:46:03.792070 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 00:46:03.792080 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 00:46:03.792091 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 00:46:03.792101 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 00:46:03.792112 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 00:46:03.792122 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 00:46:03.792133 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 00:46:03.792143 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 00:46:03.792156 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 00:46:03.792167 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 00:46:03.792178 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 00:46:03.792188 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 00:46:03.792198 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 00:46:03.792209 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 00:46:03.792220 kernel: iommu: Default domain type: Translated
Sep 5 00:46:03.792230 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 00:46:03.792241 kernel: PCI: Using ACPI for IRQ routing
Sep 5 00:46:03.792252 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 00:46:03.792264 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 5 00:46:03.792275 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 5 00:46:03.792423 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 00:46:03.792570 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 00:46:03.792733 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 00:46:03.792749 kernel: vgaarb: loaded
Sep 5 00:46:03.792760 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 00:46:03.792771 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 00:46:03.792785 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 00:46:03.792796 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:46:03.792806 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:46:03.792817 kernel: pnp: PnP ACPI init
Sep 5 00:46:03.792974 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 5 00:46:03.792990 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 00:46:03.793001 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 00:46:03.793012 kernel: NET: Registered PF_INET protocol family
Sep 5 00:46:03.793026 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:46:03.793037 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:46:03.793056 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:46:03.793066 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:46:03.793077 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:46:03.793087 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:46:03.793098 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:46:03.793132 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:46:03.793143 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:46:03.793160 kernel: NET: Registered PF_XDP protocol family
Sep 5 00:46:03.793304 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 00:46:03.793439 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 00:46:03.793573 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 00:46:03.793729 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 5 00:46:03.793863 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 5 00:46:03.793997 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 5 00:46:03.794012 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:46:03.794027 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 00:46:03.794038 kernel: Initialise system trusted keyrings
Sep 5 00:46:03.794058 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:46:03.794071 kernel: Key type asymmetric registered
Sep 5 00:46:03.794083 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:46:03.794095 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 00:46:03.794105 kernel: io scheduler mq-deadline registered
Sep 5 00:46:03.794116 kernel: io scheduler kyber registered
Sep 5 00:46:03.794126 kernel: io scheduler bfq registered
Sep 5 00:46:03.794140 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 00:46:03.794151 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 00:46:03.794161 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 00:46:03.794172 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 00:46:03.794183 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:46:03.794193 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 00:46:03.794204 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 00:46:03.794215 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 00:46:03.794225 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 00:46:03.794390 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 00:46:03.794407 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 5 00:46:03.794542 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 00:46:03.794698 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:46:03 UTC (1757033163)
Sep 5 00:46:03.794838 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 5 00:46:03.794853 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 5 00:46:03.794864 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:46:03.794875 kernel: Segment Routing with IPv6
Sep 5 00:46:03.794889 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:46:03.794900 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:46:03.794910 kernel: Key type dns_resolver registered
Sep 5 00:46:03.794921 kernel: IPI shorthand broadcast: enabled
Sep 5 00:46:03.794931 kernel: sched_clock: Marking stable (2710002369, 108604476)->(2835673734, -17066889)
Sep 5 00:46:03.794942 kernel: registered taskstats version 1
Sep 5 00:46:03.794952 kernel: Loading compiled-in X.509 certificates
Sep 5 00:46:03.794963 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 46ac630679a94cf97f27908ed9d949b10b130587'
Sep 5 00:46:03.794974 kernel: Demotion targets for Node 0: null
Sep 5 00:46:03.794987 kernel: Key type .fscrypt registered
Sep 5 00:46:03.794997 kernel: Key type fscrypt-provisioning registered
Sep 5 00:46:03.795008 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:46:03.795019 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:46:03.795029 kernel: ima: No architecture policies found
Sep 5 00:46:03.795040 kernel: clk: Disabling unused clocks
Sep 5 00:46:03.795062 kernel: Warning: unable to open an initial console.
Sep 5 00:46:03.795073 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 5 00:46:03.795083 kernel: Write protecting the kernel read-only data: 24576k
Sep 5 00:46:03.795095 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 5 00:46:03.795103 kernel: Run /init as init process
Sep 5 00:46:03.795111 kernel: with arguments:
Sep 5 00:46:03.795118 kernel: /init
Sep 5 00:46:03.795126 kernel: with environment:
Sep 5 00:46:03.795133 kernel: HOME=/
Sep 5 00:46:03.795141 kernel: TERM=linux
Sep 5 00:46:03.795148 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:46:03.795157 systemd[1]: Successfully made /usr/ read-only.
Sep 5 00:46:03.795179 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 00:46:03.795189 systemd[1]: Detected virtualization kvm.
Sep 5 00:46:03.795198 systemd[1]: Detected architecture x86-64.
Sep 5 00:46:03.795206 systemd[1]: Running in initrd.
Sep 5 00:46:03.795214 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:46:03.795225 systemd[1]: Hostname set to .
Sep 5 00:46:03.795234 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:46:03.795243 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:46:03.795251 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:46:03.795259 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:46:03.795268 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:46:03.795277 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:46:03.795286 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:46:03.795297 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:46:03.795307 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:46:03.795315 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:46:03.795324 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:46:03.795332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:46:03.795340 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:46:03.795349 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:46:03.795359 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:46:03.795368 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:46:03.795378 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:46:03.795386 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:46:03.795395 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:46:03.795404 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 5 00:46:03.795412 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:46:03.795421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:46:03.795431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:46:03.795439 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:46:03.795448 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 00:46:03.795456 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:46:03.795467 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 00:46:03.795476 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 5 00:46:03.795486 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 00:46:03.795495 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:46:03.795504 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:46:03.795512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:46:03.795521 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 00:46:03.795532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:46:03.795540 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 00:46:03.795549 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:46:03.795577 systemd-journald[218]: Collecting audit messages is disabled. Sep 5 00:46:03.795599 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:46:03.795608 systemd-journald[218]: Journal started Sep 5 00:46:03.795628 systemd-journald[218]: Runtime Journal (/run/log/journal/6f0ef38295314d45b9d31b4698adb0f2) is 6M, max 48.6M, 42.5M free. 
Sep 5 00:46:03.780991 systemd-modules-load[221]: Inserted module 'overlay' Sep 5 00:46:03.825326 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 00:46:03.825349 kernel: Bridge firewalling registered Sep 5 00:46:03.809730 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 5 00:46:03.828169 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:46:03.828569 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:46:03.830820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:46:03.835782 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:46:03.837622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:46:03.847337 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:46:03.848272 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:46:03.858322 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:46:03.859246 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:46:03.859532 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 5 00:46:03.865316 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:46:03.867755 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:46:03.870393 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:46:03.871751 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 5 00:46:03.896852 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8098f8b005e7ec91dd20cd6ed926d3f56a1236d6886e322045b268199230ff25 Sep 5 00:46:03.916080 systemd-resolved[260]: Positive Trust Anchors: Sep 5 00:46:03.916092 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:46:03.916123 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:46:03.918715 systemd-resolved[260]: Defaulting to hostname 'linux'. Sep 5 00:46:03.919791 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:46:03.924911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:46:04.008685 kernel: SCSI subsystem initialized Sep 5 00:46:04.017674 kernel: Loading iSCSI transport class v2.0-870. Sep 5 00:46:04.028674 kernel: iscsi: registered transport (tcp) Sep 5 00:46:04.049668 kernel: iscsi: registered transport (qla4xxx) Sep 5 00:46:04.049699 kernel: QLogic iSCSI HBA Driver Sep 5 00:46:04.067691 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 5 00:46:04.092437 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:46:04.094630 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:46:04.141319 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 00:46:04.143295 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 00:46:04.203672 kernel: raid6: avx2x4 gen() 29771 MB/s Sep 5 00:46:04.220663 kernel: raid6: avx2x2 gen() 30152 MB/s Sep 5 00:46:04.237706 kernel: raid6: avx2x1 gen() 25042 MB/s Sep 5 00:46:04.237722 kernel: raid6: using algorithm avx2x2 gen() 30152 MB/s Sep 5 00:46:04.255760 kernel: raid6: .... xor() 19234 MB/s, rmw enabled Sep 5 00:46:04.255777 kernel: raid6: using avx2x2 recovery algorithm Sep 5 00:46:04.276677 kernel: xor: automatically using best checksumming function avx Sep 5 00:46:04.434677 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 00:46:04.441892 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:46:04.445430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:46:04.478561 systemd-udevd[474]: Using default interface naming scheme 'v255'. Sep 5 00:46:04.483880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:46:04.485220 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 00:46:04.510876 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Sep 5 00:46:04.535211 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:46:04.537543 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:46:04.608805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:46:04.612206 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 5 00:46:04.646249 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 5 00:46:04.653253 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 5 00:46:04.661677 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 00:46:04.661708 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 00:46:04.661727 kernel: GPT:9289727 != 19775487 Sep 5 00:46:04.662882 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 00:46:04.662902 kernel: GPT:9289727 != 19775487 Sep 5 00:46:04.662912 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 00:46:04.663985 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:46:04.675457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:46:04.676837 kernel: libata version 3.00 loaded. Sep 5 00:46:04.677035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:46:04.679859 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:46:04.684839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 5 00:46:04.687925 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 5 00:46:04.688667 kernel: ahci 0000:00:1f.2: version 3.0 Sep 5 00:46:04.688867 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 5 00:46:04.698828 kernel: AES CTR mode by8 optimization enabled Sep 5 00:46:04.711455 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 5 00:46:04.711750 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 5 00:46:04.711947 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 5 00:46:04.720668 kernel: scsi host0: ahci Sep 5 00:46:04.724678 kernel: scsi host1: ahci Sep 5 00:46:04.729685 kernel: scsi host2: ahci Sep 5 00:46:04.730991 kernel: scsi host3: ahci Sep 5 00:46:04.731526 kernel: scsi host4: ahci Sep 5 00:46:04.732347 kernel: scsi host5: ahci Sep 5 00:46:04.732745 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 5 00:46:04.732759 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 5 00:46:04.732769 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 5 00:46:04.732779 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 5 00:46:04.732789 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 5 00:46:04.732799 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 5 00:46:04.735118 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 5 00:46:04.766911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:46:04.778201 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Sep 5 00:46:04.798373 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 5 00:46:04.801485 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 5 00:46:04.811956 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:46:04.814944 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 00:46:04.844461 disk-uuid[636]: Primary Header is updated. Sep 5 00:46:04.844461 disk-uuid[636]: Secondary Entries is updated. Sep 5 00:46:04.844461 disk-uuid[636]: Secondary Header is updated. Sep 5 00:46:04.847685 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:46:04.851669 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:46:05.044636 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 00:46:05.044708 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 5 00:46:05.044722 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 5 00:46:05.044741 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 5 00:46:05.045677 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 00:46:05.046676 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 00:46:05.047684 kernel: ata3.00: LPM support broken, forcing max_power Sep 5 00:46:05.047703 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 5 00:46:05.048131 kernel: ata3.00: applying bridge limits Sep 5 00:46:05.048686 kernel: ata3.00: LPM support broken, forcing max_power Sep 5 00:46:05.049760 kernel: ata3.00: configured for UDMA/100 Sep 5 00:46:05.050686 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 5 00:46:05.106685 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 5 00:46:05.106894 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 00:46:05.120677 kernel: sr 2:0:0:0: Attached scsi 
CD-ROM sr0 Sep 5 00:46:05.411006 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 00:46:05.413804 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:46:05.416407 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:46:05.418895 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:46:05.421878 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 00:46:05.456494 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:46:05.853663 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:46:05.853845 disk-uuid[638]: The operation has completed successfully. Sep 5 00:46:05.890368 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 00:46:05.890487 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 00:46:05.920272 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 00:46:05.932523 sh[666]: Success Sep 5 00:46:05.949915 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 00:46:05.949945 kernel: device-mapper: uevent: version 1.0.3 Sep 5 00:46:05.951037 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 5 00:46:05.960694 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 5 00:46:05.990250 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 00:46:05.993249 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 00:46:06.013106 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 5 00:46:06.021066 kernel: BTRFS: device fsid 576be3ac-7582-49ed-82f8-99c78beeeda2 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (678) Sep 5 00:46:06.021093 kernel: BTRFS info (device dm-0): first mount of filesystem 576be3ac-7582-49ed-82f8-99c78beeeda2 Sep 5 00:46:06.021104 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:46:06.026675 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 00:46:06.026701 kernel: BTRFS info (device dm-0): enabling free space tree Sep 5 00:46:06.027562 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 00:46:06.028613 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 5 00:46:06.031597 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 00:46:06.032565 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 00:46:06.036235 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 00:46:06.058034 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Sep 5 00:46:06.058075 kernel: BTRFS info (device vda6): first mount of filesystem 2861b466-0188-457c-9fd5-d64bb65bd98a Sep 5 00:46:06.058087 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:46:06.061666 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:46:06.061688 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:46:06.066661 kernel: BTRFS info (device vda6): last unmount of filesystem 2861b466-0188-457c-9fd5-d64bb65bd98a Sep 5 00:46:06.067165 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 00:46:06.069172 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 5 00:46:06.148952 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:46:06.154049 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:46:06.161493 ignition[756]: Ignition 2.21.0 Sep 5 00:46:06.161507 ignition[756]: Stage: fetch-offline Sep 5 00:46:06.161590 ignition[756]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:46:06.161599 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:46:06.161735 ignition[756]: parsed url from cmdline: "" Sep 5 00:46:06.161742 ignition[756]: no config URL provided Sep 5 00:46:06.161747 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 00:46:06.161759 ignition[756]: no config at "/usr/lib/ignition/user.ign" Sep 5 00:46:06.161787 ignition[756]: op(1): [started] loading QEMU firmware config module Sep 5 00:46:06.161792 ignition[756]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 5 00:46:06.174042 ignition[756]: op(1): [finished] loading QEMU firmware config module Sep 5 00:46:06.195741 systemd-networkd[854]: lo: Link UP Sep 5 00:46:06.195751 systemd-networkd[854]: lo: Gained carrier Sep 5 00:46:06.197242 systemd-networkd[854]: Enumeration completed Sep 5 00:46:06.197452 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:46:06.197583 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:46:06.197587 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:46:06.198675 systemd-networkd[854]: eth0: Link UP Sep 5 00:46:06.198806 systemd-networkd[854]: eth0: Gained carrier Sep 5 00:46:06.198814 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:46:06.200928 systemd[1]: Reached target network.target - Network. 
Sep 5 00:46:06.222692 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.4/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:46:06.226804 ignition[756]: parsing config with SHA512: 96bf8b18c09cf0765b23b140000c202f2a437cc3d9a26de5cfacfcc19136d9786323c2e172ad4f7321d45892a1595ae19f3f783a960803709382fa21e4753403 Sep 5 00:46:06.230415 unknown[756]: fetched base config from "system" Sep 5 00:46:06.230426 unknown[756]: fetched user config from "qemu" Sep 5 00:46:06.230854 ignition[756]: fetch-offline: fetch-offline passed Sep 5 00:46:06.230902 ignition[756]: Ignition finished successfully Sep 5 00:46:06.234125 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:46:06.235454 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 5 00:46:06.236231 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 00:46:06.277968 ignition[861]: Ignition 2.21.0 Sep 5 00:46:06.277999 ignition[861]: Stage: kargs Sep 5 00:46:06.278294 ignition[861]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:46:06.278316 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:46:06.280736 ignition[861]: kargs: kargs passed Sep 5 00:46:06.280789 ignition[861]: Ignition finished successfully Sep 5 00:46:06.285084 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 00:46:06.287339 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 5 00:46:06.319442 ignition[869]: Ignition 2.21.0 Sep 5 00:46:06.319455 ignition[869]: Stage: disks Sep 5 00:46:06.319578 ignition[869]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:46:06.319588 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:46:06.324002 ignition[869]: disks: disks passed Sep 5 00:46:06.324670 ignition[869]: Ignition finished successfully Sep 5 00:46:06.327362 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 00:46:06.329796 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 00:46:06.330252 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 00:46:06.330572 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:46:06.331093 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:46:06.331407 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:46:06.332990 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 00:46:06.360394 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 5 00:46:06.368212 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 00:46:06.369494 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 00:46:06.473673 kernel: EXT4-fs (vda9): mounted filesystem b20472b4-8182-496c-8475-ee073ab90b5c r/w with ordered data mode. Quota mode: none. Sep 5 00:46:06.473844 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 00:46:06.474694 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 00:46:06.477827 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:46:06.479048 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 00:46:06.480101 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 5 00:46:06.480143 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 00:46:06.480167 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:46:06.497862 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 00:46:06.499747 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 00:46:06.504533 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887) Sep 5 00:46:06.504559 kernel: BTRFS info (device vda6): first mount of filesystem 2861b466-0188-457c-9fd5-d64bb65bd98a Sep 5 00:46:06.504570 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:46:06.508677 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:46:06.508727 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:46:06.510557 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:46:06.536296 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 00:46:06.541400 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Sep 5 00:46:06.545985 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 00:46:06.550564 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 00:46:06.634990 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 00:46:06.636285 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 00:46:06.638211 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 00:46:06.653754 kernel: BTRFS info (device vda6): last unmount of filesystem 2861b466-0188-457c-9fd5-d64bb65bd98a Sep 5 00:46:06.665204 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 5 00:46:06.679344 ignition[1001]: INFO : Ignition 2.21.0 Sep 5 00:46:06.679344 ignition[1001]: INFO : Stage: mount Sep 5 00:46:06.681000 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:46:06.681000 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:46:06.684394 ignition[1001]: INFO : mount: mount passed Sep 5 00:46:06.685197 ignition[1001]: INFO : Ignition finished successfully Sep 5 00:46:06.688934 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 00:46:06.691862 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 00:46:07.020125 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 00:46:07.021801 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:46:07.052408 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013) Sep 5 00:46:07.052436 kernel: BTRFS info (device vda6): first mount of filesystem 2861b466-0188-457c-9fd5-d64bb65bd98a Sep 5 00:46:07.052448 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:46:07.055668 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:46:07.055694 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:46:07.057912 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 5 00:46:07.092404 ignition[1030]: INFO : Ignition 2.21.0 Sep 5 00:46:07.092404 ignition[1030]: INFO : Stage: files Sep 5 00:46:07.094579 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:46:07.094579 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:46:07.097191 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Sep 5 00:46:07.098343 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 00:46:07.098343 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 00:46:07.101364 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 00:46:07.101364 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 00:46:07.101364 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 00:46:07.100316 unknown[1030]: wrote ssh authorized keys file for user: core Sep 5 00:46:07.106495 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 5 00:46:07.106495 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 5 00:46:07.277511 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 00:46:07.550557 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 5 00:46:07.550557 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 5 00:46:07.554457 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 
00:46:07.554457 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:46:07.554457 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:46:07.554457 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:46:07.554457 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:46:07.554457 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:46:07.554457 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:46:07.566969 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:46:07.566969 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:46:07.566969 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 5 00:46:07.566969 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 5 00:46:07.566969 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 5 00:46:07.566969 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 5 00:46:08.014695 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 5 00:46:08.028824 systemd-networkd[854]: eth0: Gained IPv6LL Sep 5 00:46:08.670925 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 5 00:46:08.670925 ignition[1030]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 5 00:46:08.675165 ignition[1030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:46:08.677109 ignition[1030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:46:08.677109 ignition[1030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 5 00:46:08.677109 ignition[1030]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 5 00:46:08.677109 ignition[1030]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:46:08.677109 ignition[1030]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:46:08.677109 ignition[1030]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 5 00:46:08.677109 ignition[1030]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 5 00:46:08.692782 ignition[1030]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:46:08.697426 ignition[1030]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:46:08.699115 
ignition[1030]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 5 00:46:08.699115 ignition[1030]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 5 00:46:08.699115 ignition[1030]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 00:46:08.703442 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:46:08.703442 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:46:08.703442 ignition[1030]: INFO : files: files passed Sep 5 00:46:08.703442 ignition[1030]: INFO : Ignition finished successfully Sep 5 00:46:08.707928 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 00:46:08.711154 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 00:46:08.713414 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:46:08.728494 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 00:46:08.728620 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 00:46:08.731855 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory Sep 5 00:46:08.733223 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:46:08.733223 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:46:08.737491 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:46:08.735295 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Sep 5 00:46:08.737988 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 00:46:08.741372 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 00:46:08.780027 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 00:46:08.780162 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 00:46:08.780620 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 00:46:08.783527 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 00:46:08.785617 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 00:46:08.788574 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 00:46:08.804120 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:46:08.806685 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 00:46:08.827626 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:46:08.827978 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:46:08.830263 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 00:46:08.830593 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 00:46:08.830713 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:46:08.835593 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 00:46:08.837596 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 00:46:08.838132 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 00:46:08.838446 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:46:08.838938 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 00:46:08.844206 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 00:46:08.844509 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 00:46:08.844992 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:46:08.849970 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 00:46:08.851848 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 00:46:08.852156 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 00:46:08.855510 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 00:46:08.855615 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:46:08.858507 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:46:08.859226 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:46:08.859506 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 00:46:08.863341 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:46:08.864098 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 00:46:08.864205 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:46:08.868497 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 00:46:08.868608 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:46:08.869239 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 00:46:08.869474 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 00:46:08.876713 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:46:08.878001 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 00:46:08.878561 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 00:46:08.878888 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 00:46:08.878979 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:46:08.882750 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 00:46:08.882828 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:46:08.885149 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 00:46:08.885258 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:46:08.887270 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 00:46:08.887371 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 00:46:08.892799 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 00:46:08.895774 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 00:46:08.896642 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 00:46:08.896816 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:46:08.897317 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 00:46:08.897420 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:46:08.904818 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 00:46:08.904931 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 00:46:08.917008 ignition[1086]: INFO : Ignition 2.21.0
Sep 5 00:46:08.917008 ignition[1086]: INFO : Stage: umount
Sep 5 00:46:08.918565 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:46:08.918565 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:46:08.920989 ignition[1086]: INFO : umount: umount passed
Sep 5 00:46:08.921926 ignition[1086]: INFO : Ignition finished successfully
Sep 5 00:46:08.924220 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 00:46:08.924826 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 00:46:08.924962 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 00:46:08.926289 systemd[1]: Stopped target network.target - Network.
Sep 5 00:46:08.927908 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 00:46:08.927959 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 00:46:08.929226 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 00:46:08.929268 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 00:46:08.929536 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 00:46:08.929580 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 00:46:08.929899 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 00:46:08.929954 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 00:46:08.934459 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 00:46:08.936377 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 00:46:08.945419 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 00:46:08.945542 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 00:46:08.949496 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 5 00:46:08.949762 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 00:46:08.949816 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:46:08.953640 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 5 00:46:08.956220 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 00:46:08.956338 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 00:46:08.960127 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 5 00:46:08.960268 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 5 00:46:08.962350 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 00:46:08.962386 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:46:08.965029 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 00:46:08.965608 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 00:46:08.965707 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:46:08.966130 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 00:46:08.966172 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:46:08.971028 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 00:46:08.971073 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:46:08.971417 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:46:08.975631 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 5 00:46:08.993396 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 00:46:08.993569 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:46:08.994221 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 00:46:08.994265 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:46:08.996969 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 00:46:08.997005 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:46:08.997259 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 00:46:08.997301 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:46:09.002029 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 00:46:09.002076 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:46:09.002867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:46:09.002923 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:46:09.010476 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 00:46:09.012525 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 5 00:46:09.012583 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:46:09.016141 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 00:46:09.016192 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:46:09.019524 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 5 00:46:09.019571 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:46:09.023108 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 00:46:09.023157 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:46:09.023697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:46:09.023737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:46:09.028664 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 00:46:09.028766 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 00:46:09.034439 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 00:46:09.034550 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 00:46:09.101442 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 00:46:09.101558 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 00:46:09.102274 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 00:46:09.104320 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 00:46:09.104375 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 00:46:09.109052 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 00:46:09.133387 systemd[1]: Switching root.
Sep 5 00:46:09.164786 systemd-journald[218]: Journal stopped
Sep 5 00:46:10.308205 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Sep 5 00:46:10.308271 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 00:46:10.308290 kernel: SELinux: policy capability open_perms=1
Sep 5 00:46:10.308302 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 00:46:10.308315 kernel: SELinux: policy capability always_check_network=0
Sep 5 00:46:10.308332 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 00:46:10.308346 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 00:46:10.308360 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 00:46:10.308373 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 00:46:10.308388 kernel: SELinux: policy capability userspace_initial_context=0
Sep 5 00:46:10.308401 kernel: audit: type=1403 audit(1757033169.560:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 00:46:10.308417 systemd[1]: Successfully loaded SELinux policy in 46.024ms.
Sep 5 00:46:10.308445 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.235ms.
Sep 5 00:46:10.308464 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 00:46:10.308485 systemd[1]: Detected virtualization kvm.
Sep 5 00:46:10.308500 systemd[1]: Detected architecture x86-64.
Sep 5 00:46:10.308514 systemd[1]: Detected first boot.
Sep 5 00:46:10.308529 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:46:10.308542 zram_generator::config[1131]: No configuration found.
Sep 5 00:46:10.308555 kernel: Guest personality initialized and is inactive
Sep 5 00:46:10.308566 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 5 00:46:10.308577 kernel: Initialized host personality
Sep 5 00:46:10.308590 kernel: NET: Registered PF_VSOCK protocol family
Sep 5 00:46:10.308601 systemd[1]: Populated /etc with preset unit settings.
Sep 5 00:46:10.308614 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 5 00:46:10.308631 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 00:46:10.308655 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 00:46:10.308667 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:46:10.308680 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 00:46:10.308692 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 00:46:10.308704 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 00:46:10.308723 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 00:46:10.308736 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 00:46:10.308748 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 00:46:10.308760 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 00:46:10.308771 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 00:46:10.308783 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:46:10.308795 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:46:10.308808 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 00:46:10.308822 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 00:46:10.308834 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 00:46:10.308846 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:46:10.308858 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 5 00:46:10.308879 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:46:10.308891 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:46:10.308904 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 00:46:10.308916 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 00:46:10.308930 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:46:10.308942 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 00:46:10.308954 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:46:10.308967 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:46:10.308979 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:46:10.308991 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:46:10.309003 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 00:46:10.309015 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 00:46:10.309027 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 5 00:46:10.309042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:46:10.309054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:46:10.309065 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:46:10.309077 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 00:46:10.309089 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 00:46:10.309101 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 00:46:10.309113 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 00:46:10.309125 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:46:10.309137 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 00:46:10.309151 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 00:46:10.309163 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 00:46:10.309175 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 00:46:10.309188 systemd[1]: Reached target machines.target - Containers.
Sep 5 00:46:10.309200 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 00:46:10.309212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:46:10.309224 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:46:10.309236 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 00:46:10.309251 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:46:10.309264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:46:10.309279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:46:10.309294 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 00:46:10.309309 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:46:10.309324 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 00:46:10.309338 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 00:46:10.309353 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 00:46:10.309368 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 00:46:10.309384 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 00:46:10.309400 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:46:10.309414 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:46:10.309426 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:46:10.309438 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 00:46:10.309450 kernel: loop: module loaded
Sep 5 00:46:10.309461 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 00:46:10.309474 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 5 00:46:10.309490 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:46:10.309502 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 00:46:10.309514 systemd[1]: Stopped verity-setup.service.
Sep 5 00:46:10.309526 kernel: fuse: init (API version 7.41)
Sep 5 00:46:10.309537 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:46:10.309551 kernel: ACPI: bus type drm_connector registered
Sep 5 00:46:10.309563 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 00:46:10.309574 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 00:46:10.309586 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 00:46:10.309597 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 00:46:10.309609 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 00:46:10.309626 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 00:46:10.309638 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:46:10.309687 systemd-journald[1199]: Collecting audit messages is disabled.
Sep 5 00:46:10.309709 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 00:46:10.309724 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 00:46:10.309735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:46:10.309748 systemd-journald[1199]: Journal started
Sep 5 00:46:10.309770 systemd-journald[1199]: Runtime Journal (/run/log/journal/6f0ef38295314d45b9d31b4698adb0f2) is 6M, max 48.6M, 42.5M free.
Sep 5 00:46:10.070586 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 00:46:10.096428 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 5 00:46:10.096886 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 00:46:10.311309 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:46:10.313669 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:46:10.315278 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 00:46:10.316722 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:46:10.316940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:46:10.318284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:46:10.318525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:46:10.319987 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 00:46:10.320194 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 00:46:10.321575 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:46:10.321902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:46:10.323296 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:46:10.324746 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:46:10.326294 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 00:46:10.327842 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 5 00:46:10.343173 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 00:46:10.345930 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 00:46:10.348096 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 00:46:10.349245 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 00:46:10.349273 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:46:10.351241 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 5 00:46:10.362748 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 00:46:10.363885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:46:10.365820 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 00:46:10.367818 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 00:46:10.369714 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:46:10.372065 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 00:46:10.373252 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:46:10.375747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:46:10.380834 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 00:46:10.383771 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:46:10.387958 systemd-journald[1199]: Time spent on flushing to /var/log/journal/6f0ef38295314d45b9d31b4698adb0f2 is 14.673ms for 981 entries.
Sep 5 00:46:10.387958 systemd-journald[1199]: System Journal (/var/log/journal/6f0ef38295314d45b9d31b4698adb0f2) is 8M, max 195.6M, 187.6M free.
Sep 5 00:46:10.416222 systemd-journald[1199]: Received client request to flush runtime journal.
Sep 5 00:46:10.416275 kernel: loop0: detected capacity change from 0 to 146240
Sep 5 00:46:10.393261 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:46:10.394789 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 00:46:10.396796 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 00:46:10.398381 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 00:46:10.408032 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 00:46:10.411763 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 5 00:46:10.420289 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 00:46:10.428603 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:46:10.434520 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Sep 5 00:46:10.434539 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Sep 5 00:46:10.437692 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 00:46:10.442332 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:46:10.445518 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 00:46:10.449828 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 5 00:46:10.460666 kernel: loop1: detected capacity change from 0 to 113872
Sep 5 00:46:10.489676 kernel: loop2: detected capacity change from 0 to 221472
Sep 5 00:46:10.490431 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 00:46:10.494268 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:46:10.519695 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Sep 5 00:46:10.519713 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Sep 5 00:46:10.525337 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:46:10.525680 kernel: loop3: detected capacity change from 0 to 146240
Sep 5 00:46:10.538665 kernel: loop4: detected capacity change from 0 to 113872
Sep 5 00:46:10.547663 kernel: loop5: detected capacity change from 0 to 221472
Sep 5 00:46:10.556871 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 5 00:46:10.557395 (sd-merge)[1274]: Merged extensions into '/usr'.
Sep 5 00:46:10.561659 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 00:46:10.561676 systemd[1]: Reloading...
Sep 5 00:46:10.610678 zram_generator::config[1304]: No configuration found.
Sep 5 00:46:10.712162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:46:10.715670 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 00:46:10.794263 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 00:46:10.794374 systemd[1]: Reloading finished in 232 ms.
Sep 5 00:46:10.828128 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 00:46:10.829867 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 00:46:10.845058 systemd[1]: Starting ensure-sysext.service...
Sep 5 00:46:10.846900 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:46:10.857863 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)...
Sep 5 00:46:10.857878 systemd[1]: Reloading...
Sep 5 00:46:10.869376 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 5 00:46:10.869711 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 5 00:46:10.870152 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 00:46:10.870531 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 00:46:10.871413 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 00:46:10.871803 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Sep 5 00:46:10.871883 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Sep 5 00:46:10.876297 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:46:10.876310 systemd-tmpfiles[1339]: Skipping /boot
Sep 5 00:46:10.890940 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:46:10.890955 systemd-tmpfiles[1339]: Skipping /boot
Sep 5 00:46:10.915679 zram_generator::config[1369]: No configuration found.
Sep 5 00:46:11.003372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:46:11.082973 systemd[1]: Reloading finished in 224 ms. Sep 5 00:46:11.108274 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:46:11.124390 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:46:11.133240 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 00:46:11.135554 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:46:11.137886 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:46:11.146806 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:46:11.149501 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:46:11.151832 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:46:11.156183 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:46:11.156348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:46:11.158062 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:46:11.161523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:46:11.164810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:46:11.165965 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:46:11.166063 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 5 00:46:11.166147 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:46:11.167328 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:46:11.169228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:46:11.169751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:46:11.178469 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:46:11.178761 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:46:11.180543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:46:11.180892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:46:11.185065 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:46:11.193684 systemd-udevd[1410]: Using default interface naming scheme 'v255'. Sep 5 00:46:11.197099 systemd[1]: Finished ensure-sysext.service. Sep 5 00:46:11.200118 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:46:11.200386 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:46:11.201605 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:46:11.204804 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:46:11.209520 augenrules[1441]: No rules Sep 5 00:46:11.213587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:46:11.216014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:46:11.217288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 5 00:46:11.217321 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:46:11.220420 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:46:11.221879 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:46:11.227112 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:46:11.228176 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:46:11.228799 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:46:11.230438 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:46:11.230720 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 00:46:11.232118 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:46:11.233985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:46:11.234214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:46:11.236252 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:46:11.242875 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:46:11.244346 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:46:11.244545 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:46:11.246037 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:46:11.246269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:46:11.262524 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Sep 5 00:46:11.270230 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:46:11.271301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:46:11.271373 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:46:11.271397 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:46:11.307852 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 5 00:46:11.324997 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:46:11.380823 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 00:46:11.382668 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 5 00:46:11.389672 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:46:11.397490 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:46:11.400576 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:46:11.421848 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:46:11.459546 systemd-networkd[1486]: lo: Link UP Sep 5 00:46:11.459558 systemd-networkd[1486]: lo: Gained carrier Sep 5 00:46:11.461118 systemd-networkd[1486]: Enumeration completed Sep 5 00:46:11.461209 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:46:11.461471 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 5 00:46:11.461484 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:46:11.462196 systemd-networkd[1486]: eth0: Link UP Sep 5 00:46:11.462342 systemd-networkd[1486]: eth0: Gained carrier Sep 5 00:46:11.462362 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:46:11.466826 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 5 00:46:11.473732 systemd-networkd[1486]: eth0: DHCPv4 address 10.0.0.4/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:46:11.474392 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:46:11.477378 systemd-resolved[1408]: Positive Trust Anchors: Sep 5 00:46:11.477394 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:46:11.477426 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:46:11.481257 systemd-resolved[1408]: Defaulting to hostname 'linux'. Sep 5 00:46:11.483690 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:46:11.484876 systemd[1]: Reached target network.target - Network. Sep 5 00:46:11.486758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 5 00:46:11.516119 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 5 00:46:11.522781 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:46:11.524078 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:46:11.525262 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:46:11.526513 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:46:11.527954 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 5 00:46:11.528605 systemd-timesyncd[1448]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:46:11.528686 systemd-timesyncd[1448]: Initial clock synchronization to Fri 2025-09-05 00:46:11.418145 UTC. Sep 5 00:46:11.529250 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:46:11.531108 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:46:11.531166 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:46:11.532144 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:46:11.535181 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:46:11.536533 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:46:11.537904 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:46:11.540661 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:46:11.546016 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:46:11.554031 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Sep 5 00:46:11.555538 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 5 00:46:11.556925 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 5 00:46:11.557768 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 5 00:46:11.558020 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 5 00:46:11.567553 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:46:11.569024 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 5 00:46:11.571552 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:46:11.580271 kernel: kvm_amd: TSC scaling supported Sep 5 00:46:11.580297 kernel: kvm_amd: Nested Virtualization enabled Sep 5 00:46:11.580310 kernel: kvm_amd: Nested Paging enabled Sep 5 00:46:11.580322 kernel: kvm_amd: LBR virtualization supported Sep 5 00:46:11.581803 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 5 00:46:11.581846 kernel: kvm_amd: Virtual GIF supported Sep 5 00:46:11.586007 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:46:11.587097 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:46:11.588218 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:46:11.588260 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:46:11.590437 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:46:11.595128 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:46:11.599768 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:46:11.605892 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:46:11.609336 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 5 00:46:11.610380 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:46:11.611876 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 5 00:46:11.614867 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:46:11.616808 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:46:11.620465 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:46:11.630255 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:46:11.631583 jq[1532]: false Sep 5 00:46:11.636882 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:46:11.637955 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing passwd entry cache Sep 5 00:46:11.637966 oslogin_cache_refresh[1534]: Refreshing passwd entry cache Sep 5 00:46:11.640667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:46:11.642629 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:46:11.648841 kernel: EDAC MC: Ver: 3.0.0 Sep 5 00:46:11.648873 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting users, quitting Sep 5 00:46:11.648873 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 00:46:11.648873 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing group entry cache Sep 5 00:46:11.648303 oslogin_cache_refresh[1534]: Failure getting users, quitting Sep 5 00:46:11.648325 oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 5 00:46:11.648372 oslogin_cache_refresh[1534]: Refreshing group entry cache Sep 5 00:46:11.649328 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:46:11.650805 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:46:11.657381 extend-filesystems[1533]: Found /dev/vda6 Sep 5 00:46:11.658298 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:46:11.658999 oslogin_cache_refresh[1534]: Failure getting groups, quitting Sep 5 00:46:11.659621 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting groups, quitting Sep 5 00:46:11.659621 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 00:46:11.659012 oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 00:46:11.663823 extend-filesystems[1533]: Found /dev/vda9 Sep 5 00:46:11.668685 extend-filesystems[1533]: Checking size of /dev/vda9 Sep 5 00:46:11.666274 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:46:11.667893 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:46:11.668240 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:46:11.668967 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 5 00:46:11.669226 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 5 00:46:11.671134 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:46:11.676249 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:46:11.680438 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 5 00:46:11.682637 jq[1550]: true Sep 5 00:46:11.680904 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:46:11.683370 extend-filesystems[1533]: Resized partition /dev/vda9 Sep 5 00:46:11.685122 extend-filesystems[1566]: resize2fs 1.47.2 (1-Jan-2025) Sep 5 00:46:11.691709 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:46:11.693367 jq[1564]: true Sep 5 00:46:11.709660 update_engine[1549]: I20250905 00:46:11.709105 1549 main.cc:92] Flatcar Update Engine starting Sep 5 00:46:11.716978 tar[1561]: linux-amd64/helm Sep 5 00:46:11.720348 (ntainerd)[1565]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:46:11.741664 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:46:11.768516 dbus-daemon[1530]: [system] SELinux support is enabled Sep 5 00:46:11.769013 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:46:11.771968 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:46:11.771968 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:46:11.771968 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:46:11.806638 extend-filesystems[1533]: Resized filesystem in /dev/vda9 Sep 5 00:46:11.818882 bash[1594]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:46:11.818991 update_engine[1549]: I20250905 00:46:11.774749 1549 update_check_scheduler.cc:74] Next update check in 3m4s Sep 5 00:46:11.778301 systemd-logind[1539]: Watching system buttons on /dev/input/event2 (Power Button) Sep 5 00:46:11.778320 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 5 00:46:11.809583 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:46:11.810190 systemd-logind[1539]: New seat seat0. 
Sep 5 00:46:11.810280 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:46:11.812451 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:46:11.813833 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:46:11.824033 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:46:11.829398 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 00:46:11.834304 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:46:11.836519 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 00:46:11.837897 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:46:11.838019 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:46:11.839375 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:46:11.839477 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:46:11.844855 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 5 00:46:11.891608 locksmithd[1607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:46:11.922756 containerd[1565]: time="2025-09-05T00:46:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 5 00:46:11.924912 containerd[1565]: time="2025-09-05T00:46:11.924862223Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 5 00:46:11.936632 containerd[1565]: time="2025-09-05T00:46:11.936468463Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.488µs" Sep 5 00:46:11.936632 containerd[1565]: time="2025-09-05T00:46:11.936506114Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 5 00:46:11.936632 containerd[1565]: time="2025-09-05T00:46:11.936525701Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 5 00:46:11.936961 containerd[1565]: time="2025-09-05T00:46:11.936941831Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 5 00:46:11.937026 containerd[1565]: time="2025-09-05T00:46:11.937013706Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 5 00:46:11.937087 containerd[1565]: time="2025-09-05T00:46:11.937075893Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 00:46:11.937200 containerd[1565]: time="2025-09-05T00:46:11.937184336Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 00:46:11.937248 containerd[1565]: time="2025-09-05T00:46:11.937236454Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 00:46:11.937574 containerd[1565]: time="2025-09-05T00:46:11.937555743Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 00:46:11.937624 containerd[1565]: time="2025-09-05T00:46:11.937612609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 00:46:11.938488 containerd[1565]: time="2025-09-05T00:46:11.937667893Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 00:46:11.938488 containerd[1565]: time="2025-09-05T00:46:11.937677772Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 5 00:46:11.938488 containerd[1565]: time="2025-09-05T00:46:11.937764885Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 5 00:46:11.938488 containerd[1565]: time="2025-09-05T00:46:11.937988144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 00:46:11.938488 containerd[1565]: time="2025-09-05T00:46:11.938013862Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 00:46:11.938488 containerd[1565]: time="2025-09-05T00:46:11.938023330Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 5 00:46:11.938488 containerd[1565]: time="2025-09-05T00:46:11.938055059Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 5 00:46:11.938488 containerd[1565]: time="2025-09-05T00:46:11.938249895Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 5 00:46:11.938488 containerd[1565]: time="2025-09-05T00:46:11.938317872Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:46:11.943577 containerd[1565]: time="2025-09-05T00:46:11.943547562Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 5 00:46:11.943614 containerd[1565]: time="2025-09-05T00:46:11.943602936Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 5 00:46:11.943634 containerd[1565]: time="2025-09-05T00:46:11.943622813Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 5 00:46:11.943666 containerd[1565]: time="2025-09-05T00:46:11.943635938Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 5 00:46:11.943717 containerd[1565]: time="2025-09-05T00:46:11.943698846Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 5 00:46:11.943717 containerd[1565]: time="2025-09-05T00:46:11.943714254Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 5 00:46:11.943769 containerd[1565]: time="2025-09-05T00:46:11.943726197Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 5 00:46:11.943769 containerd[1565]: time="2025-09-05T00:46:11.943738760Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 5 00:46:11.943769 containerd[1565]: time="2025-09-05T00:46:11.943754941Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Sep 5 00:46:11.943769 containerd[1565]: time="2025-09-05T00:46:11.943765320Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 5 00:46:11.943846 containerd[1565]: time="2025-09-05T00:46:11.943774578Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 5 00:46:11.943846 containerd[1565]: time="2025-09-05T00:46:11.943788153Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 5 00:46:11.943926 containerd[1565]: time="2025-09-05T00:46:11.943905082Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 5 00:46:11.943948 containerd[1565]: time="2025-09-05T00:46:11.943927995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 5 00:46:11.943967 containerd[1565]: time="2025-09-05T00:46:11.943947502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 5 00:46:11.943967 containerd[1565]: time="2025-09-05T00:46:11.943958683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 5 00:46:11.944003 containerd[1565]: time="2025-09-05T00:46:11.943969303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 5 00:46:11.944003 containerd[1565]: time="2025-09-05T00:46:11.943980444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 5 00:46:11.944003 containerd[1565]: time="2025-09-05T00:46:11.943991775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 5 00:46:11.944003 containerd[1565]: time="2025-09-05T00:46:11.944001904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 5 00:46:11.944082 containerd[1565]: 
time="2025-09-05T00:46:11.944014588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 5 00:46:11.944082 containerd[1565]: time="2025-09-05T00:46:11.944025338Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 5 00:46:11.944082 containerd[1565]: time="2025-09-05T00:46:11.944035888Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 5 00:46:11.944139 containerd[1565]: time="2025-09-05T00:46:11.944090921Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 5 00:46:11.944139 containerd[1565]: time="2025-09-05T00:46:11.944104476Z" level=info msg="Start snapshots syncer" Sep 5 00:46:11.944139 containerd[1565]: time="2025-09-05T00:46:11.944132729Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 5 00:46:11.944376 containerd[1565]: time="2025-09-05T00:46:11.944337734Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 5 00:46:11.944494 containerd[1565]: time="2025-09-05T00:46:11.944387527Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 5 00:46:11.944494 containerd[1565]: time="2025-09-05T00:46:11.944455795Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 5 00:46:11.944588 containerd[1565]: time="2025-09-05T00:46:11.944564349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 5 00:46:11.944616 containerd[1565]: time="2025-09-05T00:46:11.944599014Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 5 00:46:11.944616 containerd[1565]: time="2025-09-05T00:46:11.944611077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 5 00:46:11.944664 containerd[1565]: time="2025-09-05T00:46:11.944622979Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 5 00:46:11.944664 containerd[1565]: time="2025-09-05T00:46:11.944633769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 5 00:46:11.944740 containerd[1565]: time="2025-09-05T00:46:11.944721263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 5 00:46:11.944740 containerd[1565]: time="2025-09-05T00:46:11.944737935Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 5 00:46:11.944807 containerd[1565]: time="2025-09-05T00:46:11.944782298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 5 00:46:11.944807 containerd[1565]: time="2025-09-05T00:46:11.944798789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 5 00:46:11.944854 containerd[1565]: time="2025-09-05T00:46:11.944808797Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 5 00:46:11.944874 containerd[1565]: time="2025-09-05T00:46:11.944861627Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 00:46:11.944895 containerd[1565]: time="2025-09-05T00:46:11.944878618Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 00:46:11.944895 containerd[1565]: time="2025-09-05T00:46:11.944886844Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 00:46:11.944964 containerd[1565]: time="2025-09-05T00:46:11.944896071Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 00:46:11.944964 containerd[1565]: time="2025-09-05T00:46:11.944956905Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 5 00:46:11.945003 containerd[1565]: time="2025-09-05T00:46:11.944971623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 5 00:46:11.945003 containerd[1565]: time="2025-09-05T00:46:11.944982373Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 5 00:46:11.945003 containerd[1565]: time="2025-09-05T00:46:11.944999786Z" level=info msg="runtime interface created" Sep 5 00:46:11.945056 containerd[1565]: time="2025-09-05T00:46:11.945006078Z" level=info msg="created NRI interface" Sep 5 00:46:11.945056 containerd[1565]: time="2025-09-05T00:46:11.945015134Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 5 00:46:11.945056 containerd[1565]: time="2025-09-05T00:46:11.945024903Z" level=info msg="Connect containerd service" Sep 5 00:46:11.945056 containerd[1565]: time="2025-09-05T00:46:11.945044670Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:46:11.945943 containerd[1565]: 
time="2025-09-05T00:46:11.945909312Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:46:12.031103 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:46:12.040051 containerd[1565]: time="2025-09-05T00:46:12.039549002Z" level=info msg="Start subscribing containerd event" Sep 5 00:46:12.040151 containerd[1565]: time="2025-09-05T00:46:12.040062040Z" level=info msg="Start recovering state" Sep 5 00:46:12.040151 containerd[1565]: time="2025-09-05T00:46:12.040138258Z" level=info msg="Start event monitor" Sep 5 00:46:12.040151 containerd[1565]: time="2025-09-05T00:46:12.040151479Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:46:12.040207 containerd[1565]: time="2025-09-05T00:46:12.040158366Z" level=info msg="Start streaming server" Sep 5 00:46:12.040207 containerd[1565]: time="2025-09-05T00:46:12.040171734Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 5 00:46:12.040207 containerd[1565]: time="2025-09-05T00:46:12.040183749Z" level=info msg="runtime interface starting up..." Sep 5 00:46:12.040207 containerd[1565]: time="2025-09-05T00:46:12.040189242Z" level=info msg="starting plugins..." Sep 5 00:46:12.040207 containerd[1565]: time="2025-09-05T00:46:12.040201751Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 5 00:46:12.040558 containerd[1565]: time="2025-09-05T00:46:12.040538165Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:46:12.040655 containerd[1565]: time="2025-09-05T00:46:12.040620085Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:46:12.040868 systemd[1]: Started containerd.service - containerd container runtime. 
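The CNI load error above ("no network config found in /etc/cni/net.d") is expected on first boot before any pod network add-on has been installed; the CRI plugin retries via its conf syncer ("Start cni network conf syncer for default") once a config appears. For illustration only (the name, bridge, and subnet below are assumptions, not taken from this log), a minimal bridge conflist that would satisfy the loader could look like:

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

Dropping a file like this into /etc/cni/net.d (the confDir shown in the CRI config dump earlier in this log) would clear the error, though in a kubeadm-style cluster the network add-on normally installs its own conflist.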
Sep 5 00:46:12.043031 containerd[1565]: time="2025-09-05T00:46:12.043016576Z" level=info msg="containerd successfully booted in 0.120777s" Sep 5 00:46:12.054419 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:46:12.057326 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:46:12.079693 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:46:12.080000 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:46:12.083269 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:46:12.108150 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:46:12.110882 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:46:12.112921 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 00:46:12.114254 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:46:12.163470 tar[1561]: linux-amd64/LICENSE Sep 5 00:46:12.163562 tar[1561]: linux-amd64/README.md Sep 5 00:46:12.182606 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:46:12.892795 systemd-networkd[1486]: eth0: Gained IPv6LL Sep 5 00:46:12.896095 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:46:12.897911 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:46:12.900519 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:46:12.903040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:46:12.905191 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:46:12.939857 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:46:12.941728 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:46:12.941992 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Sep 5 00:46:12.944331 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:46:13.610656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:46:13.612178 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:46:13.613433 systemd[1]: Startup finished in 2.764s (kernel) + 5.952s (initrd) + 4.096s (userspace) = 12.813s. Sep 5 00:46:13.641964 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:46:14.040514 kubelet[1672]: E0905 00:46:14.040431 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:46:14.044304 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:46:14.044496 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:46:14.044935 systemd[1]: kubelet.service: Consumed 977ms CPU time, 265.8M memory peak. Sep 5 00:46:16.518575 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:46:16.519810 systemd[1]: Started sshd@0-10.0.0.4:22-10.0.0.1:58540.service - OpenSSH per-connection server daemon (10.0.0.1:58540). Sep 5 00:46:16.565959 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 58540 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:46:16.567563 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:46:16.578906 systemd-logind[1539]: New session 1 of user core. Sep 5 00:46:16.580287 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
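The kubelet exit above is caused by a missing /var/lib/kubelet/config.yaml; kubeadm normally writes that file during node bootstrap, and until it exists the unit fails with status=1 and is restarted by systemd (the "Scheduled restart job" later in this log). As a sketch only (every value here is an assumption, not recovered from this log), a minimal KubeletConfiguration has this shape:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd matches the cgroup driver the CRI runtime reports later in this log
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
```

In practice this file should come from kubeadm rather than be written by hand, since kubeadm also provisions the client certificates and bootstrap kubeconfig the kubelet needs.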
Sep 5 00:46:16.581420 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:46:16.609591 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:46:16.611996 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:46:16.629809 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:46:16.632121 systemd-logind[1539]: New session c1 of user core. Sep 5 00:46:16.784989 systemd[1690]: Queued start job for default target default.target. Sep 5 00:46:16.802811 systemd[1690]: Created slice app.slice - User Application Slice. Sep 5 00:46:16.802834 systemd[1690]: Reached target paths.target - Paths. Sep 5 00:46:16.802872 systemd[1690]: Reached target timers.target - Timers. Sep 5 00:46:16.804248 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:46:16.814282 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:46:16.814344 systemd[1690]: Reached target sockets.target - Sockets. Sep 5 00:46:16.814384 systemd[1690]: Reached target basic.target - Basic System. Sep 5 00:46:16.814440 systemd[1690]: Reached target default.target - Main User Target. Sep 5 00:46:16.814477 systemd[1690]: Startup finished in 176ms. Sep 5 00:46:16.814949 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:46:16.816590 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:46:16.882136 systemd[1]: Started sshd@1-10.0.0.4:22-10.0.0.1:58548.service - OpenSSH per-connection server daemon (10.0.0.1:58548). Sep 5 00:46:16.928441 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 58548 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:46:16.929741 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:46:16.934084 systemd-logind[1539]: New session 2 of user core. 
Sep 5 00:46:16.947770 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:46:16.998990 sshd[1703]: Connection closed by 10.0.0.1 port 58548 Sep 5 00:46:16.999259 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Sep 5 00:46:17.014943 systemd[1]: sshd@1-10.0.0.4:22-10.0.0.1:58548.service: Deactivated successfully. Sep 5 00:46:17.016339 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:46:17.017073 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:46:17.019750 systemd[1]: Started sshd@2-10.0.0.4:22-10.0.0.1:58554.service - OpenSSH per-connection server daemon (10.0.0.1:58554). Sep 5 00:46:17.020272 systemd-logind[1539]: Removed session 2. Sep 5 00:46:17.076830 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 58554 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:46:17.078045 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:46:17.082245 systemd-logind[1539]: New session 3 of user core. Sep 5 00:46:17.094750 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:46:17.143362 sshd[1711]: Connection closed by 10.0.0.1 port 58554 Sep 5 00:46:17.143676 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Sep 5 00:46:17.153826 systemd[1]: sshd@2-10.0.0.4:22-10.0.0.1:58554.service: Deactivated successfully. Sep 5 00:46:17.155399 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:46:17.156132 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:46:17.158529 systemd[1]: Started sshd@3-10.0.0.4:22-10.0.0.1:58564.service - OpenSSH per-connection server daemon (10.0.0.1:58564). Sep 5 00:46:17.159233 systemd-logind[1539]: Removed session 3. 
Sep 5 00:46:17.208199 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 58564 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:46:17.209662 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:46:17.213685 systemd-logind[1539]: New session 4 of user core. Sep 5 00:46:17.226759 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:46:17.279337 sshd[1720]: Connection closed by 10.0.0.1 port 58564 Sep 5 00:46:17.279687 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Sep 5 00:46:17.288127 systemd[1]: sshd@3-10.0.0.4:22-10.0.0.1:58564.service: Deactivated successfully. Sep 5 00:46:17.289911 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:46:17.290615 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:46:17.293839 systemd[1]: Started sshd@4-10.0.0.4:22-10.0.0.1:58566.service - OpenSSH per-connection server daemon (10.0.0.1:58566). Sep 5 00:46:17.294457 systemd-logind[1539]: Removed session 4. Sep 5 00:46:17.343002 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 58566 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:46:17.344380 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:46:17.348284 systemd-logind[1539]: New session 5 of user core. Sep 5 00:46:17.357777 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 5 00:46:17.413285 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:46:17.413592 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:46:17.433147 sudo[1729]: pam_unix(sudo:session): session closed for user root Sep 5 00:46:17.434633 sshd[1728]: Connection closed by 10.0.0.1 port 58566 Sep 5 00:46:17.434988 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Sep 5 00:46:17.450293 systemd[1]: sshd@4-10.0.0.4:22-10.0.0.1:58566.service: Deactivated successfully. Sep 5 00:46:17.452045 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:46:17.452782 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:46:17.455468 systemd[1]: Started sshd@5-10.0.0.4:22-10.0.0.1:58572.service - OpenSSH per-connection server daemon (10.0.0.1:58572). Sep 5 00:46:17.456019 systemd-logind[1539]: Removed session 5. Sep 5 00:46:17.507698 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 58572 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:46:17.508928 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:46:17.512940 systemd-logind[1539]: New session 6 of user core. Sep 5 00:46:17.522758 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 5 00:46:17.574529 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:46:17.574849 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:46:17.734215 sudo[1739]: pam_unix(sudo:session): session closed for user root Sep 5 00:46:17.740076 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 5 00:46:17.740355 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:46:17.749545 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 00:46:17.800462 augenrules[1761]: No rules Sep 5 00:46:17.802106 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:46:17.802350 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 00:46:17.803471 sudo[1738]: pam_unix(sudo:session): session closed for user root Sep 5 00:46:17.804984 sshd[1737]: Connection closed by 10.0.0.1 port 58572 Sep 5 00:46:17.805317 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Sep 5 00:46:17.817162 systemd[1]: sshd@5-10.0.0.4:22-10.0.0.1:58572.service: Deactivated successfully. Sep 5 00:46:17.818951 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:46:17.819758 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:46:17.822491 systemd[1]: Started sshd@6-10.0.0.4:22-10.0.0.1:58584.service - OpenSSH per-connection server daemon (10.0.0.1:58584). Sep 5 00:46:17.823059 systemd-logind[1539]: Removed session 6. Sep 5 00:46:17.878225 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 58584 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:46:17.879510 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:46:17.883411 systemd-logind[1539]: New session 7 of user core. 
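In the session above, the two sudo commands remove /etc/audit/rules.d/80-selinux.rules and 99-default.rules and then restart audit-rules, which is why augenrules reports "No rules". For context only (the watches and keys below are hypothetical examples, not the contents of the deleted files), an auditd rules fragment in that directory follows the auditctl syntax:

```
# /etc/audit/rules.d/99-default.rules (hypothetical example)
-D                                   # flush any previously loaded rules
-b 8192                              # set the kernel audit backlog buffer
-w /etc/passwd -p wa -k identity     # watch writes/attr changes to /etc/passwd
-w /var/lib/kubelet -p wa -k kubelet-state
```

Files under /etc/audit/rules.d/ are concatenated by augenrules in lexical order into the active rule set, so deleting them and reloading legitimately yields an empty ruleset rather than an error.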
Sep 5 00:46:17.892746 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:46:17.944880 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:46:17.945235 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:46:18.237258 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:46:18.247960 (dockerd)[1794]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:46:18.457956 dockerd[1794]: time="2025-09-05T00:46:18.457889411Z" level=info msg="Starting up" Sep 5 00:46:18.459405 dockerd[1794]: time="2025-09-05T00:46:18.459370678Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 5 00:46:18.813745 dockerd[1794]: time="2025-09-05T00:46:18.813692336Z" level=info msg="Loading containers: start." Sep 5 00:46:18.823678 kernel: Initializing XFRM netlink socket Sep 5 00:46:19.047832 systemd-networkd[1486]: docker0: Link UP Sep 5 00:46:19.052127 dockerd[1794]: time="2025-09-05T00:46:19.052089069Z" level=info msg="Loading containers: done." Sep 5 00:46:19.065194 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2267910867-merged.mount: Deactivated successfully. 
Sep 5 00:46:19.066507 dockerd[1794]: time="2025-09-05T00:46:19.066456948Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:46:19.066563 dockerd[1794]: time="2025-09-05T00:46:19.066548601Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 5 00:46:19.066702 dockerd[1794]: time="2025-09-05T00:46:19.066684467Z" level=info msg="Initializing buildkit" Sep 5 00:46:19.094095 dockerd[1794]: time="2025-09-05T00:46:19.094055665Z" level=info msg="Completed buildkit initialization" Sep 5 00:46:19.100101 dockerd[1794]: time="2025-09-05T00:46:19.100062592Z" level=info msg="Daemon has completed initialization" Sep 5 00:46:19.100174 dockerd[1794]: time="2025-09-05T00:46:19.100118383Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:46:19.100324 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:46:20.012937 containerd[1565]: time="2025-09-05T00:46:20.012899520Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 00:46:21.121037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009987615.mount: Deactivated successfully. 
Sep 5 00:46:21.963554 containerd[1565]: time="2025-09-05T00:46:21.963483083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:21.964311 containerd[1565]: time="2025-09-05T00:46:21.964255370Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 5 00:46:21.965507 containerd[1565]: time="2025-09-05T00:46:21.965448508Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:21.967736 containerd[1565]: time="2025-09-05T00:46:21.967688966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:21.968567 containerd[1565]: time="2025-09-05T00:46:21.968519011Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 1.95558002s" Sep 5 00:46:21.968621 containerd[1565]: time="2025-09-05T00:46:21.968568269Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 5 00:46:21.969188 containerd[1565]: time="2025-09-05T00:46:21.969164539Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 00:46:23.356342 containerd[1565]: time="2025-09-05T00:46:23.356272334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:23.363256 containerd[1565]: time="2025-09-05T00:46:23.363223338Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 5 00:46:23.364583 containerd[1565]: time="2025-09-05T00:46:23.364540171Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:23.366870 containerd[1565]: time="2025-09-05T00:46:23.366822584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:23.367673 containerd[1565]: time="2025-09-05T00:46:23.367630190Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.398439645s" Sep 5 00:46:23.367710 containerd[1565]: time="2025-09-05T00:46:23.367671846Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 5 00:46:23.368124 containerd[1565]: time="2025-09-05T00:46:23.368105810Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 00:46:24.234229 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:46:24.236369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:46:24.774501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 00:46:24.788048 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:46:24.971348 containerd[1565]: time="2025-09-05T00:46:24.971289309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:24.972150 containerd[1565]: time="2025-09-05T00:46:24.972091263Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 5 00:46:24.973918 containerd[1565]: time="2025-09-05T00:46:24.973469435Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:24.975917 containerd[1565]: time="2025-09-05T00:46:24.975884807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:24.976876 containerd[1565]: time="2025-09-05T00:46:24.976844288Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.608658559s" Sep 5 00:46:24.976876 containerd[1565]: time="2025-09-05T00:46:24.976872652Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 5 00:46:24.977123 kubelet[2073]: E0905 00:46:24.977080 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: 
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:46:24.977594 containerd[1565]: time="2025-09-05T00:46:24.977570849Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 00:46:24.984817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:46:24.985066 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:46:24.985535 systemd[1]: kubelet.service: Consumed 216ms CPU time, 111M memory peak. Sep 5 00:46:25.897146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4071518770.mount: Deactivated successfully. Sep 5 00:46:26.596010 containerd[1565]: time="2025-09-05T00:46:26.595928066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:26.597447 containerd[1565]: time="2025-09-05T00:46:26.597386898Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 5 00:46:26.600883 containerd[1565]: time="2025-09-05T00:46:26.599076151Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:26.602827 containerd[1565]: time="2025-09-05T00:46:26.602750304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:26.603428 containerd[1565]: time="2025-09-05T00:46:26.603376996Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 1.625779873s" Sep 5 00:46:26.603428 containerd[1565]: time="2025-09-05T00:46:26.603412117Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 5 00:46:26.604131 containerd[1565]: time="2025-09-05T00:46:26.604062492Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 00:46:27.189830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503085620.mount: Deactivated successfully. Sep 5 00:46:27.834177 containerd[1565]: time="2025-09-05T00:46:27.834105929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:27.835260 containerd[1565]: time="2025-09-05T00:46:27.834966881Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 5 00:46:27.836987 containerd[1565]: time="2025-09-05T00:46:27.836961389Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:27.839506 containerd[1565]: time="2025-09-05T00:46:27.839443254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:27.840306 containerd[1565]: time="2025-09-05T00:46:27.840277825Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.236167114s" Sep 5 00:46:27.840345 containerd[1565]: time="2025-09-05T00:46:27.840308496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 5 00:46:27.840761 containerd[1565]: time="2025-09-05T00:46:27.840735653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:46:28.316535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2763974440.mount: Deactivated successfully. Sep 5 00:46:28.425087 containerd[1565]: time="2025-09-05T00:46:28.425029611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:46:28.425953 containerd[1565]: time="2025-09-05T00:46:28.425916570Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:46:28.432622 containerd[1565]: time="2025-09-05T00:46:28.432574445Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:46:28.452340 containerd[1565]: time="2025-09-05T00:46:28.452303473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:46:28.452936 containerd[1565]: time="2025-09-05T00:46:28.452911333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 612.006607ms" Sep 5 00:46:28.452936 containerd[1565]: time="2025-09-05T00:46:28.452934978Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:46:28.453342 containerd[1565]: time="2025-09-05T00:46:28.453294747Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 5 00:46:28.998441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582186137.mount: Deactivated successfully. Sep 5 00:46:30.968071 containerd[1565]: time="2025-09-05T00:46:30.968018174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:30.968753 containerd[1565]: time="2025-09-05T00:46:30.968697902Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 5 00:46:30.969878 containerd[1565]: time="2025-09-05T00:46:30.969819477Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:30.972204 containerd[1565]: time="2025-09-05T00:46:30.972171778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:30.973262 containerd[1565]: time="2025-09-05T00:46:30.973219297Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 2.51989884s" Sep 5 00:46:30.973262 containerd[1565]: time="2025-09-05T00:46:30.973257972Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 5 00:46:33.398631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:46:33.398856 systemd[1]: kubelet.service: Consumed 216ms CPU time, 111M memory peak. Sep 5 00:46:33.401060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:46:33.424635 systemd[1]: Reload requested from client PID 2232 ('systemctl') (unit session-7.scope)... Sep 5 00:46:33.424664 systemd[1]: Reloading... Sep 5 00:46:33.512221 zram_generator::config[2274]: No configuration found. Sep 5 00:46:33.755185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:46:33.871335 systemd[1]: Reloading finished in 446 ms. Sep 5 00:46:33.939292 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:46:33.939387 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:46:33.939684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:46:33.939722 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.3M memory peak. Sep 5 00:46:33.941238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:46:34.128564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:46:34.132427 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:46:34.164821 kubelet[2322]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:46:34.164821 kubelet[2322]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 00:46:34.164821 kubelet[2322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:46:34.165184 kubelet[2322]: I0905 00:46:34.164866 2322 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:46:34.386829 kubelet[2322]: I0905 00:46:34.386674 2322 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 00:46:34.386987 kubelet[2322]: I0905 00:46:34.386966 2322 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:46:34.387408 kubelet[2322]: I0905 00:46:34.387388 2322 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 00:46:34.405928 kubelet[2322]: E0905 00:46:34.405891 2322 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:46:34.406671 kubelet[2322]: I0905 00:46:34.406636 2322 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:46:34.411791 kubelet[2322]: I0905 00:46:34.411767 2322 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 00:46:34.417477 kubelet[2322]: I0905 00:46:34.417450 
2322 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:46:34.418026 kubelet[2322]: I0905 00:46:34.417998 2322 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 00:46:34.418160 kubelet[2322]: I0905 00:46:34.418137 2322 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:46:34.418303 kubelet[2322]: I0905 00:46:34.418158 2322 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"C
PUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:46:34.418403 kubelet[2322]: I0905 00:46:34.418312 2322 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:46:34.418403 kubelet[2322]: I0905 00:46:34.418320 2322 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 00:46:34.418445 kubelet[2322]: I0905 00:46:34.418413 2322 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:46:34.420203 kubelet[2322]: I0905 00:46:34.420184 2322 kubelet.go:408] "Attempting to sync node with API server" Sep 5 00:46:34.420203 kubelet[2322]: I0905 00:46:34.420203 2322 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:46:34.420282 kubelet[2322]: I0905 00:46:34.420232 2322 kubelet.go:314] "Adding apiserver pod source" Sep 5 00:46:34.420282 kubelet[2322]: I0905 00:46:34.420249 2322 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:46:34.422271 kubelet[2322]: I0905 00:46:34.422244 2322 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 5 00:46:34.422571 kubelet[2322]: I0905 00:46:34.422550 2322 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:46:34.423029 kubelet[2322]: W0905 00:46:34.423002 2322 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 5 00:46:34.423743 kubelet[2322]: W0905 00:46:34.423686 2322 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.4:6443: connect: connection refused Sep 5 00:46:34.423743 kubelet[2322]: E0905 00:46:34.423737 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:46:34.425745 kubelet[2322]: I0905 00:46:34.425585 2322 server.go:1274] "Started kubelet" Sep 5 00:46:34.425913 kubelet[2322]: I0905 00:46:34.425864 2322 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:46:34.426968 kubelet[2322]: I0905 00:46:34.426939 2322 server.go:449] "Adding debug handlers to kubelet server" Sep 5 00:46:34.429391 kubelet[2322]: I0905 00:46:34.428221 2322 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:46:34.429391 kubelet[2322]: I0905 00:46:34.428463 2322 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:46:34.431663 kubelet[2322]: I0905 00:46:34.431613 2322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:46:34.432813 kubelet[2322]: I0905 00:46:34.432795 2322 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:46:34.432929 kubelet[2322]: I0905 00:46:34.432915 2322 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 00:46:34.433016 kubelet[2322]: E0905 00:46:34.433001 2322 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"localhost\" not found" Sep 5 00:46:34.433262 kubelet[2322]: I0905 00:46:34.433246 2322 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 00:46:34.433301 kubelet[2322]: I0905 00:46:34.433296 2322 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:46:34.433986 kubelet[2322]: E0905 00:46:34.432007 2322 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.4:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.4:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623c61944e4f16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:46:34.425569046 +0000 UTC m=+0.289450424,LastTimestamp:2025-09-05 00:46:34.425569046 +0000 UTC m=+0.289450424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:46:34.434720 kubelet[2322]: W0905 00:46:34.434667 2322 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.4:6443: connect: connection refused Sep 5 00:46:34.434768 kubelet[2322]: E0905 00:46:34.434736 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:46:34.435571 kubelet[2322]: E0905 00:46:34.435533 2322 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="200ms" Sep 5 00:46:34.435571 kubelet[2322]: W0905 00:46:34.435542 2322 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.4:6443: connect: connection refused Sep 5 00:46:34.435659 kubelet[2322]: E0905 00:46:34.435587 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:46:34.436451 kubelet[2322]: E0905 00:46:34.436427 2322 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:46:34.436543 kubelet[2322]: I0905 00:46:34.436536 2322 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:46:34.436568 kubelet[2322]: I0905 00:46:34.436546 2322 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:46:34.436624 kubelet[2322]: I0905 00:46:34.436609 2322 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:46:34.448826 kubelet[2322]: I0905 00:46:34.448164 2322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:46:34.449607 kubelet[2322]: I0905 00:46:34.449306 2322 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:46:34.449607 kubelet[2322]: I0905 00:46:34.449327 2322 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:46:34.449607 kubelet[2322]: I0905 00:46:34.449346 2322 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 00:46:34.449607 kubelet[2322]: E0905 00:46:34.449385 2322 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:46:34.453189 kubelet[2322]: W0905 00:46:34.453083 2322 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.4:6443: connect: connection refused Sep 5 00:46:34.453189 kubelet[2322]: E0905 00:46:34.453128 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:46:34.453337 kubelet[2322]: I0905 00:46:34.453316 2322 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:46:34.453337 kubelet[2322]: I0905 00:46:34.453330 2322 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:46:34.453384 kubelet[2322]: I0905 00:46:34.453345 2322 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:46:34.533432 kubelet[2322]: E0905 00:46:34.533357 2322 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:46:34.549704 kubelet[2322]: E0905 00:46:34.549671 2322 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:46:34.633874 kubelet[2322]: E0905 00:46:34.633839 2322 kubelet_node_status.go:453] "Error getting 
the current node from lister" err="node \"localhost\" not found" Sep 5 00:46:34.636228 kubelet[2322]: E0905 00:46:34.636190 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="400ms" Sep 5 00:46:34.734149 kubelet[2322]: E0905 00:46:34.734106 2322 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:46:34.750294 kubelet[2322]: E0905 00:46:34.750256 2322 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:46:34.834562 kubelet[2322]: E0905 00:46:34.834518 2322 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:46:34.935294 kubelet[2322]: E0905 00:46:34.935262 2322 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:46:34.965363 kubelet[2322]: I0905 00:46:34.965316 2322 policy_none.go:49] "None policy: Start" Sep 5 00:46:34.965941 kubelet[2322]: I0905 00:46:34.965926 2322 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:46:34.965996 kubelet[2322]: I0905 00:46:34.965964 2322 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:46:34.972102 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:46:34.985364 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:46:34.988381 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 5 00:46:34.999406 kubelet[2322]: I0905 00:46:34.999374 2322 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:46:34.999584 kubelet[2322]: I0905 00:46:34.999555 2322 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:46:34.999628 kubelet[2322]: I0905 00:46:34.999571 2322 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:46:34.999997 kubelet[2322]: I0905 00:46:34.999966 2322 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:46:35.000866 kubelet[2322]: E0905 00:46:35.000846 2322 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:46:35.036847 kubelet[2322]: E0905 00:46:35.036824 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="800ms" Sep 5 00:46:35.101617 kubelet[2322]: I0905 00:46:35.101599 2322 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:46:35.101901 kubelet[2322]: E0905 00:46:35.101879 2322 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.4:6443/api/v1/nodes\": dial tcp 10.0.0.4:6443: connect: connection refused" node="localhost" Sep 5 00:46:35.158099 systemd[1]: Created slice kubepods-burstable-podcbd28f7cee82f32e037d685ace799b61.slice - libcontainer container kubepods-burstable-podcbd28f7cee82f32e037d685ace799b61.slice. Sep 5 00:46:35.171442 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. 
Sep 5 00:46:35.197257 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 5 00:46:35.237178 kubelet[2322]: I0905 00:46:35.237111 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbd28f7cee82f32e037d685ace799b61-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cbd28f7cee82f32e037d685ace799b61\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:46:35.237178 kubelet[2322]: I0905 00:46:35.237160 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbd28f7cee82f32e037d685ace799b61-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cbd28f7cee82f32e037d685ace799b61\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:46:35.237459 kubelet[2322]: I0905 00:46:35.237199 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:46:35.237459 kubelet[2322]: I0905 00:46:35.237219 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:46:35.237459 kubelet[2322]: I0905 00:46:35.237239 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:46:35.237459 kubelet[2322]: I0905 00:46:35.237256 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbd28f7cee82f32e037d685ace799b61-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cbd28f7cee82f32e037d685ace799b61\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:46:35.237459 kubelet[2322]: I0905 00:46:35.237276 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:46:35.237565 kubelet[2322]: I0905 00:46:35.237293 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:46:35.237565 kubelet[2322]: I0905 00:46:35.237310 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:46:35.302941 kubelet[2322]: I0905 00:46:35.302927 2322 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:46:35.303196 kubelet[2322]: E0905 
00:46:35.303175 2322 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.4:6443/api/v1/nodes\": dial tcp 10.0.0.4:6443: connect: connection refused" node="localhost" Sep 5 00:46:35.470534 kubelet[2322]: E0905 00:46:35.470498 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:35.470914 containerd[1565]: time="2025-09-05T00:46:35.470885157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cbd28f7cee82f32e037d685ace799b61,Namespace:kube-system,Attempt:0,}" Sep 5 00:46:35.488834 containerd[1565]: time="2025-09-05T00:46:35.488717342Z" level=info msg="connecting to shim 21d113c372eff04b546ec6ed350886b6748f0f09892a9dcfa58ed495a7150996" address="unix:///run/containerd/s/9f59014e71e83d766731692043b6dba00b76d3b99a5576feab00d59be813529b" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:46:35.490446 kubelet[2322]: W0905 00:46:35.490419 2322 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.4:6443: connect: connection refused Sep 5 00:46:35.490514 kubelet[2322]: E0905 00:46:35.490463 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:46:35.495933 kubelet[2322]: E0905 00:46:35.495892 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:35.496880 containerd[1565]: 
time="2025-09-05T00:46:35.496849750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 5 00:46:35.499470 kubelet[2322]: E0905 00:46:35.499450 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:35.500083 containerd[1565]: time="2025-09-05T00:46:35.499979482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 5 00:46:35.513851 systemd[1]: Started cri-containerd-21d113c372eff04b546ec6ed350886b6748f0f09892a9dcfa58ed495a7150996.scope - libcontainer container 21d113c372eff04b546ec6ed350886b6748f0f09892a9dcfa58ed495a7150996. Sep 5 00:46:35.526804 containerd[1565]: time="2025-09-05T00:46:35.526601658Z" level=info msg="connecting to shim 04ec41b9a03efb2ac6558b0cf03fd922ead65edcf201c6094e044acdb0b4666f" address="unix:///run/containerd/s/112fcff992d593e6c751c7dd2e6c02c98ffef11d8aae9660d0f22b6be6b5957d" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:46:35.532414 containerd[1565]: time="2025-09-05T00:46:35.532375730Z" level=info msg="connecting to shim b9f0a093b48691eddc5725662c295d265a0009602a2a85c96365bda37c269102" address="unix:///run/containerd/s/1ef849aa14fef50955d005e95f1a491b48f289ba1b78b213e67b74636d325a49" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:46:35.549817 systemd[1]: Started cri-containerd-04ec41b9a03efb2ac6558b0cf03fd922ead65edcf201c6094e044acdb0b4666f.scope - libcontainer container 04ec41b9a03efb2ac6558b0cf03fd922ead65edcf201c6094e044acdb0b4666f. Sep 5 00:46:35.560779 systemd[1]: Started cri-containerd-b9f0a093b48691eddc5725662c295d265a0009602a2a85c96365bda37c269102.scope - libcontainer container b9f0a093b48691eddc5725662c295d265a0009602a2a85c96365bda37c269102. 
Sep 5 00:46:35.567202 containerd[1565]: time="2025-09-05T00:46:35.567159211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cbd28f7cee82f32e037d685ace799b61,Namespace:kube-system,Attempt:0,} returns sandbox id \"21d113c372eff04b546ec6ed350886b6748f0f09892a9dcfa58ed495a7150996\"" Sep 5 00:46:35.568435 kubelet[2322]: E0905 00:46:35.568402 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:35.570470 containerd[1565]: time="2025-09-05T00:46:35.570441411Z" level=info msg="CreateContainer within sandbox \"21d113c372eff04b546ec6ed350886b6748f0f09892a9dcfa58ed495a7150996\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:46:35.578877 containerd[1565]: time="2025-09-05T00:46:35.578844734Z" level=info msg="Container 8aad383eefe19765a90d1105a40d86af21628194fb277a243b9ea9e6fe45a8fd: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:46:35.587907 containerd[1565]: time="2025-09-05T00:46:35.587855709Z" level=info msg="CreateContainer within sandbox \"21d113c372eff04b546ec6ed350886b6748f0f09892a9dcfa58ed495a7150996\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8aad383eefe19765a90d1105a40d86af21628194fb277a243b9ea9e6fe45a8fd\"" Sep 5 00:46:35.589584 containerd[1565]: time="2025-09-05T00:46:35.589539822Z" level=info msg="StartContainer for \"8aad383eefe19765a90d1105a40d86af21628194fb277a243b9ea9e6fe45a8fd\"" Sep 5 00:46:35.591386 containerd[1565]: time="2025-09-05T00:46:35.591312935Z" level=info msg="connecting to shim 8aad383eefe19765a90d1105a40d86af21628194fb277a243b9ea9e6fe45a8fd" address="unix:///run/containerd/s/9f59014e71e83d766731692043b6dba00b76d3b99a5576feab00d59be813529b" protocol=ttrpc version=3 Sep 5 00:46:35.603608 containerd[1565]: time="2025-09-05T00:46:35.603572328Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"04ec41b9a03efb2ac6558b0cf03fd922ead65edcf201c6094e044acdb0b4666f\"" Sep 5 00:46:35.605343 kubelet[2322]: E0905 00:46:35.604815 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:35.607252 containerd[1565]: time="2025-09-05T00:46:35.607227078Z" level=info msg="CreateContainer within sandbox \"04ec41b9a03efb2ac6558b0cf03fd922ead65edcf201c6094e044acdb0b4666f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:46:35.615874 systemd[1]: Started cri-containerd-8aad383eefe19765a90d1105a40d86af21628194fb277a243b9ea9e6fe45a8fd.scope - libcontainer container 8aad383eefe19765a90d1105a40d86af21628194fb277a243b9ea9e6fe45a8fd. Sep 5 00:46:35.704387 kubelet[2322]: I0905 00:46:35.704292 2322 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:46:35.900022 containerd[1565]: time="2025-09-05T00:46:35.899916606Z" level=info msg="Container 9853b46762bb6868ecc90b9cc2704e1085e061dc0f324e8ae51a0410eef2a0e8: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:46:35.901668 containerd[1565]: time="2025-09-05T00:46:35.901608969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9f0a093b48691eddc5725662c295d265a0009602a2a85c96365bda37c269102\"" Sep 5 00:46:35.902613 containerd[1565]: time="2025-09-05T00:46:35.902585287Z" level=info msg="StartContainer for \"8aad383eefe19765a90d1105a40d86af21628194fb277a243b9ea9e6fe45a8fd\" returns successfully" Sep 5 00:46:35.902909 kubelet[2322]: E0905 00:46:35.902881 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:35.904592 containerd[1565]: time="2025-09-05T00:46:35.904278491Z" level=info msg="CreateContainer within sandbox \"b9f0a093b48691eddc5725662c295d265a0009602a2a85c96365bda37c269102\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:46:35.907898 containerd[1565]: time="2025-09-05T00:46:35.907877932Z" level=info msg="CreateContainer within sandbox \"04ec41b9a03efb2ac6558b0cf03fd922ead65edcf201c6094e044acdb0b4666f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9853b46762bb6868ecc90b9cc2704e1085e061dc0f324e8ae51a0410eef2a0e8\"" Sep 5 00:46:35.909400 containerd[1565]: time="2025-09-05T00:46:35.908197688Z" level=info msg="StartContainer for \"9853b46762bb6868ecc90b9cc2704e1085e061dc0f324e8ae51a0410eef2a0e8\"" Sep 5 00:46:35.909400 containerd[1565]: time="2025-09-05T00:46:35.909069816Z" level=info msg="connecting to shim 9853b46762bb6868ecc90b9cc2704e1085e061dc0f324e8ae51a0410eef2a0e8" address="unix:///run/containerd/s/112fcff992d593e6c751c7dd2e6c02c98ffef11d8aae9660d0f22b6be6b5957d" protocol=ttrpc version=3 Sep 5 00:46:35.913194 containerd[1565]: time="2025-09-05T00:46:35.913175384Z" level=info msg="Container 02832277bf090f7eec3cfa71d961a6c34c0245463d8e2c5e04c863c0edcb376e: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:46:35.920191 containerd[1565]: time="2025-09-05T00:46:35.920162637Z" level=info msg="CreateContainer within sandbox \"b9f0a093b48691eddc5725662c295d265a0009602a2a85c96365bda37c269102\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"02832277bf090f7eec3cfa71d961a6c34c0245463d8e2c5e04c863c0edcb376e\"" Sep 5 00:46:35.920639 containerd[1565]: time="2025-09-05T00:46:35.920512049Z" level=info msg="StartContainer for \"02832277bf090f7eec3cfa71d961a6c34c0245463d8e2c5e04c863c0edcb376e\"" Sep 5 00:46:35.921639 containerd[1565]: time="2025-09-05T00:46:35.921606612Z" level=info msg="connecting to 
shim 02832277bf090f7eec3cfa71d961a6c34c0245463d8e2c5e04c863c0edcb376e" address="unix:///run/containerd/s/1ef849aa14fef50955d005e95f1a491b48f289ba1b78b213e67b74636d325a49" protocol=ttrpc version=3 Sep 5 00:46:35.932794 systemd[1]: Started cri-containerd-9853b46762bb6868ecc90b9cc2704e1085e061dc0f324e8ae51a0410eef2a0e8.scope - libcontainer container 9853b46762bb6868ecc90b9cc2704e1085e061dc0f324e8ae51a0410eef2a0e8. Sep 5 00:46:35.936197 systemd[1]: Started cri-containerd-02832277bf090f7eec3cfa71d961a6c34c0245463d8e2c5e04c863c0edcb376e.scope - libcontainer container 02832277bf090f7eec3cfa71d961a6c34c0245463d8e2c5e04c863c0edcb376e. Sep 5 00:46:35.987492 containerd[1565]: time="2025-09-05T00:46:35.987443952Z" level=info msg="StartContainer for \"9853b46762bb6868ecc90b9cc2704e1085e061dc0f324e8ae51a0410eef2a0e8\" returns successfully" Sep 5 00:46:35.991275 containerd[1565]: time="2025-09-05T00:46:35.991253814Z" level=info msg="StartContainer for \"02832277bf090f7eec3cfa71d961a6c34c0245463d8e2c5e04c863c0edcb376e\" returns successfully" Sep 5 00:46:36.461710 kubelet[2322]: E0905 00:46:36.461676 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:36.462822 kubelet[2322]: E0905 00:46:36.462784 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:36.466784 kubelet[2322]: E0905 00:46:36.466763 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:36.687173 kubelet[2322]: E0905 00:46:36.687134 2322 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:46:36.782067 kubelet[2322]: I0905 
00:46:36.781960 2322 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 00:46:36.782067 kubelet[2322]: E0905 00:46:36.782013 2322 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 00:46:36.792493 kubelet[2322]: E0905 00:46:36.792455 2322 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:46:36.893185 kubelet[2322]: E0905 00:46:36.893146 2322 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:46:36.994213 kubelet[2322]: E0905 00:46:36.994178 2322 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:46:37.094796 kubelet[2322]: E0905 00:46:37.094706 2322 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:46:37.423197 kubelet[2322]: I0905 00:46:37.423086 2322 apiserver.go:52] "Watching apiserver" Sep 5 00:46:37.433929 kubelet[2322]: I0905 00:46:37.433897 2322 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 00:46:37.470743 kubelet[2322]: E0905 00:46:37.470705 2322 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:46:37.470743 kubelet[2322]: E0905 00:46:37.470707 2322 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 5 00:46:37.471151 kubelet[2322]: E0905 00:46:37.470835 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:37.471151 kubelet[2322]: E0905 00:46:37.470837 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:38.567464 systemd[1]: Reload requested from client PID 2597 ('systemctl') (unit session-7.scope)... Sep 5 00:46:38.567477 systemd[1]: Reloading... Sep 5 00:46:38.624704 zram_generator::config[2643]: No configuration found. Sep 5 00:46:38.713436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:46:38.842867 systemd[1]: Reloading finished in 275 ms. Sep 5 00:46:38.877334 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:46:38.884671 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:46:38.885101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:46:38.885149 systemd[1]: kubelet.service: Consumed 676ms CPU time, 131.7M memory peak. Sep 5 00:46:38.888297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:46:39.079721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:46:39.083794 (kubelet)[2685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:46:39.131386 kubelet[2685]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:46:39.131386 kubelet[2685]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 5 00:46:39.131386 kubelet[2685]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:46:39.131386 kubelet[2685]: I0905 00:46:39.131358 2685 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:46:39.137901 kubelet[2685]: I0905 00:46:39.137261 2685 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 00:46:39.137901 kubelet[2685]: I0905 00:46:39.137284 2685 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:46:39.137901 kubelet[2685]: I0905 00:46:39.137635 2685 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 00:46:39.139277 kubelet[2685]: I0905 00:46:39.139262 2685 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 00:46:39.142113 kubelet[2685]: I0905 00:46:39.142097 2685 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:46:39.145459 kubelet[2685]: I0905 00:46:39.145431 2685 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 00:46:39.149677 kubelet[2685]: I0905 00:46:39.149662 2685 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:46:39.149825 kubelet[2685]: I0905 00:46:39.149809 2685 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 00:46:39.149940 kubelet[2685]: I0905 00:46:39.149921 2685 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:46:39.150093 kubelet[2685]: I0905 00:46:39.149937 2685 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 5 00:46:39.150173 kubelet[2685]: I0905 00:46:39.150096 2685 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:46:39.150173 kubelet[2685]: I0905 00:46:39.150104 2685 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 00:46:39.150173 kubelet[2685]: I0905 00:46:39.150127 2685 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:46:39.150232 kubelet[2685]: I0905 00:46:39.150224 2685 kubelet.go:408] "Attempting to sync node with API server" Sep 5 00:46:39.150251 kubelet[2685]: I0905 00:46:39.150234 2685 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:46:39.150277 kubelet[2685]: I0905 00:46:39.150263 2685 kubelet.go:314] "Adding apiserver pod source" Sep 5 00:46:39.150277 kubelet[2685]: I0905 00:46:39.150272 2685 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:46:39.151042 kubelet[2685]: I0905 00:46:39.150889 2685 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 5 00:46:39.152761 kubelet[2685]: I0905 00:46:39.151277 2685 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:46:39.152761 kubelet[2685]: I0905 00:46:39.151623 2685 server.go:1274] "Started kubelet" Sep 5 00:46:39.155860 kubelet[2685]: I0905 00:46:39.153840 2685 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:46:39.155860 kubelet[2685]: I0905 00:46:39.154029 2685 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:46:39.155860 kubelet[2685]: I0905 00:46:39.155835 2685 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:46:39.163153 kubelet[2685]: I0905 00:46:39.163015 2685 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:46:39.164662 kubelet[2685]: 
I0905 00:46:39.164097 2685 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 00:46:39.164762 kubelet[2685]: I0905 00:46:39.164730 2685 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 00:46:39.164890 kubelet[2685]: I0905 00:46:39.164862 2685 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:46:39.165846 kubelet[2685]: I0905 00:46:39.165820 2685 server.go:449] "Adding debug handlers to kubelet server" Sep 5 00:46:39.166316 kubelet[2685]: I0905 00:46:39.166301 2685 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:46:39.166459 kubelet[2685]: I0905 00:46:39.166442 2685 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:46:39.167244 kubelet[2685]: I0905 00:46:39.163216 2685 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:46:39.168401 kubelet[2685]: E0905 00:46:39.168382 2685 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:46:39.169523 kubelet[2685]: I0905 00:46:39.169507 2685 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:46:39.175301 kubelet[2685]: I0905 00:46:39.175275 2685 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:46:39.178560 kubelet[2685]: I0905 00:46:39.178545 2685 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:46:39.178638 kubelet[2685]: I0905 00:46:39.178629 2685 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:46:39.178721 kubelet[2685]: I0905 00:46:39.178711 2685 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 00:46:39.178820 kubelet[2685]: E0905 00:46:39.178804 2685 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:46:39.201368 kubelet[2685]: I0905 00:46:39.201328 2685 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:46:39.201368 kubelet[2685]: I0905 00:46:39.201357 2685 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:46:39.201368 kubelet[2685]: I0905 00:46:39.201373 2685 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:46:39.201526 kubelet[2685]: I0905 00:46:39.201497 2685 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:46:39.201526 kubelet[2685]: I0905 00:46:39.201506 2685 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:46:39.201526 kubelet[2685]: I0905 00:46:39.201521 2685 policy_none.go:49] "None policy: Start" Sep 5 00:46:39.202019 kubelet[2685]: I0905 00:46:39.201993 2685 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:46:39.202051 kubelet[2685]: I0905 00:46:39.202026 2685 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:46:39.202167 kubelet[2685]: I0905 00:46:39.202152 2685 state_mem.go:75] "Updated machine memory state" Sep 5 00:46:39.206439 kubelet[2685]: I0905 00:46:39.206409 2685 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:46:39.206561 kubelet[2685]: I0905 00:46:39.206546 2685 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:46:39.206587 kubelet[2685]: I0905 00:46:39.206557 2685 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:46:39.207016 kubelet[2685]: I0905 00:46:39.206993 2685 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:46:39.312133 kubelet[2685]: I0905 00:46:39.312108 2685 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:46:39.316462 kubelet[2685]: I0905 00:46:39.316446 2685 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 5 00:46:39.316534 kubelet[2685]: I0905 00:46:39.316496 2685 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 00:46:39.466346 kubelet[2685]: I0905 00:46:39.466250 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:46:39.466346 kubelet[2685]: I0905 00:46:39.466275 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbd28f7cee82f32e037d685ace799b61-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cbd28f7cee82f32e037d685ace799b61\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:46:39.466346 kubelet[2685]: I0905 00:46:39.466291 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbd28f7cee82f32e037d685ace799b61-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cbd28f7cee82f32e037d685ace799b61\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:46:39.466346 kubelet[2685]: I0905 00:46:39.466309 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") 
pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:46:39.466346 kubelet[2685]: I0905 00:46:39.466325 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:46:39.466520 kubelet[2685]: I0905 00:46:39.466338 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbd28f7cee82f32e037d685ace799b61-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cbd28f7cee82f32e037d685ace799b61\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:46:39.466520 kubelet[2685]: I0905 00:46:39.466354 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:46:39.466520 kubelet[2685]: I0905 00:46:39.466369 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:46:39.466520 kubelet[2685]: I0905 00:46:39.466383 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:46:39.585012 kubelet[2685]: E0905 00:46:39.584973 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:39.586055 kubelet[2685]: E0905 00:46:39.586000 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:39.586055 kubelet[2685]: E0905 00:46:39.586013 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:40.151189 kubelet[2685]: I0905 00:46:40.151157 2685 apiserver.go:52] "Watching apiserver" Sep 5 00:46:40.165331 kubelet[2685]: I0905 00:46:40.165294 2685 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 00:46:40.191048 kubelet[2685]: E0905 00:46:40.191019 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:40.191048 kubelet[2685]: E0905 00:46:40.191032 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:40.191586 kubelet[2685]: E0905 00:46:40.191571 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:40.209578 kubelet[2685]: I0905 00:46:40.209519 2685 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.2095029130000001 podStartE2EDuration="1.209502913s" podCreationTimestamp="2025-09-05 00:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:46:40.208872717 +0000 UTC m=+1.121635824" watchObservedRunningTime="2025-09-05 00:46:40.209502913 +0000 UTC m=+1.122266020" Sep 5 00:46:40.222097 kubelet[2685]: I0905 00:46:40.222051 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.222012639 podStartE2EDuration="1.222012639s" podCreationTimestamp="2025-09-05 00:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:46:40.216085083 +0000 UTC m=+1.128848190" watchObservedRunningTime="2025-09-05 00:46:40.222012639 +0000 UTC m=+1.134775736" Sep 5 00:46:40.230883 kubelet[2685]: I0905 00:46:40.230752 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.230733621 podStartE2EDuration="1.230733621s" podCreationTimestamp="2025-09-05 00:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:46:40.222188531 +0000 UTC m=+1.134951638" watchObservedRunningTime="2025-09-05 00:46:40.230733621 +0000 UTC m=+1.143496718" Sep 5 00:46:41.192144 kubelet[2685]: E0905 00:46:41.192103 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:41.304172 kubelet[2685]: E0905 00:46:41.304134 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:43.497351 kubelet[2685]: I0905 00:46:43.497309 2685 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:46:43.497739 containerd[1565]: time="2025-09-05T00:46:43.497583591Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:46:43.497956 kubelet[2685]: I0905 00:46:43.497764 2685 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:46:43.695241 systemd[1]: Created slice kubepods-besteffort-podbce00a69_f54c_43b4_9a96_d113a12befdb.slice - libcontainer container kubepods-besteffort-podbce00a69_f54c_43b4_9a96_d113a12befdb.slice. Sep 5 00:46:43.793891 kubelet[2685]: I0905 00:46:43.793776 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bce00a69-f54c-43b4-9a96-d113a12befdb-lib-modules\") pod \"kube-proxy-95mm9\" (UID: \"bce00a69-f54c-43b4-9a96-d113a12befdb\") " pod="kube-system/kube-proxy-95mm9" Sep 5 00:46:43.793891 kubelet[2685]: I0905 00:46:43.793814 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bce00a69-f54c-43b4-9a96-d113a12befdb-kube-proxy\") pod \"kube-proxy-95mm9\" (UID: \"bce00a69-f54c-43b4-9a96-d113a12befdb\") " pod="kube-system/kube-proxy-95mm9" Sep 5 00:46:43.793891 kubelet[2685]: I0905 00:46:43.793835 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bce00a69-f54c-43b4-9a96-d113a12befdb-xtables-lock\") pod \"kube-proxy-95mm9\" (UID: \"bce00a69-f54c-43b4-9a96-d113a12befdb\") " pod="kube-system/kube-proxy-95mm9" Sep 5 00:46:43.793891 kubelet[2685]: I0905 
00:46:43.793853 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqc6t\" (UniqueName: \"kubernetes.io/projected/bce00a69-f54c-43b4-9a96-d113a12befdb-kube-api-access-mqc6t\") pod \"kube-proxy-95mm9\" (UID: \"bce00a69-f54c-43b4-9a96-d113a12befdb\") " pod="kube-system/kube-proxy-95mm9" Sep 5 00:46:43.897753 kubelet[2685]: E0905 00:46:43.897722 2685 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 5 00:46:43.897753 kubelet[2685]: E0905 00:46:43.897747 2685 projected.go:194] Error preparing data for projected volume kube-api-access-mqc6t for pod kube-system/kube-proxy-95mm9: configmap "kube-root-ca.crt" not found Sep 5 00:46:43.897921 kubelet[2685]: E0905 00:46:43.897786 2685 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bce00a69-f54c-43b4-9a96-d113a12befdb-kube-api-access-mqc6t podName:bce00a69-f54c-43b4-9a96-d113a12befdb nodeName:}" failed. No retries permitted until 2025-09-05 00:46:44.397771605 +0000 UTC m=+5.310534712 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mqc6t" (UniqueName: "kubernetes.io/projected/bce00a69-f54c-43b4-9a96-d113a12befdb-kube-api-access-mqc6t") pod "kube-proxy-95mm9" (UID: "bce00a69-f54c-43b4-9a96-d113a12befdb") : configmap "kube-root-ca.crt" not found Sep 5 00:46:44.607014 kubelet[2685]: E0905 00:46:44.606963 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:44.607846 containerd[1565]: time="2025-09-05T00:46:44.607789611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-95mm9,Uid:bce00a69-f54c-43b4-9a96-d113a12befdb,Namespace:kube-system,Attempt:0,}" Sep 5 00:46:44.623279 systemd[1]: Created slice kubepods-besteffort-podc8b7bc8f_8d3e_436f_b074_b82959e92b0d.slice - libcontainer container kubepods-besteffort-podc8b7bc8f_8d3e_436f_b074_b82959e92b0d.slice. Sep 5 00:46:44.634858 containerd[1565]: time="2025-09-05T00:46:44.634814728Z" level=info msg="connecting to shim 359b5eb44dbccca0177669a4b973c8b511a8f9b3dafb0e71a5c6138d26db7b51" address="unix:///run/containerd/s/c31c9c682ee205a6154ec56e7462325fb71947c1d7e7f4c9f4cab54a171a1ab5" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:46:44.676771 systemd[1]: Started cri-containerd-359b5eb44dbccca0177669a4b973c8b511a8f9b3dafb0e71a5c6138d26db7b51.scope - libcontainer container 359b5eb44dbccca0177669a4b973c8b511a8f9b3dafb0e71a5c6138d26db7b51. 
Sep 5 00:46:44.699242 containerd[1565]: time="2025-09-05T00:46:44.699189928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-95mm9,Uid:bce00a69-f54c-43b4-9a96-d113a12befdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"359b5eb44dbccca0177669a4b973c8b511a8f9b3dafb0e71a5c6138d26db7b51\"" Sep 5 00:46:44.700035 kubelet[2685]: E0905 00:46:44.700011 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:44.701299 kubelet[2685]: I0905 00:46:44.701252 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8b7bc8f-8d3e-436f-b074-b82959e92b0d-var-lib-calico\") pod \"tigera-operator-58fc44c59b-qdhx8\" (UID: \"c8b7bc8f-8d3e-436f-b074-b82959e92b0d\") " pod="tigera-operator/tigera-operator-58fc44c59b-qdhx8" Sep 5 00:46:44.701447 kubelet[2685]: I0905 00:46:44.701282 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65lzx\" (UniqueName: \"kubernetes.io/projected/c8b7bc8f-8d3e-436f-b074-b82959e92b0d-kube-api-access-65lzx\") pod \"tigera-operator-58fc44c59b-qdhx8\" (UID: \"c8b7bc8f-8d3e-436f-b074-b82959e92b0d\") " pod="tigera-operator/tigera-operator-58fc44c59b-qdhx8" Sep 5 00:46:44.702522 containerd[1565]: time="2025-09-05T00:46:44.702096722Z" level=info msg="CreateContainer within sandbox \"359b5eb44dbccca0177669a4b973c8b511a8f9b3dafb0e71a5c6138d26db7b51\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:46:44.715066 containerd[1565]: time="2025-09-05T00:46:44.714945303Z" level=info msg="Container f507cdc66280f9ea001221eeb5f73766e1f06388b66527730884942b2841a704: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:46:44.716511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861026820.mount: Deactivated successfully. 
Sep 5 00:46:44.722831 containerd[1565]: time="2025-09-05T00:46:44.722785930Z" level=info msg="CreateContainer within sandbox \"359b5eb44dbccca0177669a4b973c8b511a8f9b3dafb0e71a5c6138d26db7b51\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f507cdc66280f9ea001221eeb5f73766e1f06388b66527730884942b2841a704\"" Sep 5 00:46:44.723266 containerd[1565]: time="2025-09-05T00:46:44.723240624Z" level=info msg="StartContainer for \"f507cdc66280f9ea001221eeb5f73766e1f06388b66527730884942b2841a704\"" Sep 5 00:46:44.724528 containerd[1565]: time="2025-09-05T00:46:44.724505732Z" level=info msg="connecting to shim f507cdc66280f9ea001221eeb5f73766e1f06388b66527730884942b2841a704" address="unix:///run/containerd/s/c31c9c682ee205a6154ec56e7462325fb71947c1d7e7f4c9f4cab54a171a1ab5" protocol=ttrpc version=3 Sep 5 00:46:44.752769 systemd[1]: Started cri-containerd-f507cdc66280f9ea001221eeb5f73766e1f06388b66527730884942b2841a704.scope - libcontainer container f507cdc66280f9ea001221eeb5f73766e1f06388b66527730884942b2841a704. 
Sep 5 00:46:44.794089 containerd[1565]: time="2025-09-05T00:46:44.794053575Z" level=info msg="StartContainer for \"f507cdc66280f9ea001221eeb5f73766e1f06388b66527730884942b2841a704\" returns successfully" Sep 5 00:46:44.926283 containerd[1565]: time="2025-09-05T00:46:44.926137642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-qdhx8,Uid:c8b7bc8f-8d3e-436f-b074-b82959e92b0d,Namespace:tigera-operator,Attempt:0,}" Sep 5 00:46:44.947253 containerd[1565]: time="2025-09-05T00:46:44.947192377Z" level=info msg="connecting to shim e23225466dfdde4859039b34cf6e97465fc5733a67d0f44ec8eddba527085eb3" address="unix:///run/containerd/s/61777a4fe201d9ead12d5ca1335c5a9f500a94b0bb45359fa222fdcb45993c0f" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:46:44.971815 systemd[1]: Started cri-containerd-e23225466dfdde4859039b34cf6e97465fc5733a67d0f44ec8eddba527085eb3.scope - libcontainer container e23225466dfdde4859039b34cf6e97465fc5733a67d0f44ec8eddba527085eb3. Sep 5 00:46:45.010980 containerd[1565]: time="2025-09-05T00:46:45.010944995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-qdhx8,Uid:c8b7bc8f-8d3e-436f-b074-b82959e92b0d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e23225466dfdde4859039b34cf6e97465fc5733a67d0f44ec8eddba527085eb3\"" Sep 5 00:46:45.012708 containerd[1565]: time="2025-09-05T00:46:45.012482334Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 00:46:45.199128 kubelet[2685]: E0905 00:46:45.199019 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:45.206486 kubelet[2685]: I0905 00:46:45.206438 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-95mm9" podStartSLOduration=2.20642511 podStartE2EDuration="2.20642511s" podCreationTimestamp="2025-09-05 00:46:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:46:45.206278013 +0000 UTC m=+6.119041120" watchObservedRunningTime="2025-09-05 00:46:45.20642511 +0000 UTC m=+6.119188217" Sep 5 00:46:45.643448 kubelet[2685]: E0905 00:46:45.643130 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:46.200478 kubelet[2685]: E0905 00:46:46.200456 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:46.366601 kubelet[2685]: E0905 00:46:46.366562 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:46.420737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242765516.mount: Deactivated successfully. 
Sep 5 00:46:46.842018 containerd[1565]: time="2025-09-05T00:46:46.841966028Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:46.842694 containerd[1565]: time="2025-09-05T00:46:46.842659081Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 5 00:46:46.843712 containerd[1565]: time="2025-09-05T00:46:46.843688645Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:46.845554 containerd[1565]: time="2025-09-05T00:46:46.845528663Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:46:46.846142 containerd[1565]: time="2025-09-05T00:46:46.846104355Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.833586835s" Sep 5 00:46:46.846142 containerd[1565]: time="2025-09-05T00:46:46.846139261Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 5 00:46:46.847993 containerd[1565]: time="2025-09-05T00:46:46.847971704Z" level=info msg="CreateContainer within sandbox \"e23225466dfdde4859039b34cf6e97465fc5733a67d0f44ec8eddba527085eb3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:46:46.855443 containerd[1565]: time="2025-09-05T00:46:46.855399983Z" level=info msg="Container 
03e206999e14aa3c2c613580828883cf3847f1075b01290a9b0a2bcf7dc26a22: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:46:46.863003 containerd[1565]: time="2025-09-05T00:46:46.862968927Z" level=info msg="CreateContainer within sandbox \"e23225466dfdde4859039b34cf6e97465fc5733a67d0f44ec8eddba527085eb3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"03e206999e14aa3c2c613580828883cf3847f1075b01290a9b0a2bcf7dc26a22\"" Sep 5 00:46:46.863533 containerd[1565]: time="2025-09-05T00:46:46.863507669Z" level=info msg="StartContainer for \"03e206999e14aa3c2c613580828883cf3847f1075b01290a9b0a2bcf7dc26a22\"" Sep 5 00:46:46.864273 containerd[1565]: time="2025-09-05T00:46:46.864249643Z" level=info msg="connecting to shim 03e206999e14aa3c2c613580828883cf3847f1075b01290a9b0a2bcf7dc26a22" address="unix:///run/containerd/s/61777a4fe201d9ead12d5ca1335c5a9f500a94b0bb45359fa222fdcb45993c0f" protocol=ttrpc version=3 Sep 5 00:46:46.920790 systemd[1]: Started cri-containerd-03e206999e14aa3c2c613580828883cf3847f1075b01290a9b0a2bcf7dc26a22.scope - libcontainer container 03e206999e14aa3c2c613580828883cf3847f1075b01290a9b0a2bcf7dc26a22. 
Sep 5 00:46:46.946805 containerd[1565]: time="2025-09-05T00:46:46.946758865Z" level=info msg="StartContainer for \"03e206999e14aa3c2c613580828883cf3847f1075b01290a9b0a2bcf7dc26a22\" returns successfully" Sep 5 00:46:47.203365 kubelet[2685]: E0905 00:46:47.203157 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:47.209827 kubelet[2685]: I0905 00:46:47.209786 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-qdhx8" podStartSLOduration=1.375062763 podStartE2EDuration="3.209770314s" podCreationTimestamp="2025-09-05 00:46:44 +0000 UTC" firstStartedPulling="2025-09-05 00:46:45.012128028 +0000 UTC m=+5.924891135" lastFinishedPulling="2025-09-05 00:46:46.846835579 +0000 UTC m=+7.759598686" observedRunningTime="2025-09-05 00:46:47.209556632 +0000 UTC m=+8.122319740" watchObservedRunningTime="2025-09-05 00:46:47.209770314 +0000 UTC m=+8.122533421" Sep 5 00:46:48.206418 kubelet[2685]: E0905 00:46:48.206195 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:51.308430 kubelet[2685]: E0905 00:46:51.308161 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:46:52.281851 sudo[1773]: pam_unix(sudo:session): session closed for user root Sep 5 00:46:52.290194 sshd[1772]: Connection closed by 10.0.0.1 port 58584 Sep 5 00:46:52.291365 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Sep 5 00:46:52.302486 systemd[1]: sshd@6-10.0.0.4:22-10.0.0.1:58584.service: Deactivated successfully. Sep 5 00:46:52.311480 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 5 00:46:52.311907 systemd[1]: session-7.scope: Consumed 4.271s CPU time, 220.9M memory peak. Sep 5 00:46:52.314224 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:46:52.319217 systemd-logind[1539]: Removed session 7. Sep 5 00:46:55.382985 kubelet[2685]: W0905 00:46:55.382917 2685 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Sep 5 00:46:55.382985 kubelet[2685]: E0905 00:46:55.382976 2685 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 5 00:46:55.384804 systemd[1]: Created slice kubepods-besteffort-podf543abbf_ce6f_422b_8b0c_140272927950.slice - libcontainer container kubepods-besteffort-podf543abbf_ce6f_422b_8b0c_140272927950.slice. 
Sep 5 00:46:55.469788 kubelet[2685]: I0905 00:46:55.469748 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f543abbf-ce6f-422b-8b0c-140272927950-tigera-ca-bundle\") pod \"calico-typha-8645d6bfc5-wklcw\" (UID: \"f543abbf-ce6f-422b-8b0c-140272927950\") " pod="calico-system/calico-typha-8645d6bfc5-wklcw" Sep 5 00:46:55.469788 kubelet[2685]: I0905 00:46:55.469784 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f543abbf-ce6f-422b-8b0c-140272927950-typha-certs\") pod \"calico-typha-8645d6bfc5-wklcw\" (UID: \"f543abbf-ce6f-422b-8b0c-140272927950\") " pod="calico-system/calico-typha-8645d6bfc5-wklcw" Sep 5 00:46:55.469974 kubelet[2685]: I0905 00:46:55.469805 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txwzx\" (UniqueName: \"kubernetes.io/projected/f543abbf-ce6f-422b-8b0c-140272927950-kube-api-access-txwzx\") pod \"calico-typha-8645d6bfc5-wklcw\" (UID: \"f543abbf-ce6f-422b-8b0c-140272927950\") " pod="calico-system/calico-typha-8645d6bfc5-wklcw" Sep 5 00:46:55.793373 systemd[1]: Created slice kubepods-besteffort-pod533bf092_00de_4065_9bd2_61f00ddc5fb4.slice - libcontainer container kubepods-besteffort-pod533bf092_00de_4065_9bd2_61f00ddc5fb4.slice. 
Sep 5 00:46:55.872821 kubelet[2685]: I0905 00:46:55.872767 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/533bf092-00de-4065-9bd2-61f00ddc5fb4-node-certs\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.872821 kubelet[2685]: I0905 00:46:55.872808 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/533bf092-00de-4065-9bd2-61f00ddc5fb4-var-run-calico\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.872821 kubelet[2685]: I0905 00:46:55.872826 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/533bf092-00de-4065-9bd2-61f00ddc5fb4-cni-log-dir\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.873022 kubelet[2685]: I0905 00:46:55.872846 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgkpk\" (UniqueName: \"kubernetes.io/projected/533bf092-00de-4065-9bd2-61f00ddc5fb4-kube-api-access-rgkpk\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.873022 kubelet[2685]: I0905 00:46:55.872960 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/533bf092-00de-4065-9bd2-61f00ddc5fb4-flexvol-driver-host\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.873022 kubelet[2685]: I0905 00:46:55.873015 
2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/533bf092-00de-4065-9bd2-61f00ddc5fb4-tigera-ca-bundle\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.873102 kubelet[2685]: I0905 00:46:55.873037 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/533bf092-00de-4065-9bd2-61f00ddc5fb4-xtables-lock\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.873102 kubelet[2685]: I0905 00:46:55.873051 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/533bf092-00de-4065-9bd2-61f00ddc5fb4-cni-net-dir\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.873102 kubelet[2685]: I0905 00:46:55.873092 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/533bf092-00de-4065-9bd2-61f00ddc5fb4-lib-modules\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.873230 kubelet[2685]: I0905 00:46:55.873200 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/533bf092-00de-4065-9bd2-61f00ddc5fb4-cni-bin-dir\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.873230 kubelet[2685]: I0905 00:46:55.873220 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/533bf092-00de-4065-9bd2-61f00ddc5fb4-policysync\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.873289 kubelet[2685]: I0905 00:46:55.873232 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/533bf092-00de-4065-9bd2-61f00ddc5fb4-var-lib-calico\") pod \"calico-node-v4jz2\" (UID: \"533bf092-00de-4065-9bd2-61f00ddc5fb4\") " pod="calico-system/calico-node-v4jz2" Sep 5 00:46:55.976674 kubelet[2685]: E0905 00:46:55.976437 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:55.976674 kubelet[2685]: W0905 00:46:55.976459 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:55.976674 kubelet[2685]: E0905 00:46:55.976476 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:55.981139 kubelet[2685]: E0905 00:46:55.981112 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:55.981139 kubelet[2685]: W0905 00:46:55.981135 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:55.981224 kubelet[2685]: E0905 00:46:55.981159 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:55.987866 kubelet[2685]: E0905 00:46:55.987820 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:55.987866 kubelet[2685]: W0905 00:46:55.987840 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:55.987866 kubelet[2685]: E0905 00:46:55.987859 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.009396 kubelet[2685]: E0905 00:46:56.009284 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4th8" podUID="c9538fe6-7e0a-462e-bbc0-1898cf53d69a" Sep 5 00:46:56.066256 kubelet[2685]: E0905 00:46:56.066149 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.066256 kubelet[2685]: W0905 00:46:56.066170 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.066256 kubelet[2685]: E0905 00:46:56.066196 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.066456 kubelet[2685]: E0905 00:46:56.066442 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.066456 kubelet[2685]: W0905 00:46:56.066453 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.066513 kubelet[2685]: E0905 00:46:56.066461 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.066759 kubelet[2685]: E0905 00:46:56.066722 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.066759 kubelet[2685]: W0905 00:46:56.066749 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.066934 kubelet[2685]: E0905 00:46:56.066780 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.067066 kubelet[2685]: E0905 00:46:56.067044 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.067066 kubelet[2685]: W0905 00:46:56.067064 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.067123 kubelet[2685]: E0905 00:46:56.067073 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.067263 kubelet[2685]: E0905 00:46:56.067247 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.067263 kubelet[2685]: W0905 00:46:56.067257 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.067263 kubelet[2685]: E0905 00:46:56.067264 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.067484 kubelet[2685]: E0905 00:46:56.067462 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.067484 kubelet[2685]: W0905 00:46:56.067472 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.067484 kubelet[2685]: E0905 00:46:56.067479 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.067658 kubelet[2685]: E0905 00:46:56.067629 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.067658 kubelet[2685]: W0905 00:46:56.067638 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.067713 kubelet[2685]: E0905 00:46:56.067664 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.067831 kubelet[2685]: E0905 00:46:56.067817 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.067831 kubelet[2685]: W0905 00:46:56.067827 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.067875 kubelet[2685]: E0905 00:46:56.067834 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.068013 kubelet[2685]: E0905 00:46:56.067998 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.068013 kubelet[2685]: W0905 00:46:56.068007 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.068083 kubelet[2685]: E0905 00:46:56.068033 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.068234 kubelet[2685]: E0905 00:46:56.068219 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.068234 kubelet[2685]: W0905 00:46:56.068229 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.068284 kubelet[2685]: E0905 00:46:56.068236 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.068416 kubelet[2685]: E0905 00:46:56.068402 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.068416 kubelet[2685]: W0905 00:46:56.068411 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.068466 kubelet[2685]: E0905 00:46:56.068418 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.068585 kubelet[2685]: E0905 00:46:56.068569 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.068585 kubelet[2685]: W0905 00:46:56.068579 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.068631 kubelet[2685]: E0905 00:46:56.068587 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.068788 kubelet[2685]: E0905 00:46:56.068774 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.068788 kubelet[2685]: W0905 00:46:56.068783 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.068842 kubelet[2685]: E0905 00:46:56.068791 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.068977 kubelet[2685]: E0905 00:46:56.068954 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.068977 kubelet[2685]: W0905 00:46:56.068963 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.068977 kubelet[2685]: E0905 00:46:56.068970 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.069131 kubelet[2685]: E0905 00:46:56.069116 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.069131 kubelet[2685]: W0905 00:46:56.069126 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.069191 kubelet[2685]: E0905 00:46:56.069133 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.069307 kubelet[2685]: E0905 00:46:56.069291 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.069307 kubelet[2685]: W0905 00:46:56.069301 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.069356 kubelet[2685]: E0905 00:46:56.069309 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.069475 kubelet[2685]: E0905 00:46:56.069460 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.069475 kubelet[2685]: W0905 00:46:56.069469 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.069530 kubelet[2685]: E0905 00:46:56.069476 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.069669 kubelet[2685]: E0905 00:46:56.069624 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.069669 kubelet[2685]: W0905 00:46:56.069633 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.069669 kubelet[2685]: E0905 00:46:56.069640 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.069828 kubelet[2685]: E0905 00:46:56.069814 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.069828 kubelet[2685]: W0905 00:46:56.069823 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.069940 kubelet[2685]: E0905 00:46:56.069831 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.070005 kubelet[2685]: E0905 00:46:56.069990 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.070005 kubelet[2685]: W0905 00:46:56.070000 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.070052 kubelet[2685]: E0905 00:46:56.070007 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.075521 kubelet[2685]: E0905 00:46:56.075485 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.075521 kubelet[2685]: W0905 00:46:56.075510 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.075521 kubelet[2685]: E0905 00:46:56.075532 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.075765 kubelet[2685]: I0905 00:46:56.075561 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c9538fe6-7e0a-462e-bbc0-1898cf53d69a-varrun\") pod \"csi-node-driver-h4th8\" (UID: \"c9538fe6-7e0a-462e-bbc0-1898cf53d69a\") " pod="calico-system/csi-node-driver-h4th8" Sep 5 00:46:56.075822 kubelet[2685]: E0905 00:46:56.075805 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.075822 kubelet[2685]: W0905 00:46:56.075817 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.075868 kubelet[2685]: E0905 00:46:56.075832 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.076119 kubelet[2685]: E0905 00:46:56.076093 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.076157 kubelet[2685]: W0905 00:46:56.076116 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.076157 kubelet[2685]: E0905 00:46:56.076145 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.076341 kubelet[2685]: E0905 00:46:56.076326 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.076341 kubelet[2685]: W0905 00:46:56.076336 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.076385 kubelet[2685]: E0905 00:46:56.076348 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.076534 kubelet[2685]: E0905 00:46:56.076518 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.076534 kubelet[2685]: W0905 00:46:56.076528 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.076580 kubelet[2685]: E0905 00:46:56.076536 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.076580 kubelet[2685]: I0905 00:46:56.076565 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9538fe6-7e0a-462e-bbc0-1898cf53d69a-kubelet-dir\") pod \"csi-node-driver-h4th8\" (UID: \"c9538fe6-7e0a-462e-bbc0-1898cf53d69a\") " pod="calico-system/csi-node-driver-h4th8" Sep 5 00:46:56.076802 kubelet[2685]: E0905 00:46:56.076783 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.076802 kubelet[2685]: W0905 00:46:56.076796 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.076850 kubelet[2685]: E0905 00:46:56.076809 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.076850 kubelet[2685]: I0905 00:46:56.076824 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9538fe6-7e0a-462e-bbc0-1898cf53d69a-socket-dir\") pod \"csi-node-driver-h4th8\" (UID: \"c9538fe6-7e0a-462e-bbc0-1898cf53d69a\") " pod="calico-system/csi-node-driver-h4th8" Sep 5 00:46:56.077037 kubelet[2685]: E0905 00:46:56.077019 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.077037 kubelet[2685]: W0905 00:46:56.077032 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.077087 kubelet[2685]: E0905 00:46:56.077048 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.077087 kubelet[2685]: I0905 00:46:56.077067 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgf6\" (UniqueName: \"kubernetes.io/projected/c9538fe6-7e0a-462e-bbc0-1898cf53d69a-kube-api-access-dwgf6\") pod \"csi-node-driver-h4th8\" (UID: \"c9538fe6-7e0a-462e-bbc0-1898cf53d69a\") " pod="calico-system/csi-node-driver-h4th8" Sep 5 00:46:56.077237 kubelet[2685]: E0905 00:46:56.077221 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.077237 kubelet[2685]: W0905 00:46:56.077233 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.077282 kubelet[2685]: E0905 00:46:56.077246 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.077420 kubelet[2685]: E0905 00:46:56.077405 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.077420 kubelet[2685]: W0905 00:46:56.077415 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.077466 kubelet[2685]: E0905 00:46:56.077426 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.077599 kubelet[2685]: E0905 00:46:56.077584 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.077599 kubelet[2685]: W0905 00:46:56.077594 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.077642 kubelet[2685]: E0905 00:46:56.077605 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.077793 kubelet[2685]: E0905 00:46:56.077779 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.077793 kubelet[2685]: W0905 00:46:56.077789 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.077840 kubelet[2685]: E0905 00:46:56.077801 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.077973 kubelet[2685]: E0905 00:46:56.077959 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.077973 kubelet[2685]: W0905 00:46:56.077970 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.078025 kubelet[2685]: E0905 00:46:56.077982 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.078025 kubelet[2685]: I0905 00:46:56.077997 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c9538fe6-7e0a-462e-bbc0-1898cf53d69a-registration-dir\") pod \"csi-node-driver-h4th8\" (UID: \"c9538fe6-7e0a-462e-bbc0-1898cf53d69a\") " pod="calico-system/csi-node-driver-h4th8" Sep 5 00:46:56.078200 kubelet[2685]: E0905 00:46:56.078181 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.078200 kubelet[2685]: W0905 00:46:56.078195 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.078281 kubelet[2685]: E0905 00:46:56.078210 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.078426 kubelet[2685]: E0905 00:46:56.078403 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.078426 kubelet[2685]: W0905 00:46:56.078418 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.078522 kubelet[2685]: E0905 00:46:56.078450 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.078601 kubelet[2685]: E0905 00:46:56.078587 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.078601 kubelet[2685]: W0905 00:46:56.078597 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.078689 kubelet[2685]: E0905 00:46:56.078607 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.078791 kubelet[2685]: E0905 00:46:56.078776 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.078791 kubelet[2685]: W0905 00:46:56.078786 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.078857 kubelet[2685]: E0905 00:46:56.078794 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.100819 containerd[1565]: time="2025-09-05T00:46:56.100765133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v4jz2,Uid:533bf092-00de-4065-9bd2-61f00ddc5fb4,Namespace:calico-system,Attempt:0,}" Sep 5 00:46:56.178615 kubelet[2685]: E0905 00:46:56.178585 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.178615 kubelet[2685]: W0905 00:46:56.178604 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.178731 kubelet[2685]: E0905 00:46:56.178621 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.178892 kubelet[2685]: E0905 00:46:56.178870 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.178892 kubelet[2685]: W0905 00:46:56.178881 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.178956 kubelet[2685]: E0905 00:46:56.178894 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.179209 kubelet[2685]: E0905 00:46:56.179173 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.179209 kubelet[2685]: W0905 00:46:56.179201 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.179260 kubelet[2685]: E0905 00:46:56.179230 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.179466 kubelet[2685]: E0905 00:46:56.179442 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.179466 kubelet[2685]: W0905 00:46:56.179454 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.179512 kubelet[2685]: E0905 00:46:56.179469 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.179663 kubelet[2685]: E0905 00:46:56.179635 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.179694 kubelet[2685]: W0905 00:46:56.179666 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.179694 kubelet[2685]: E0905 00:46:56.179679 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.179908 kubelet[2685]: E0905 00:46:56.179892 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.179908 kubelet[2685]: W0905 00:46:56.179902 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.179972 kubelet[2685]: E0905 00:46:56.179915 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.180114 kubelet[2685]: E0905 00:46:56.180099 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.180114 kubelet[2685]: W0905 00:46:56.180109 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.180154 kubelet[2685]: E0905 00:46:56.180122 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.180411 kubelet[2685]: E0905 00:46:56.180381 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.180441 kubelet[2685]: W0905 00:46:56.180409 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.180467 kubelet[2685]: E0905 00:46:56.180439 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.180628 kubelet[2685]: E0905 00:46:56.180613 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.180628 kubelet[2685]: W0905 00:46:56.180623 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.180697 kubelet[2685]: E0905 00:46:56.180637 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.180829 kubelet[2685]: E0905 00:46:56.180814 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.180829 kubelet[2685]: W0905 00:46:56.180824 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.180875 kubelet[2685]: E0905 00:46:56.180838 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.181053 kubelet[2685]: E0905 00:46:56.181036 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.181053 kubelet[2685]: W0905 00:46:56.181049 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.181107 kubelet[2685]: E0905 00:46:56.181062 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.181258 kubelet[2685]: E0905 00:46:56.181242 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.181258 kubelet[2685]: W0905 00:46:56.181254 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.181307 kubelet[2685]: E0905 00:46:56.181267 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.181452 kubelet[2685]: E0905 00:46:56.181436 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.181452 kubelet[2685]: W0905 00:46:56.181447 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.181502 kubelet[2685]: E0905 00:46:56.181462 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.181636 kubelet[2685]: E0905 00:46:56.181622 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.181636 kubelet[2685]: W0905 00:46:56.181631 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.181703 kubelet[2685]: E0905 00:46:56.181656 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.181847 kubelet[2685]: E0905 00:46:56.181829 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.181847 kubelet[2685]: W0905 00:46:56.181843 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.181889 kubelet[2685]: E0905 00:46:56.181858 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.182116 kubelet[2685]: E0905 00:46:56.182099 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.182116 kubelet[2685]: W0905 00:46:56.182110 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.182174 kubelet[2685]: E0905 00:46:56.182124 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.182320 kubelet[2685]: E0905 00:46:56.182305 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.182320 kubelet[2685]: W0905 00:46:56.182315 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.182367 kubelet[2685]: E0905 00:46:56.182343 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.182497 kubelet[2685]: E0905 00:46:56.182482 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.182497 kubelet[2685]: W0905 00:46:56.182492 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.182543 kubelet[2685]: E0905 00:46:56.182517 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.182711 kubelet[2685]: E0905 00:46:56.182695 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.182711 kubelet[2685]: W0905 00:46:56.182705 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.182762 kubelet[2685]: E0905 00:46:56.182718 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.182906 kubelet[2685]: E0905 00:46:56.182892 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.182906 kubelet[2685]: W0905 00:46:56.182901 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.183026 kubelet[2685]: E0905 00:46:56.182913 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.183104 kubelet[2685]: E0905 00:46:56.183089 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.183104 kubelet[2685]: W0905 00:46:56.183098 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.183147 kubelet[2685]: E0905 00:46:56.183111 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.183310 kubelet[2685]: E0905 00:46:56.183293 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.183310 kubelet[2685]: W0905 00:46:56.183304 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.183363 kubelet[2685]: E0905 00:46:56.183319 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.183488 kubelet[2685]: E0905 00:46:56.183472 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.183488 kubelet[2685]: W0905 00:46:56.183482 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.183529 kubelet[2685]: E0905 00:46:56.183491 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.183675 kubelet[2685]: E0905 00:46:56.183641 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.183675 kubelet[2685]: W0905 00:46:56.183672 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.183726 kubelet[2685]: E0905 00:46:56.183680 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.183860 kubelet[2685]: E0905 00:46:56.183846 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.183860 kubelet[2685]: W0905 00:46:56.183855 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.183911 kubelet[2685]: E0905 00:46:56.183862 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.236381 kubelet[2685]: E0905 00:46:56.236349 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.236381 kubelet[2685]: W0905 00:46:56.236369 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.236381 kubelet[2685]: E0905 00:46:56.236384 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.238745 kubelet[2685]: E0905 00:46:56.238688 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.238745 kubelet[2685]: W0905 00:46:56.238707 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.238745 kubelet[2685]: E0905 00:46:56.238719 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:46:56.279347 containerd[1565]: time="2025-09-05T00:46:56.279305336Z" level=info msg="connecting to shim a9c40245333479adbcf57fb04b6bc5c69b862e5c4e1a542c4c9d4cd5b24e78b0" address="unix:///run/containerd/s/9980ce0d45b2125a788e5ea256ffacac398644acde4cf6e4227d78ed61af1d4c" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:46:56.283265 kubelet[2685]: E0905 00:46:56.283241 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:46:56.283265 kubelet[2685]: W0905 00:46:56.283260 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:46:56.283352 kubelet[2685]: E0905 00:46:56.283280 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:46:56.307787 systemd[1]: Started cri-containerd-a9c40245333479adbcf57fb04b6bc5c69b862e5c4e1a542c4c9d4cd5b24e78b0.scope - libcontainer container a9c40245333479adbcf57fb04b6bc5c69b862e5c4e1a542c4c9d4cd5b24e78b0. 
Sep 5 00:46:56.333826 containerd[1565]: time="2025-09-05T00:46:56.333742194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v4jz2,Uid:533bf092-00de-4065-9bd2-61f00ddc5fb4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9c40245333479adbcf57fb04b6bc5c69b862e5c4e1a542c4c9d4cd5b24e78b0\""
Sep 5 00:46:56.336707 containerd[1565]: time="2025-09-05T00:46:56.335851725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 5 00:46:56.384422 kubelet[2685]: E0905 00:46:56.384393 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:56.384422 kubelet[2685]: W0905 00:46:56.384415 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:56.384422 kubelet[2685]: E0905 00:46:56.384435 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:56.485376 kubelet[2685]: E0905 00:46:56.485333 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:56.485376 kubelet[2685]: W0905 00:46:56.485360 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:56.485376 kubelet[2685]: E0905 00:46:56.485387 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:56.571470 kubelet[2685]: E0905 00:46:56.571437 2685 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition
Sep 5 00:46:56.571582 kubelet[2685]: E0905 00:46:56.571514 2685 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f543abbf-ce6f-422b-8b0c-140272927950-typha-certs podName:f543abbf-ce6f-422b-8b0c-140272927950 nodeName:}" failed. No retries permitted until 2025-09-05 00:46:57.071497198 +0000 UTC m=+17.984260305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/f543abbf-ce6f-422b-8b0c-140272927950-typha-certs") pod "calico-typha-8645d6bfc5-wklcw" (UID: "f543abbf-ce6f-422b-8b0c-140272927950") : failed to sync secret cache: timed out waiting for the condition
Sep 5 00:46:56.586481 kubelet[2685]: E0905 00:46:56.586394 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:56.586481 kubelet[2685]: W0905 00:46:56.586414 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:56.586481 kubelet[2685]: E0905 00:46:56.586432 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:56.687983 kubelet[2685]: E0905 00:46:56.687955 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:56.687983 kubelet[2685]: W0905 00:46:56.687973 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:56.687983 kubelet[2685]: E0905 00:46:56.687993 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:56.789312 kubelet[2685]: E0905 00:46:56.789276 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:56.789312 kubelet[2685]: W0905 00:46:56.789302 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:56.789312 kubelet[2685]: E0905 00:46:56.789320 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:56.800224 update_engine[1549]: I20250905 00:46:56.800166 1549 update_attempter.cc:509] Updating boot flags...
Sep 5 00:46:56.893092 kubelet[2685]: E0905 00:46:56.892260 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:56.893092 kubelet[2685]: W0905 00:46:56.892281 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:56.893092 kubelet[2685]: E0905 00:46:56.892300 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:56.993849 kubelet[2685]: E0905 00:46:56.993810 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:56.993849 kubelet[2685]: W0905 00:46:56.993832 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:56.993849 kubelet[2685]: E0905 00:46:56.993854 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:57.095032 kubelet[2685]: E0905 00:46:57.094999 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:57.095032 kubelet[2685]: W0905 00:46:57.095020 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:57.095032 kubelet[2685]: E0905 00:46:57.095040 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:57.095380 kubelet[2685]: E0905 00:46:57.095366 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:57.095380 kubelet[2685]: W0905 00:46:57.095376 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:57.095453 kubelet[2685]: E0905 00:46:57.095385 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:57.095624 kubelet[2685]: E0905 00:46:57.095610 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:57.095624 kubelet[2685]: W0905 00:46:57.095621 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:57.095711 kubelet[2685]: E0905 00:46:57.095629 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:57.095854 kubelet[2685]: E0905 00:46:57.095832 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:57.095854 kubelet[2685]: W0905 00:46:57.095843 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:57.095854 kubelet[2685]: E0905 00:46:57.095851 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:57.096240 kubelet[2685]: E0905 00:46:57.096208 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:57.096240 kubelet[2685]: W0905 00:46:57.096229 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:57.096292 kubelet[2685]: E0905 00:46:57.096249 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:57.100225 kubelet[2685]: E0905 00:46:57.100197 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:46:57.100225 kubelet[2685]: W0905 00:46:57.100212 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:46:57.100225 kubelet[2685]: E0905 00:46:57.100223 2685 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:46:57.179993 kubelet[2685]: E0905 00:46:57.179877 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4th8" podUID="c9538fe6-7e0a-462e-bbc0-1898cf53d69a"
Sep 5 00:46:57.189757 kubelet[2685]: E0905 00:46:57.189710 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:46:57.190147 containerd[1565]: time="2025-09-05T00:46:57.190108062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8645d6bfc5-wklcw,Uid:f543abbf-ce6f-422b-8b0c-140272927950,Namespace:calico-system,Attempt:0,}"
Sep 5 00:46:57.209254 containerd[1565]: time="2025-09-05T00:46:57.209216521Z" level=info msg="connecting to shim ca0cbed87a850b6d9359e431a15fd448ce51eccb7b3979293c4cb55441f8f51e" address="unix:///run/containerd/s/e4db359d672ed29f4000d089751c40d8fcbe2273c323797809b41f2e4e1b3455" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:46:57.236791 systemd[1]: Started cri-containerd-ca0cbed87a850b6d9359e431a15fd448ce51eccb7b3979293c4cb55441f8f51e.scope - libcontainer container ca0cbed87a850b6d9359e431a15fd448ce51eccb7b3979293c4cb55441f8f51e.
Sep 5 00:46:57.286193 containerd[1565]: time="2025-09-05T00:46:57.286077338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8645d6bfc5-wklcw,Uid:f543abbf-ce6f-422b-8b0c-140272927950,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca0cbed87a850b6d9359e431a15fd448ce51eccb7b3979293c4cb55441f8f51e\""
Sep 5 00:46:57.286935 kubelet[2685]: E0905 00:46:57.286887 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:46:58.220005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785865101.mount: Deactivated successfully.
Sep 5 00:46:58.400101 containerd[1565]: time="2025-09-05T00:46:58.400059053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:46:58.400858 containerd[1565]: time="2025-09-05T00:46:58.400828317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501"
Sep 5 00:46:58.402004 containerd[1565]: time="2025-09-05T00:46:58.401970081Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:46:58.403950 containerd[1565]: time="2025-09-05T00:46:58.403922847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:46:58.404340 containerd[1565]: time="2025-09-05T00:46:58.404302039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.068419396s"
Sep 5 00:46:58.404340 containerd[1565]: time="2025-09-05T00:46:58.404335943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 5 00:46:58.405104 containerd[1565]: time="2025-09-05T00:46:58.405087444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 5 00:46:58.406163 containerd[1565]: time="2025-09-05T00:46:58.406138126Z" level=info msg="CreateContainer within sandbox \"a9c40245333479adbcf57fb04b6bc5c69b862e5c4e1a542c4c9d4cd5b24e78b0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 5 00:46:58.414907 containerd[1565]: time="2025-09-05T00:46:58.414856195Z" level=info msg="Container 234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:46:58.422682 containerd[1565]: time="2025-09-05T00:46:58.422611276Z" level=info msg="CreateContainer within sandbox \"a9c40245333479adbcf57fb04b6bc5c69b862e5c4e1a542c4c9d4cd5b24e78b0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d\""
Sep 5 00:46:58.423145 containerd[1565]: time="2025-09-05T00:46:58.423116655Z" level=info msg="StartContainer for \"234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d\""
Sep 5 00:46:58.424362 containerd[1565]: time="2025-09-05T00:46:58.424340062Z" level=info msg="connecting to shim 234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d" address="unix:///run/containerd/s/9980ce0d45b2125a788e5ea256ffacac398644acde4cf6e4227d78ed61af1d4c" protocol=ttrpc version=3
Sep 5 00:46:58.444809 systemd[1]: Started cri-containerd-234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d.scope - libcontainer container 234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d.
Sep 5 00:46:58.494728 containerd[1565]: time="2025-09-05T00:46:58.494687542Z" level=info msg="StartContainer for \"234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d\" returns successfully"
Sep 5 00:46:58.496196 systemd[1]: cri-containerd-234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d.scope: Deactivated successfully.
Sep 5 00:46:58.499238 containerd[1565]: time="2025-09-05T00:46:58.499210215Z" level=info msg="received exit event container_id:\"234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d\" id:\"234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d\" pid:3324 exited_at:{seconds:1757033218 nanos:498821074}"
Sep 5 00:46:58.499343 containerd[1565]: time="2025-09-05T00:46:58.499312607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d\" id:\"234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d\" pid:3324 exited_at:{seconds:1757033218 nanos:498821074}"
Sep 5 00:46:59.179488 kubelet[2685]: E0905 00:46:59.179423 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4th8" podUID="c9538fe6-7e0a-462e-bbc0-1898cf53d69a"
Sep 5 00:46:59.197512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-234dafde85c409675482a92354745eb5f92142525dd8f7bbf7351f88aef58e7d-rootfs.mount: Deactivated successfully.
Sep 5 00:47:00.954677 containerd[1565]: time="2025-09-05T00:47:00.954622566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:47:00.955442 containerd[1565]: time="2025-09-05T00:47:00.955395597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548"
Sep 5 00:47:00.956547 containerd[1565]: time="2025-09-05T00:47:00.956514528Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:47:00.958396 containerd[1565]: time="2025-09-05T00:47:00.958363640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:47:00.958899 containerd[1565]: time="2025-09-05T00:47:00.958858409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.553679653s"
Sep 5 00:47:00.958899 containerd[1565]: time="2025-09-05T00:47:00.958894236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 5 00:47:00.959695 containerd[1565]: time="2025-09-05T00:47:00.959633173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 5 00:47:00.965345 containerd[1565]: time="2025-09-05T00:47:00.965313538Z" level=info msg="CreateContainer within sandbox \"ca0cbed87a850b6d9359e431a15fd448ce51eccb7b3979293c4cb55441f8f51e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 5 00:47:00.972968 containerd[1565]: time="2025-09-05T00:47:00.972935227Z" level=info msg="Container 1fc2588b2c45f7b07a800ddf9306f3b9d9657e8ca812b074569a090b0dc771f3: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:47:00.981672 containerd[1565]: time="2025-09-05T00:47:00.981631703Z" level=info msg="CreateContainer within sandbox \"ca0cbed87a850b6d9359e431a15fd448ce51eccb7b3979293c4cb55441f8f51e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1fc2588b2c45f7b07a800ddf9306f3b9d9657e8ca812b074569a090b0dc771f3\""
Sep 5 00:47:00.982106 containerd[1565]: time="2025-09-05T00:47:00.982058083Z" level=info msg="StartContainer for \"1fc2588b2c45f7b07a800ddf9306f3b9d9657e8ca812b074569a090b0dc771f3\""
Sep 5 00:47:00.983016 containerd[1565]: time="2025-09-05T00:47:00.982985265Z" level=info msg="connecting to shim 1fc2588b2c45f7b07a800ddf9306f3b9d9657e8ca812b074569a090b0dc771f3" address="unix:///run/containerd/s/e4db359d672ed29f4000d089751c40d8fcbe2273c323797809b41f2e4e1b3455" protocol=ttrpc version=3
Sep 5 00:47:01.005765 systemd[1]: Started cri-containerd-1fc2588b2c45f7b07a800ddf9306f3b9d9657e8ca812b074569a090b0dc771f3.scope - libcontainer container 1fc2588b2c45f7b07a800ddf9306f3b9d9657e8ca812b074569a090b0dc771f3.
Sep 5 00:47:01.054561 containerd[1565]: time="2025-09-05T00:47:01.054524847Z" level=info msg="StartContainer for \"1fc2588b2c45f7b07a800ddf9306f3b9d9657e8ca812b074569a090b0dc771f3\" returns successfully"
Sep 5 00:47:01.181438 kubelet[2685]: E0905 00:47:01.181385 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4th8" podUID="c9538fe6-7e0a-462e-bbc0-1898cf53d69a"
Sep 5 00:47:01.238688 kubelet[2685]: E0905 00:47:01.238626 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:47:01.253051 kubelet[2685]: I0905 00:47:01.253006 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8645d6bfc5-wklcw" podStartSLOduration=2.581407586 podStartE2EDuration="6.25299197s" podCreationTimestamp="2025-09-05 00:46:55 +0000 UTC" firstStartedPulling="2025-09-05 00:46:57.287922983 +0000 UTC m=+18.200686080" lastFinishedPulling="2025-09-05 00:47:00.959507347 +0000 UTC m=+21.872270464" observedRunningTime="2025-09-05 00:47:01.252498714 +0000 UTC m=+22.165261821" watchObservedRunningTime="2025-09-05 00:47:01.25299197 +0000 UTC m=+22.165755077"
Sep 5 00:47:02.240092 kubelet[2685]: I0905 00:47:02.240054 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:47:02.240474 kubelet[2685]: E0905 00:47:02.240367 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:47:03.179912 kubelet[2685]: E0905 00:47:03.179856 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4th8" podUID="c9538fe6-7e0a-462e-bbc0-1898cf53d69a"
Sep 5 00:47:05.179485 kubelet[2685]: E0905 00:47:05.179437 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4th8" podUID="c9538fe6-7e0a-462e-bbc0-1898cf53d69a"
Sep 5 00:47:05.517025 containerd[1565]: time="2025-09-05T00:47:05.516973618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:47:05.517693 containerd[1565]: time="2025-09-05T00:47:05.517637124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 5 00:47:05.518792 containerd[1565]: time="2025-09-05T00:47:05.518746405Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:47:05.520698 containerd[1565]: time="2025-09-05T00:47:05.520630072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:47:05.521173 containerd[1565]: time="2025-09-05T00:47:05.521146430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.561455318s"
Sep 5 00:47:05.521218 containerd[1565]: time="2025-09-05T00:47:05.521175595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 5 00:47:05.523498 containerd[1565]: time="2025-09-05T00:47:05.523463349Z" level=info msg="CreateContainer within sandbox \"a9c40245333479adbcf57fb04b6bc5c69b862e5c4e1a542c4c9d4cd5b24e78b0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 5 00:47:05.534024 containerd[1565]: time="2025-09-05T00:47:05.533991471Z" level=info msg="Container 81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:47:05.542661 containerd[1565]: time="2025-09-05T00:47:05.542615820Z" level=info msg="CreateContainer within sandbox \"a9c40245333479adbcf57fb04b6bc5c69b862e5c4e1a542c4c9d4cd5b24e78b0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df\""
Sep 5 00:47:05.543095 containerd[1565]: time="2025-09-05T00:47:05.543013956Z" level=info msg="StartContainer for \"81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df\""
Sep 5 00:47:05.544385 containerd[1565]: time="2025-09-05T00:47:05.544346127Z" level=info msg="connecting to shim 81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df" address="unix:///run/containerd/s/9980ce0d45b2125a788e5ea256ffacac398644acde4cf6e4227d78ed61af1d4c" protocol=ttrpc version=3
Sep 5 00:47:05.566791 systemd[1]: Started cri-containerd-81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df.scope - libcontainer container 81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df.
Sep 5 00:47:05.670052 containerd[1565]: time="2025-09-05T00:47:05.670004734Z" level=info msg="StartContainer for \"81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df\" returns successfully"
Sep 5 00:47:06.978752 systemd[1]: cri-containerd-81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df.scope: Deactivated successfully.
Sep 5 00:47:06.979371 systemd[1]: cri-containerd-81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df.scope: Consumed 675ms CPU time, 180.1M memory peak, 3.2M read from disk, 171.3M written to disk.
Sep 5 00:47:06.985700 containerd[1565]: time="2025-09-05T00:47:06.984108706Z" level=info msg="received exit event container_id:\"81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df\" id:\"81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df\" pid:3424 exited_at:{seconds:1757033226 nanos:983244353}"
Sep 5 00:47:06.988347 containerd[1565]: time="2025-09-05T00:47:06.988226696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df\" id:\"81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df\" pid:3424 exited_at:{seconds:1757033226 nanos:983244353}"
Sep 5 00:47:07.006431 kubelet[2685]: I0905 00:47:07.006388 2685 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 5 00:47:07.052337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81e5272a5f961968af9362823c9707459c602320c6eb66907b524611f15e90df-rootfs.mount: Deactivated successfully.
Sep 5 00:47:07.081783 systemd[1]: Created slice kubepods-burstable-pod93e5e237_c20c_492d_bb77_960881bf88c6.slice - libcontainer container kubepods-burstable-pod93e5e237_c20c_492d_bb77_960881bf88c6.slice.
Sep 5 00:47:07.109201 systemd[1]: Created slice kubepods-burstable-pod3643b61f_e08f_46ef_a337_d8bea75516a4.slice - libcontainer container kubepods-burstable-pod3643b61f_e08f_46ef_a337_d8bea75516a4.slice.
Sep 5 00:47:07.116986 systemd[1]: Created slice kubepods-besteffort-podbb57882a_49b9_47c9_82b1_15ae93bc171c.slice - libcontainer container kubepods-besteffort-podbb57882a_49b9_47c9_82b1_15ae93bc171c.slice.
Sep 5 00:47:07.126669 systemd[1]: Created slice kubepods-besteffort-podd48f1f52_ce3c_4e49_8a4d_b63e273da579.slice - libcontainer container kubepods-besteffort-podd48f1f52_ce3c_4e49_8a4d_b63e273da579.slice.
Sep 5 00:47:07.167346 kubelet[2685]: I0905 00:47:07.167220 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3643b61f-e08f-46ef-a337-d8bea75516a4-config-volume\") pod \"coredns-7c65d6cfc9-tqbx2\" (UID: \"3643b61f-e08f-46ef-a337-d8bea75516a4\") " pod="kube-system/coredns-7c65d6cfc9-tqbx2"
Sep 5 00:47:07.167346 kubelet[2685]: I0905 00:47:07.167278 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb57882a-49b9-47c9-82b1-15ae93bc171c-tigera-ca-bundle\") pod \"calico-kube-controllers-799779498d-kptz4\" (UID: \"bb57882a-49b9-47c9-82b1-15ae93bc171c\") " pod="calico-system/calico-kube-controllers-799779498d-kptz4"
Sep 5 00:47:07.167346 kubelet[2685]: I0905 00:47:07.167306 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48f1f52-ce3c-4e49-8a4d-b63e273da579-config\") pod \"goldmane-7988f88666-6kn47\" (UID: \"d48f1f52-ce3c-4e49-8a4d-b63e273da579\") " pod="calico-system/goldmane-7988f88666-6kn47"
Sep 5 00:47:07.167740 kubelet[2685]: I0905 00:47:07.167327 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d48f1f52-ce3c-4e49-8a4d-b63e273da579-goldmane-key-pair\") pod \"goldmane-7988f88666-6kn47\" (UID: \"d48f1f52-ce3c-4e49-8a4d-b63e273da579\") " pod="calico-system/goldmane-7988f88666-6kn47"
Sep 5 00:47:07.167740 kubelet[2685]: I0905 00:47:07.167462 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwd42\" (UniqueName: \"kubernetes.io/projected/bb57882a-49b9-47c9-82b1-15ae93bc171c-kube-api-access-lwd42\") pod \"calico-kube-controllers-799779498d-kptz4\" (UID: \"bb57882a-49b9-47c9-82b1-15ae93bc171c\") " pod="calico-system/calico-kube-controllers-799779498d-kptz4"
Sep 5 00:47:07.167740 kubelet[2685]: I0905 00:47:07.167498 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssjkm\" (UniqueName: \"kubernetes.io/projected/d48f1f52-ce3c-4e49-8a4d-b63e273da579-kube-api-access-ssjkm\") pod \"goldmane-7988f88666-6kn47\" (UID: \"d48f1f52-ce3c-4e49-8a4d-b63e273da579\") " pod="calico-system/goldmane-7988f88666-6kn47"
Sep 5 00:47:07.167740 kubelet[2685]: I0905 00:47:07.167563 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qv84\" (UniqueName: \"kubernetes.io/projected/3643b61f-e08f-46ef-a337-d8bea75516a4-kube-api-access-2qv84\") pod \"coredns-7c65d6cfc9-tqbx2\" (UID: \"3643b61f-e08f-46ef-a337-d8bea75516a4\") " pod="kube-system/coredns-7c65d6cfc9-tqbx2"
Sep 5 00:47:07.167740 kubelet[2685]: I0905 00:47:07.167621 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff245\" (UniqueName: \"kubernetes.io/projected/93e5e237-c20c-492d-bb77-960881bf88c6-kube-api-access-ff245\") pod \"coredns-7c65d6cfc9-628rf\" (UID: \"93e5e237-c20c-492d-bb77-960881bf88c6\") " pod="kube-system/coredns-7c65d6cfc9-628rf"
Sep 5 00:47:07.167916 kubelet[2685]: I0905 00:47:07.167698 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93e5e237-c20c-492d-bb77-960881bf88c6-config-volume\") pod \"coredns-7c65d6cfc9-628rf\" (UID: \"93e5e237-c20c-492d-bb77-960881bf88c6\") " pod="kube-system/coredns-7c65d6cfc9-628rf"
Sep 5 00:47:07.167916 kubelet[2685]: I0905 00:47:07.167723 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d48f1f52-ce3c-4e49-8a4d-b63e273da579-goldmane-ca-bundle\") pod \"goldmane-7988f88666-6kn47\" (UID: \"d48f1f52-ce3c-4e49-8a4d-b63e273da579\") " pod="calico-system/goldmane-7988f88666-6kn47"
Sep 5 00:47:07.183141 systemd[1]: Created slice kubepods-besteffort-pod5f689ecb_467b_4e47_a9fb_c61a24f6068d.slice - libcontainer container kubepods-besteffort-pod5f689ecb_467b_4e47_a9fb_c61a24f6068d.slice.
Sep 5 00:47:07.196911 systemd[1]: Created slice kubepods-besteffort-pode7abbcac_09c8_4c29_9ae9_dbfbd46c7fed.slice - libcontainer container kubepods-besteffort-pode7abbcac_09c8_4c29_9ae9_dbfbd46c7fed.slice.
Sep 5 00:47:07.208307 systemd[1]: Created slice kubepods-besteffort-pod0bb9212a_7c18_4b3c_9f26_202f5352bc82.slice - libcontainer container kubepods-besteffort-pod0bb9212a_7c18_4b3c_9f26_202f5352bc82.slice.
Sep 5 00:47:07.232041 systemd[1]: Created slice kubepods-besteffort-podc9538fe6_7e0a_462e_bbc0_1898cf53d69a.slice - libcontainer container kubepods-besteffort-podc9538fe6_7e0a_462e_bbc0_1898cf53d69a.slice.
Sep 5 00:47:07.242916 containerd[1565]: time="2025-09-05T00:47:07.242254740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4th8,Uid:c9538fe6-7e0a-462e-bbc0-1898cf53d69a,Namespace:calico-system,Attempt:0,}"
Sep 5 00:47:07.263443 containerd[1565]: time="2025-09-05T00:47:07.263382575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 5 00:47:07.270724 kubelet[2685]: I0905 00:47:07.268178 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9wzs\" (UniqueName: \"kubernetes.io/projected/5f689ecb-467b-4e47-a9fb-c61a24f6068d-kube-api-access-f9wzs\") pod \"calico-apiserver-76595746b9-qr7sn\" (UID: \"5f689ecb-467b-4e47-a9fb-c61a24f6068d\") " pod="calico-apiserver/calico-apiserver-76595746b9-qr7sn"
Sep 5 00:47:07.270724 kubelet[2685]: I0905 00:47:07.268237 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25cn2\" (UniqueName: \"kubernetes.io/projected/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-kube-api-access-25cn2\") pod \"whisker-5b8c957cd9-chp5h\" (UID: \"e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed\") " pod="calico-system/whisker-5b8c957cd9-chp5h"
Sep 5 00:47:07.270724 kubelet[2685]: I0905 00:47:07.268332 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f689ecb-467b-4e47-a9fb-c61a24f6068d-calico-apiserver-certs\") pod \"calico-apiserver-76595746b9-qr7sn\" (UID: \"5f689ecb-467b-4e47-a9fb-c61a24f6068d\") " pod="calico-apiserver/calico-apiserver-76595746b9-qr7sn"
Sep 5 00:47:07.270724 kubelet[2685]: I0905 00:47:07.268391 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0bb9212a-7c18-4b3c-9f26-202f5352bc82-calico-apiserver-certs\") pod \"calico-apiserver-76595746b9-728ws\" (UID: \"0bb9212a-7c18-4b3c-9f26-202f5352bc82\") " pod="calico-apiserver/calico-apiserver-76595746b9-728ws"
Sep 5 00:47:07.270724 kubelet[2685]: I0905 00:47:07.268429 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-whisker-ca-bundle\") pod \"whisker-5b8c957cd9-chp5h\" (UID: \"e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed\") " pod="calico-system/whisker-5b8c957cd9-chp5h"
Sep 5 00:47:07.271195 kubelet[2685]: I0905 00:47:07.268457 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-whisker-backend-key-pair\") pod \"whisker-5b8c957cd9-chp5h\" (UID: \"e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed\") " pod="calico-system/whisker-5b8c957cd9-chp5h"
Sep 5 00:47:07.271195 kubelet[2685]: I0905 00:47:07.268497 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2xl8\" (UniqueName: \"kubernetes.io/projected/0bb9212a-7c18-4b3c-9f26-202f5352bc82-kube-api-access-f2xl8\") pod \"calico-apiserver-76595746b9-728ws\" (UID: \"0bb9212a-7c18-4b3c-9f26-202f5352bc82\") " pod="calico-apiserver/calico-apiserver-76595746b9-728ws"
Sep 5 00:47:07.390342 kubelet[2685]: E0905 00:47:07.390304 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:47:07.393354 containerd[1565]: time="2025-09-05T00:47:07.393309439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-628rf,Uid:93e5e237-c20c-492d-bb77-960881bf88c6,Namespace:kube-system,Attempt:0,}"
Sep 5 00:47:07.403591 containerd[1565]: time="2025-09-05T00:47:07.403491250Z" level=error msg="Failed to destroy network for sandbox \"01e5cab1efa075064ac840784339d27a8bef5be049d046a598d48f5500b11efb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:47:07.406749 containerd[1565]: time="2025-09-05T00:47:07.406642034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4th8,Uid:c9538fe6-7e0a-462e-bbc0-1898cf53d69a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e5cab1efa075064ac840784339d27a8bef5be049d046a598d48f5500b11efb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:47:07.415189 kubelet[2685]: E0905 00:47:07.414790 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:47:07.417318 containerd[1565]: time="2025-09-05T00:47:07.416321491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tqbx2,Uid:3643b61f-e08f-46ef-a337-d8bea75516a4,Namespace:kube-system,Attempt:0,}"
Sep 5 00:47:07.425771 containerd[1565]: time="2025-09-05T00:47:07.425706407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-799779498d-kptz4,Uid:bb57882a-49b9-47c9-82b1-15ae93bc171c,Namespace:calico-system,Attempt:0,}"
Sep 5 00:47:07.434502 kubelet[2685]: E0905 00:47:07.434443 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e5cab1efa075064ac840784339d27a8bef5be049d046a598d48f5500b11efb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 5 00:47:07.434797 kubelet[2685]: E0905 00:47:07.434773 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e5cab1efa075064ac840784339d27a8bef5be049d046a598d48f5500b11efb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h4th8" Sep 5 00:47:07.434894 kubelet[2685]: E0905 00:47:07.434875 2685 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e5cab1efa075064ac840784339d27a8bef5be049d046a598d48f5500b11efb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h4th8" Sep 5 00:47:07.435021 kubelet[2685]: E0905 00:47:07.434988 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h4th8_calico-system(c9538fe6-7e0a-462e-bbc0-1898cf53d69a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h4th8_calico-system(c9538fe6-7e0a-462e-bbc0-1898cf53d69a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01e5cab1efa075064ac840784339d27a8bef5be049d046a598d48f5500b11efb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h4th8" podUID="c9538fe6-7e0a-462e-bbc0-1898cf53d69a" Sep 5 00:47:07.436581 containerd[1565]: time="2025-09-05T00:47:07.436543788Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7988f88666-6kn47,Uid:d48f1f52-ce3c-4e49-8a4d-b63e273da579,Namespace:calico-system,Attempt:0,}" Sep 5 00:47:07.490278 containerd[1565]: time="2025-09-05T00:47:07.490231836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76595746b9-qr7sn,Uid:5f689ecb-467b-4e47-a9fb-c61a24f6068d,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:47:07.506691 containerd[1565]: time="2025-09-05T00:47:07.506612421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8c957cd9-chp5h,Uid:e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed,Namespace:calico-system,Attempt:0,}" Sep 5 00:47:07.511981 containerd[1565]: time="2025-09-05T00:47:07.511921346Z" level=error msg="Failed to destroy network for sandbox \"767e39bcceda39042b6c65a7d5df945f9f8797133c10dd9b6e21fc2f33718d2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.517413 containerd[1565]: time="2025-09-05T00:47:07.517348072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-628rf,Uid:93e5e237-c20c-492d-bb77-960881bf88c6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"767e39bcceda39042b6c65a7d5df945f9f8797133c10dd9b6e21fc2f33718d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.518090 kubelet[2685]: E0905 00:47:07.517670 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"767e39bcceda39042b6c65a7d5df945f9f8797133c10dd9b6e21fc2f33718d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 5 00:47:07.518090 kubelet[2685]: E0905 00:47:07.517745 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"767e39bcceda39042b6c65a7d5df945f9f8797133c10dd9b6e21fc2f33718d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-628rf" Sep 5 00:47:07.518090 kubelet[2685]: E0905 00:47:07.517765 2685 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"767e39bcceda39042b6c65a7d5df945f9f8797133c10dd9b6e21fc2f33718d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-628rf" Sep 5 00:47:07.518242 kubelet[2685]: E0905 00:47:07.517819 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-628rf_kube-system(93e5e237-c20c-492d-bb77-960881bf88c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-628rf_kube-system(93e5e237-c20c-492d-bb77-960881bf88c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"767e39bcceda39042b6c65a7d5df945f9f8797133c10dd9b6e21fc2f33718d2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-628rf" podUID="93e5e237-c20c-492d-bb77-960881bf88c6" Sep 5 00:47:07.521680 containerd[1565]: time="2025-09-05T00:47:07.521600213Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-76595746b9-728ws,Uid:0bb9212a-7c18-4b3c-9f26-202f5352bc82,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:47:07.527419 containerd[1565]: time="2025-09-05T00:47:07.527338443Z" level=error msg="Failed to destroy network for sandbox \"bb76ed9881af11862fd29be8502712c4a48aa10223477ea5c73da7af8211829d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.530502 containerd[1565]: time="2025-09-05T00:47:07.530369111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tqbx2,Uid:3643b61f-e08f-46ef-a337-d8bea75516a4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb76ed9881af11862fd29be8502712c4a48aa10223477ea5c73da7af8211829d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.530994 kubelet[2685]: E0905 00:47:07.530878 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb76ed9881af11862fd29be8502712c4a48aa10223477ea5c73da7af8211829d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.531105 kubelet[2685]: E0905 00:47:07.531050 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb76ed9881af11862fd29be8502712c4a48aa10223477ea5c73da7af8211829d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-tqbx2" Sep 5 00:47:07.531105 kubelet[2685]: E0905 00:47:07.531079 2685 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb76ed9881af11862fd29be8502712c4a48aa10223477ea5c73da7af8211829d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tqbx2" Sep 5 00:47:07.531203 kubelet[2685]: E0905 00:47:07.531154 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tqbx2_kube-system(3643b61f-e08f-46ef-a337-d8bea75516a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tqbx2_kube-system(3643b61f-e08f-46ef-a337-d8bea75516a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb76ed9881af11862fd29be8502712c4a48aa10223477ea5c73da7af8211829d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tqbx2" podUID="3643b61f-e08f-46ef-a337-d8bea75516a4" Sep 5 00:47:07.538502 containerd[1565]: time="2025-09-05T00:47:07.538433908Z" level=error msg="Failed to destroy network for sandbox \"de64a7747313b135c868be72b80817e7371e1f8ddf4205963573172cf40d8bd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.542046 containerd[1565]: time="2025-09-05T00:47:07.541968633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-799779498d-kptz4,Uid:bb57882a-49b9-47c9-82b1-15ae93bc171c,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de64a7747313b135c868be72b80817e7371e1f8ddf4205963573172cf40d8bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.542357 kubelet[2685]: E0905 00:47:07.542247 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de64a7747313b135c868be72b80817e7371e1f8ddf4205963573172cf40d8bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.542357 kubelet[2685]: E0905 00:47:07.542325 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de64a7747313b135c868be72b80817e7371e1f8ddf4205963573172cf40d8bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-799779498d-kptz4" Sep 5 00:47:07.542357 kubelet[2685]: E0905 00:47:07.542349 2685 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de64a7747313b135c868be72b80817e7371e1f8ddf4205963573172cf40d8bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-799779498d-kptz4" Sep 5 00:47:07.542513 kubelet[2685]: E0905 00:47:07.542397 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-799779498d-kptz4_calico-system(bb57882a-49b9-47c9-82b1-15ae93bc171c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-799779498d-kptz4_calico-system(bb57882a-49b9-47c9-82b1-15ae93bc171c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de64a7747313b135c868be72b80817e7371e1f8ddf4205963573172cf40d8bd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-799779498d-kptz4" podUID="bb57882a-49b9-47c9-82b1-15ae93bc171c" Sep 5 00:47:07.584323 containerd[1565]: time="2025-09-05T00:47:07.584267234Z" level=error msg="Failed to destroy network for sandbox \"214cceda1b7e7744427cee5ab1575e0b827ad00ddfed02b95e2f53670599d0b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.586407 containerd[1565]: time="2025-09-05T00:47:07.586157603Z" level=error msg="Failed to destroy network for sandbox \"f10fda8c6eb94cda40aca7418ea43be19d428f837b9a3826fba731cad1149f61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.586845 containerd[1565]: time="2025-09-05T00:47:07.586798706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6kn47,Uid:d48f1f52-ce3c-4e49-8a4d-b63e273da579,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"214cceda1b7e7744427cee5ab1575e0b827ad00ddfed02b95e2f53670599d0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.587846 kubelet[2685]: E0905 00:47:07.587792 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"214cceda1b7e7744427cee5ab1575e0b827ad00ddfed02b95e2f53670599d0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.587923 kubelet[2685]: E0905 00:47:07.587860 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"214cceda1b7e7744427cee5ab1575e0b827ad00ddfed02b95e2f53670599d0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-6kn47" Sep 5 00:47:07.587923 kubelet[2685]: E0905 00:47:07.587887 2685 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"214cceda1b7e7744427cee5ab1575e0b827ad00ddfed02b95e2f53670599d0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-6kn47" Sep 5 00:47:07.587998 kubelet[2685]: E0905 00:47:07.587933 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-6kn47_calico-system(d48f1f52-ce3c-4e49-8a4d-b63e273da579)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-6kn47_calico-system(d48f1f52-ce3c-4e49-8a4d-b63e273da579)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"214cceda1b7e7744427cee5ab1575e0b827ad00ddfed02b95e2f53670599d0b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-6kn47" podUID="d48f1f52-ce3c-4e49-8a4d-b63e273da579" Sep 5 00:47:07.589449 containerd[1565]: time="2025-09-05T00:47:07.589397603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76595746b9-qr7sn,Uid:5f689ecb-467b-4e47-a9fb-c61a24f6068d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10fda8c6eb94cda40aca7418ea43be19d428f837b9a3826fba731cad1149f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.589661 kubelet[2685]: E0905 00:47:07.589574 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10fda8c6eb94cda40aca7418ea43be19d428f837b9a3826fba731cad1149f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.589714 kubelet[2685]: E0905 00:47:07.589620 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10fda8c6eb94cda40aca7418ea43be19d428f837b9a3826fba731cad1149f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76595746b9-qr7sn" Sep 5 00:47:07.589714 kubelet[2685]: E0905 00:47:07.589698 2685 kuberuntime_manager.go:1170] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10fda8c6eb94cda40aca7418ea43be19d428f837b9a3826fba731cad1149f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76595746b9-qr7sn" Sep 5 00:47:07.589797 kubelet[2685]: E0905 00:47:07.589742 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76595746b9-qr7sn_calico-apiserver(5f689ecb-467b-4e47-a9fb-c61a24f6068d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76595746b9-qr7sn_calico-apiserver(5f689ecb-467b-4e47-a9fb-c61a24f6068d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f10fda8c6eb94cda40aca7418ea43be19d428f837b9a3826fba731cad1149f61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76595746b9-qr7sn" podUID="5f689ecb-467b-4e47-a9fb-c61a24f6068d" Sep 5 00:47:07.605084 containerd[1565]: time="2025-09-05T00:47:07.605028382Z" level=error msg="Failed to destroy network for sandbox \"0d511f8762417b74b4b761ef95d0f854d4ada0e4f29a208e7856b6dca23097a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.606808 containerd[1565]: time="2025-09-05T00:47:07.606752959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8c957cd9-chp5h,Uid:e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0d511f8762417b74b4b761ef95d0f854d4ada0e4f29a208e7856b6dca23097a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.607103 kubelet[2685]: E0905 00:47:07.607057 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d511f8762417b74b4b761ef95d0f854d4ada0e4f29a208e7856b6dca23097a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.607192 kubelet[2685]: E0905 00:47:07.607129 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d511f8762417b74b4b761ef95d0f854d4ada0e4f29a208e7856b6dca23097a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b8c957cd9-chp5h" Sep 5 00:47:07.607192 kubelet[2685]: E0905 00:47:07.607152 2685 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d511f8762417b74b4b761ef95d0f854d4ada0e4f29a208e7856b6dca23097a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b8c957cd9-chp5h" Sep 5 00:47:07.607262 kubelet[2685]: E0905 00:47:07.607198 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b8c957cd9-chp5h_calico-system(e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5b8c957cd9-chp5h_calico-system(e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d511f8762417b74b4b761ef95d0f854d4ada0e4f29a208e7856b6dca23097a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b8c957cd9-chp5h" podUID="e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed" Sep 5 00:47:07.618685 containerd[1565]: time="2025-09-05T00:47:07.618619491Z" level=error msg="Failed to destroy network for sandbox \"96ed1532061020f4f8b37f36d54f57a542b31af44b139520148216bad6589b79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.620283 containerd[1565]: time="2025-09-05T00:47:07.620249160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76595746b9-728ws,Uid:0bb9212a-7c18-4b3c-9f26-202f5352bc82,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ed1532061020f4f8b37f36d54f57a542b31af44b139520148216bad6589b79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.620498 kubelet[2685]: E0905 00:47:07.620462 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ed1532061020f4f8b37f36d54f57a542b31af44b139520148216bad6589b79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:47:07.620543 kubelet[2685]: E0905 00:47:07.620517 2685 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ed1532061020f4f8b37f36d54f57a542b31af44b139520148216bad6589b79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76595746b9-728ws" Sep 5 00:47:07.620543 kubelet[2685]: E0905 00:47:07.620535 2685 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ed1532061020f4f8b37f36d54f57a542b31af44b139520148216bad6589b79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76595746b9-728ws" Sep 5 00:47:07.620592 kubelet[2685]: E0905 00:47:07.620575 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76595746b9-728ws_calico-apiserver(0bb9212a-7c18-4b3c-9f26-202f5352bc82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76595746b9-728ws_calico-apiserver(0bb9212a-7c18-4b3c-9f26-202f5352bc82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96ed1532061020f4f8b37f36d54f57a542b31af44b139520148216bad6589b79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76595746b9-728ws" podUID="0bb9212a-7c18-4b3c-9f26-202f5352bc82" Sep 5 00:47:08.058436 systemd[1]: run-netns-cni\x2dd5c58fa4\x2d3c07\x2d7b5b\x2dbdfe\x2d303fd07e025a.mount: Deactivated successfully. 
Sep 5 00:47:14.016978 kubelet[2685]: I0905 00:47:14.016924 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:47:14.018112 kubelet[2685]: E0905 00:47:14.018057 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:14.276047 kubelet[2685]: E0905 00:47:14.275943 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:16.404427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326548786.mount: Deactivated successfully. Sep 5 00:47:17.430032 containerd[1565]: time="2025-09-05T00:47:17.429948558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:17.431147 containerd[1565]: time="2025-09-05T00:47:17.431068861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 5 00:47:17.432768 containerd[1565]: time="2025-09-05T00:47:17.432714218Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:17.434875 containerd[1565]: time="2025-09-05T00:47:17.434838595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:17.435544 containerd[1565]: time="2025-09-05T00:47:17.435507781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.172069662s" Sep 5 00:47:17.435613 containerd[1565]: time="2025-09-05T00:47:17.435547796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 5 00:47:17.447637 containerd[1565]: time="2025-09-05T00:47:17.447594783Z" level=info msg="CreateContainer within sandbox \"a9c40245333479adbcf57fb04b6bc5c69b862e5c4e1a542c4c9d4cd5b24e78b0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:47:17.462679 containerd[1565]: time="2025-09-05T00:47:17.462558713Z" level=info msg="Container bb3b3456d0d880c7dddb573f5553cf119376b7cadec276e0b75c750c7137640b: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:47:17.503662 containerd[1565]: time="2025-09-05T00:47:17.503557680Z" level=info msg="CreateContainer within sandbox \"a9c40245333479adbcf57fb04b6bc5c69b862e5c4e1a542c4c9d4cd5b24e78b0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bb3b3456d0d880c7dddb573f5553cf119376b7cadec276e0b75c750c7137640b\"" Sep 5 00:47:17.504246 containerd[1565]: time="2025-09-05T00:47:17.504154250Z" level=info msg="StartContainer for \"bb3b3456d0d880c7dddb573f5553cf119376b7cadec276e0b75c750c7137640b\"" Sep 5 00:47:17.505805 containerd[1565]: time="2025-09-05T00:47:17.505756306Z" level=info msg="connecting to shim bb3b3456d0d880c7dddb573f5553cf119376b7cadec276e0b75c750c7137640b" address="unix:///run/containerd/s/9980ce0d45b2125a788e5ea256ffacac398644acde4cf6e4227d78ed61af1d4c" protocol=ttrpc version=3 Sep 5 00:47:17.529820 systemd[1]: Started cri-containerd-bb3b3456d0d880c7dddb573f5553cf119376b7cadec276e0b75c750c7137640b.scope - libcontainer container bb3b3456d0d880c7dddb573f5553cf119376b7cadec276e0b75c750c7137640b. 
Sep 5 00:47:17.617522 containerd[1565]: time="2025-09-05T00:47:17.615696365Z" level=info msg="StartContainer for \"bb3b3456d0d880c7dddb573f5553cf119376b7cadec276e0b75c750c7137640b\" returns successfully" Sep 5 00:47:17.708464 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:47:17.708626 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 5 00:47:17.843676 kubelet[2685]: I0905 00:47:17.842958 2685 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-whisker-backend-key-pair\") pod \"e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed\" (UID: \"e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed\") " Sep 5 00:47:17.843676 kubelet[2685]: I0905 00:47:17.843096 2685 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25cn2\" (UniqueName: \"kubernetes.io/projected/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-kube-api-access-25cn2\") pod \"e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed\" (UID: \"e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed\") " Sep 5 00:47:17.843676 kubelet[2685]: I0905 00:47:17.843117 2685 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-whisker-ca-bundle\") pod \"e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed\" (UID: \"e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed\") " Sep 5 00:47:17.844458 kubelet[2685]: I0905 00:47:17.844426 2685 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed" (UID: "e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 00:47:17.851481 kubelet[2685]: I0905 00:47:17.851442 2685 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed" (UID: "e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 00:47:17.854680 kubelet[2685]: I0905 00:47:17.852786 2685 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-kube-api-access-25cn2" (OuterVolumeSpecName: "kube-api-access-25cn2") pod "e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed" (UID: "e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed"). InnerVolumeSpecName "kube-api-access-25cn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:47:17.944086 kubelet[2685]: I0905 00:47:17.944042 2685 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:47:17.944086 kubelet[2685]: I0905 00:47:17.944072 2685 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25cn2\" (UniqueName: \"kubernetes.io/projected/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-kube-api-access-25cn2\") on node \"localhost\" DevicePath \"\"" Sep 5 00:47:17.944086 kubelet[2685]: I0905 00:47:17.944080 2685 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:47:18.039370 systemd[1]: Started sshd@7-10.0.0.4:22-10.0.0.1:38574.service - OpenSSH per-connection server daemon (10.0.0.1:38574). 
Sep 5 00:47:18.099263 sshd[3805]: Accepted publickey for core from 10.0.0.1 port 38574 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:18.100863 sshd-session[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:18.105584 systemd-logind[1539]: New session 8 of user core. Sep 5 00:47:18.119811 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 00:47:18.442387 systemd[1]: var-lib-kubelet-pods-e7abbcac\x2d09c8\x2d4c29\x2d9ae9\x2ddbfbd46c7fed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d25cn2.mount: Deactivated successfully. Sep 5 00:47:18.442513 systemd[1]: var-lib-kubelet-pods-e7abbcac\x2d09c8\x2d4c29\x2d9ae9\x2ddbfbd46c7fed-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 00:47:18.580133 systemd[1]: Removed slice kubepods-besteffort-pode7abbcac_09c8_4c29_9ae9_dbfbd46c7fed.slice - libcontainer container kubepods-besteffort-pode7abbcac_09c8_4c29_9ae9_dbfbd46c7fed.slice. Sep 5 00:47:18.591206 kubelet[2685]: I0905 00:47:18.591152 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v4jz2" podStartSLOduration=2.490181498 podStartE2EDuration="23.59113575s" podCreationTimestamp="2025-09-05 00:46:55 +0000 UTC" firstStartedPulling="2025-09-05 00:46:56.335334885 +0000 UTC m=+17.248097992" lastFinishedPulling="2025-09-05 00:47:17.436289137 +0000 UTC m=+38.349052244" observedRunningTime="2025-09-05 00:47:18.59052334 +0000 UTC m=+39.503286467" watchObservedRunningTime="2025-09-05 00:47:18.59113575 +0000 UTC m=+39.503898857" Sep 5 00:47:18.647077 systemd[1]: Created slice kubepods-besteffort-pod497b20bf_7089_488e_a5e2_cf782565c7c7.slice - libcontainer container kubepods-besteffort-pod497b20bf_7089_488e_a5e2_cf782565c7c7.slice. 
Sep 5 00:47:18.700945 sshd[3807]: Connection closed by 10.0.0.1 port 38574 Sep 5 00:47:18.702884 sshd-session[3805]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:18.707394 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:47:18.708589 systemd[1]: sshd@7-10.0.0.4:22-10.0.0.1:38574.service: Deactivated successfully. Sep 5 00:47:18.711551 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:47:18.714288 systemd-logind[1539]: Removed session 8. Sep 5 00:47:18.735603 containerd[1565]: time="2025-09-05T00:47:18.735539087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb3b3456d0d880c7dddb573f5553cf119376b7cadec276e0b75c750c7137640b\" id:\"3f3b115f7fec03aa6ff1b899f4c42b00b692f1fb2abef0d3121e84dccb10aa1a\" pid:3830 exit_status:1 exited_at:{seconds:1757033238 nanos:735115082}" Sep 5 00:47:18.767913 kubelet[2685]: I0905 00:47:18.767861 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntd26\" (UniqueName: \"kubernetes.io/projected/497b20bf-7089-488e-a5e2-cf782565c7c7-kube-api-access-ntd26\") pod \"whisker-f5655f584-l5wfs\" (UID: \"497b20bf-7089-488e-a5e2-cf782565c7c7\") " pod="calico-system/whisker-f5655f584-l5wfs" Sep 5 00:47:18.767913 kubelet[2685]: I0905 00:47:18.767911 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/497b20bf-7089-488e-a5e2-cf782565c7c7-whisker-backend-key-pair\") pod \"whisker-f5655f584-l5wfs\" (UID: \"497b20bf-7089-488e-a5e2-cf782565c7c7\") " pod="calico-system/whisker-f5655f584-l5wfs" Sep 5 00:47:18.767913 kubelet[2685]: I0905 00:47:18.767926 2685 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/497b20bf-7089-488e-a5e2-cf782565c7c7-whisker-ca-bundle\") pod 
\"whisker-f5655f584-l5wfs\" (UID: \"497b20bf-7089-488e-a5e2-cf782565c7c7\") " pod="calico-system/whisker-f5655f584-l5wfs" Sep 5 00:47:18.952671 containerd[1565]: time="2025-09-05T00:47:18.952540467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f5655f584-l5wfs,Uid:497b20bf-7089-488e-a5e2-cf782565c7c7,Namespace:calico-system,Attempt:0,}" Sep 5 00:47:19.156303 systemd-networkd[1486]: calied39c1f9b0b: Link UP Sep 5 00:47:19.156522 systemd-networkd[1486]: calied39c1f9b0b: Gained carrier Sep 5 00:47:19.178505 containerd[1565]: 2025-09-05 00:47:18.986 [INFO][3852] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:47:19.178505 containerd[1565]: 2025-09-05 00:47:19.012 [INFO][3852] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--f5655f584--l5wfs-eth0 whisker-f5655f584- calico-system 497b20bf-7089-488e-a5e2-cf782565c7c7 943 0 2025-09-05 00:47:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f5655f584 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-f5655f584-l5wfs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calied39c1f9b0b [] [] }} ContainerID="a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" Namespace="calico-system" Pod="whisker-f5655f584-l5wfs" WorkloadEndpoint="localhost-k8s-whisker--f5655f584--l5wfs-" Sep 5 00:47:19.178505 containerd[1565]: 2025-09-05 00:47:19.012 [INFO][3852] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" Namespace="calico-system" Pod="whisker-f5655f584-l5wfs" WorkloadEndpoint="localhost-k8s-whisker--f5655f584--l5wfs-eth0" Sep 5 00:47:19.178505 containerd[1565]: 2025-09-05 00:47:19.099 [INFO][3958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" HandleID="k8s-pod-network.a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" Workload="localhost-k8s-whisker--f5655f584--l5wfs-eth0" Sep 5 00:47:19.179732 containerd[1565]: 2025-09-05 00:47:19.101 [INFO][3958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" HandleID="k8s-pod-network.a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" Workload="localhost-k8s-whisker--f5655f584--l5wfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-f5655f584-l5wfs", "timestamp":"2025-09-05 00:47:19.099818978 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:47:19.179732 containerd[1565]: 2025-09-05 00:47:19.101 [INFO][3958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:47:19.179732 containerd[1565]: 2025-09-05 00:47:19.101 [INFO][3958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:47:19.179732 containerd[1565]: 2025-09-05 00:47:19.101 [INFO][3958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:47:19.179732 containerd[1565]: 2025-09-05 00:47:19.112 [INFO][3958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" host="localhost" Sep 5 00:47:19.179732 containerd[1565]: 2025-09-05 00:47:19.118 [INFO][3958] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:47:19.179732 containerd[1565]: 2025-09-05 00:47:19.123 [INFO][3958] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:47:19.179732 containerd[1565]: 2025-09-05 00:47:19.125 [INFO][3958] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:19.179732 containerd[1565]: 2025-09-05 00:47:19.126 [INFO][3958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:19.179732 containerd[1565]: 2025-09-05 00:47:19.126 [INFO][3958] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" host="localhost" Sep 5 00:47:19.179971 containerd[1565]: 2025-09-05 00:47:19.128 [INFO][3958] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e Sep 5 00:47:19.179971 containerd[1565]: 2025-09-05 00:47:19.133 [INFO][3958] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" host="localhost" Sep 5 00:47:19.179971 containerd[1565]: 2025-09-05 00:47:19.142 [INFO][3958] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" host="localhost" Sep 5 00:47:19.179971 containerd[1565]: 2025-09-05 00:47:19.142 [INFO][3958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" host="localhost" Sep 5 00:47:19.179971 containerd[1565]: 2025-09-05 00:47:19.143 [INFO][3958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:47:19.179971 containerd[1565]: 2025-09-05 00:47:19.143 [INFO][3958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" HandleID="k8s-pod-network.a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" Workload="localhost-k8s-whisker--f5655f584--l5wfs-eth0" Sep 5 00:47:19.180089 containerd[1565]: 2025-09-05 00:47:19.146 [INFO][3852] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" Namespace="calico-system" Pod="whisker-f5655f584-l5wfs" WorkloadEndpoint="localhost-k8s-whisker--f5655f584--l5wfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f5655f584--l5wfs-eth0", GenerateName:"whisker-f5655f584-", Namespace:"calico-system", SelfLink:"", UID:"497b20bf-7089-488e-a5e2-cf782565c7c7", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 47, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f5655f584", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-f5655f584-l5wfs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calied39c1f9b0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:19.180089 containerd[1565]: 2025-09-05 00:47:19.146 [INFO][3852] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" Namespace="calico-system" Pod="whisker-f5655f584-l5wfs" WorkloadEndpoint="localhost-k8s-whisker--f5655f584--l5wfs-eth0" Sep 5 00:47:19.180165 containerd[1565]: 2025-09-05 00:47:19.146 [INFO][3852] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied39c1f9b0b ContainerID="a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" Namespace="calico-system" Pod="whisker-f5655f584-l5wfs" WorkloadEndpoint="localhost-k8s-whisker--f5655f584--l5wfs-eth0" Sep 5 00:47:19.180165 containerd[1565]: 2025-09-05 00:47:19.156 [INFO][3852] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" Namespace="calico-system" Pod="whisker-f5655f584-l5wfs" WorkloadEndpoint="localhost-k8s-whisker--f5655f584--l5wfs-eth0" Sep 5 00:47:19.180209 containerd[1565]: 2025-09-05 00:47:19.157 [INFO][3852] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" Namespace="calico-system" Pod="whisker-f5655f584-l5wfs" 
WorkloadEndpoint="localhost-k8s-whisker--f5655f584--l5wfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f5655f584--l5wfs-eth0", GenerateName:"whisker-f5655f584-", Namespace:"calico-system", SelfLink:"", UID:"497b20bf-7089-488e-a5e2-cf782565c7c7", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 47, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f5655f584", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e", Pod:"whisker-f5655f584-l5wfs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calied39c1f9b0b", MAC:"d2:b3:b2:e6:76:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:19.180258 containerd[1565]: 2025-09-05 00:47:19.173 [INFO][3852] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" Namespace="calico-system" Pod="whisker-f5655f584-l5wfs" WorkloadEndpoint="localhost-k8s-whisker--f5655f584--l5wfs-eth0" Sep 5 00:47:19.185305 containerd[1565]: time="2025-09-05T00:47:19.185063290Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-76595746b9-728ws,Uid:0bb9212a-7c18-4b3c-9f26-202f5352bc82,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:47:19.187109 containerd[1565]: time="2025-09-05T00:47:19.185796295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4th8,Uid:c9538fe6-7e0a-462e-bbc0-1898cf53d69a,Namespace:calico-system,Attempt:0,}" Sep 5 00:47:19.198412 kubelet[2685]: I0905 00:47:19.198358 2685 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed" path="/var/lib/kubelet/pods/e7abbcac-09c8-4c29-9ae9-dbfbd46c7fed/volumes" Sep 5 00:47:19.381837 containerd[1565]: time="2025-09-05T00:47:19.381784008Z" level=info msg="connecting to shim a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e" address="unix:///run/containerd/s/194bef563ea507fed692892644314901a8eba10d4fb872cd6fe9faee337f019c" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:47:19.424874 systemd[1]: Started cri-containerd-a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e.scope - libcontainer container a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e. 
Sep 5 00:47:19.432637 systemd-networkd[1486]: calic4036f5d864: Link UP Sep 5 00:47:19.433396 systemd-networkd[1486]: calic4036f5d864: Gained carrier Sep 5 00:47:19.446861 containerd[1565]: 2025-09-05 00:47:19.286 [INFO][4015] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--h4th8-eth0 csi-node-driver- calico-system c9538fe6-7e0a-462e-bbc0-1898cf53d69a 703 0 2025-09-05 00:46:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-h4th8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic4036f5d864 [] [] }} ContainerID="a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" Namespace="calico-system" Pod="csi-node-driver-h4th8" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4th8-" Sep 5 00:47:19.446861 containerd[1565]: 2025-09-05 00:47:19.286 [INFO][4015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" Namespace="calico-system" Pod="csi-node-driver-h4th8" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4th8-eth0" Sep 5 00:47:19.446861 containerd[1565]: 2025-09-05 00:47:19.379 [INFO][4034] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" HandleID="k8s-pod-network.a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" Workload="localhost-k8s-csi--node--driver--h4th8-eth0" Sep 5 00:47:19.447053 containerd[1565]: 2025-09-05 00:47:19.380 [INFO][4034] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" HandleID="k8s-pod-network.a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" Workload="localhost-k8s-csi--node--driver--h4th8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-h4th8", "timestamp":"2025-09-05 00:47:19.379796619 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:47:19.447053 containerd[1565]: 2025-09-05 00:47:19.380 [INFO][4034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:47:19.447053 containerd[1565]: 2025-09-05 00:47:19.380 [INFO][4034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:47:19.447053 containerd[1565]: 2025-09-05 00:47:19.380 [INFO][4034] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:47:19.447053 containerd[1565]: 2025-09-05 00:47:19.386 [INFO][4034] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" host="localhost" Sep 5 00:47:19.447053 containerd[1565]: 2025-09-05 00:47:19.395 [INFO][4034] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:47:19.447053 containerd[1565]: 2025-09-05 00:47:19.399 [INFO][4034] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:47:19.447053 containerd[1565]: 2025-09-05 00:47:19.401 [INFO][4034] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:19.447053 containerd[1565]: 2025-09-05 00:47:19.403 [INFO][4034] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Sep 5 00:47:19.447053 containerd[1565]: 2025-09-05 00:47:19.403 [INFO][4034] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" host="localhost" Sep 5 00:47:19.449947 containerd[1565]: 2025-09-05 00:47:19.406 [INFO][4034] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece Sep 5 00:47:19.449947 containerd[1565]: 2025-09-05 00:47:19.409 [INFO][4034] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" host="localhost" Sep 5 00:47:19.449947 containerd[1565]: 2025-09-05 00:47:19.420 [INFO][4034] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" host="localhost" Sep 5 00:47:19.449947 containerd[1565]: 2025-09-05 00:47:19.420 [INFO][4034] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" host="localhost" Sep 5 00:47:19.449947 containerd[1565]: 2025-09-05 00:47:19.420 [INFO][4034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:47:19.449947 containerd[1565]: 2025-09-05 00:47:19.420 [INFO][4034] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" HandleID="k8s-pod-network.a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" Workload="localhost-k8s-csi--node--driver--h4th8-eth0" Sep 5 00:47:19.450078 containerd[1565]: 2025-09-05 00:47:19.429 [INFO][4015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" Namespace="calico-system" Pod="csi-node-driver-h4th8" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4th8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h4th8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9538fe6-7e0a-462e-bbc0-1898cf53d69a", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-h4th8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4036f5d864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:19.450139 containerd[1565]: 2025-09-05 00:47:19.430 [INFO][4015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" Namespace="calico-system" Pod="csi-node-driver-h4th8" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4th8-eth0" Sep 5 00:47:19.450139 containerd[1565]: 2025-09-05 00:47:19.430 [INFO][4015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4036f5d864 ContainerID="a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" Namespace="calico-system" Pod="csi-node-driver-h4th8" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4th8-eth0" Sep 5 00:47:19.450139 containerd[1565]: 2025-09-05 00:47:19.432 [INFO][4015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" Namespace="calico-system" Pod="csi-node-driver-h4th8" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4th8-eth0" Sep 5 00:47:19.450202 containerd[1565]: 2025-09-05 00:47:19.432 [INFO][4015] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" Namespace="calico-system" Pod="csi-node-driver-h4th8" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4th8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h4th8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9538fe6-7e0a-462e-bbc0-1898cf53d69a", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 56, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece", Pod:"csi-node-driver-h4th8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4036f5d864", MAC:"72:3f:1a:31:d1:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:19.450254 containerd[1565]: 2025-09-05 00:47:19.443 [INFO][4015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" Namespace="calico-system" Pod="csi-node-driver-h4th8" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4th8-eth0" Sep 5 00:47:19.461373 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:47:19.463546 containerd[1565]: time="2025-09-05T00:47:19.463499679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb3b3456d0d880c7dddb573f5553cf119376b7cadec276e0b75c750c7137640b\" id:\"32f5ef8d793f037af0aceab700e11920e5f8c504fad958878ceabfa6b73275cf\" pid:4069 exit_status:1 exited_at:{seconds:1757033239 nanos:463104837}" Sep 5 
00:47:19.486629 systemd-networkd[1486]: vxlan.calico: Link UP Sep 5 00:47:19.488383 systemd-networkd[1486]: vxlan.calico: Gained carrier Sep 5 00:47:19.511769 containerd[1565]: time="2025-09-05T00:47:19.511701410Z" level=info msg="connecting to shim a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece" address="unix:///run/containerd/s/89e94215ddadcef08d93a239366c715b024de7d902b7487d1544bd24c28707b3" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:47:19.532747 containerd[1565]: time="2025-09-05T00:47:19.532377608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f5655f584-l5wfs,Uid:497b20bf-7089-488e-a5e2-cf782565c7c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e\"" Sep 5 00:47:19.535819 containerd[1565]: time="2025-09-05T00:47:19.535123522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 00:47:19.567351 systemd-networkd[1486]: caliae51ae9fea5: Link UP Sep 5 00:47:19.568205 systemd-networkd[1486]: caliae51ae9fea5: Gained carrier Sep 5 00:47:19.587954 systemd[1]: Started cri-containerd-a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece.scope - libcontainer container a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece. 
Sep 5 00:47:19.590262 containerd[1565]: 2025-09-05 00:47:19.337 [INFO][3992] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76595746b9--728ws-eth0 calico-apiserver-76595746b9- calico-apiserver 0bb9212a-7c18-4b3c-9f26-202f5352bc82 821 0 2025-09-05 00:46:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76595746b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76595746b9-728ws eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae51ae9fea5 [] [] }} ContainerID="9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-728ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--728ws-" Sep 5 00:47:19.590262 containerd[1565]: 2025-09-05 00:47:19.337 [INFO][3992] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-728ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--728ws-eth0" Sep 5 00:47:19.590262 containerd[1565]: 2025-09-05 00:47:19.380 [INFO][4077] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" HandleID="k8s-pod-network.9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" Workload="localhost-k8s-calico--apiserver--76595746b9--728ws-eth0" Sep 5 00:47:19.590407 containerd[1565]: 2025-09-05 00:47:19.380 [INFO][4077] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" 
HandleID="k8s-pod-network.9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" Workload="localhost-k8s-calico--apiserver--76595746b9--728ws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76595746b9-728ws", "timestamp":"2025-09-05 00:47:19.380381777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:47:19.590407 containerd[1565]: 2025-09-05 00:47:19.381 [INFO][4077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:47:19.590407 containerd[1565]: 2025-09-05 00:47:19.420 [INFO][4077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:47:19.590407 containerd[1565]: 2025-09-05 00:47:19.421 [INFO][4077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:47:19.590407 containerd[1565]: 2025-09-05 00:47:19.500 [INFO][4077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" host="localhost" Sep 5 00:47:19.590407 containerd[1565]: 2025-09-05 00:47:19.514 [INFO][4077] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:47:19.590407 containerd[1565]: 2025-09-05 00:47:19.521 [INFO][4077] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:47:19.590407 containerd[1565]: 2025-09-05 00:47:19.533 [INFO][4077] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:19.590407 containerd[1565]: 2025-09-05 00:47:19.545 [INFO][4077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:19.590407 containerd[1565]: 2025-09-05 
00:47:19.545 [INFO][4077] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" host="localhost" Sep 5 00:47:19.590739 containerd[1565]: 2025-09-05 00:47:19.549 [INFO][4077] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa Sep 5 00:47:19.590739 containerd[1565]: 2025-09-05 00:47:19.553 [INFO][4077] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" host="localhost" Sep 5 00:47:19.590739 containerd[1565]: 2025-09-05 00:47:19.558 [INFO][4077] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" host="localhost" Sep 5 00:47:19.590739 containerd[1565]: 2025-09-05 00:47:19.558 [INFO][4077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" host="localhost" Sep 5 00:47:19.590739 containerd[1565]: 2025-09-05 00:47:19.558 [INFO][4077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:47:19.590739 containerd[1565]: 2025-09-05 00:47:19.558 [INFO][4077] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" HandleID="k8s-pod-network.9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" Workload="localhost-k8s-calico--apiserver--76595746b9--728ws-eth0" Sep 5 00:47:19.590902 containerd[1565]: 2025-09-05 00:47:19.563 [INFO][3992] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-728ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--728ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76595746b9--728ws-eth0", GenerateName:"calico-apiserver-76595746b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"0bb9212a-7c18-4b3c-9f26-202f5352bc82", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76595746b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76595746b9-728ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae51ae9fea5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:19.590959 containerd[1565]: 2025-09-05 00:47:19.564 [INFO][3992] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-728ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--728ws-eth0" Sep 5 00:47:19.590959 containerd[1565]: 2025-09-05 00:47:19.564 [INFO][3992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae51ae9fea5 ContainerID="9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-728ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--728ws-eth0" Sep 5 00:47:19.590959 containerd[1565]: 2025-09-05 00:47:19.567 [INFO][3992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-728ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--728ws-eth0" Sep 5 00:47:19.591022 containerd[1565]: 2025-09-05 00:47:19.569 [INFO][3992] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-728ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--728ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76595746b9--728ws-eth0", 
GenerateName:"calico-apiserver-76595746b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"0bb9212a-7c18-4b3c-9f26-202f5352bc82", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76595746b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa", Pod:"calico-apiserver-76595746b9-728ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae51ae9fea5", MAC:"56:ae:a2:0e:4c:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:19.591074 containerd[1565]: 2025-09-05 00:47:19.582 [INFO][3992] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-728ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--728ws-eth0" Sep 5 00:47:19.609204 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:47:19.623275 containerd[1565]: time="2025-09-05T00:47:19.623212291Z" level=info msg="connecting to shim 
9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa" address="unix:///run/containerd/s/aeeb474fd92422000a1cba241ad5d38269c5b92084816a4503ea89818cbb008d" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:47:19.641696 containerd[1565]: time="2025-09-05T00:47:19.641226965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4th8,Uid:c9538fe6-7e0a-462e-bbc0-1898cf53d69a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece\"" Sep 5 00:47:19.655170 systemd[1]: Started cri-containerd-9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa.scope - libcontainer container 9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa. Sep 5 00:47:19.671673 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:47:19.703564 containerd[1565]: time="2025-09-05T00:47:19.703505823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76595746b9-728ws,Uid:0bb9212a-7c18-4b3c-9f26-202f5352bc82,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa\"" Sep 5 00:47:20.179205 kubelet[2685]: E0905 00:47:20.179174 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:20.179758 containerd[1565]: time="2025-09-05T00:47:20.179694851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-628rf,Uid:93e5e237-c20c-492d-bb77-960881bf88c6,Namespace:kube-system,Attempt:0,}" Sep 5 00:47:20.302237 systemd-networkd[1486]: calide3aaa5b7c9: Link UP Sep 5 00:47:20.303918 systemd-networkd[1486]: calide3aaa5b7c9: Gained carrier Sep 5 00:47:20.323902 containerd[1565]: 2025-09-05 00:47:20.222 [INFO][4323] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--628rf-eth0 coredns-7c65d6cfc9- kube-system 93e5e237-c20c-492d-bb77-960881bf88c6 809 0 2025-09-05 00:46:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-628rf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calide3aaa5b7c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-628rf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--628rf-" Sep 5 00:47:20.323902 containerd[1565]: 2025-09-05 00:47:20.222 [INFO][4323] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-628rf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--628rf-eth0" Sep 5 00:47:20.323902 containerd[1565]: 2025-09-05 00:47:20.251 [INFO][4337] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" HandleID="k8s-pod-network.f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" Workload="localhost-k8s-coredns--7c65d6cfc9--628rf-eth0" Sep 5 00:47:20.324387 containerd[1565]: 2025-09-05 00:47:20.252 [INFO][4337] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" HandleID="k8s-pod-network.f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" Workload="localhost-k8s-coredns--7c65d6cfc9--628rf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-628rf", 
"timestamp":"2025-09-05 00:47:20.251925241 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:47:20.324387 containerd[1565]: 2025-09-05 00:47:20.252 [INFO][4337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:47:20.324387 containerd[1565]: 2025-09-05 00:47:20.252 [INFO][4337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:47:20.324387 containerd[1565]: 2025-09-05 00:47:20.252 [INFO][4337] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:47:20.324387 containerd[1565]: 2025-09-05 00:47:20.259 [INFO][4337] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" host="localhost" Sep 5 00:47:20.324387 containerd[1565]: 2025-09-05 00:47:20.265 [INFO][4337] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:47:20.324387 containerd[1565]: 2025-09-05 00:47:20.271 [INFO][4337] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:47:20.324387 containerd[1565]: 2025-09-05 00:47:20.273 [INFO][4337] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:20.324387 containerd[1565]: 2025-09-05 00:47:20.276 [INFO][4337] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:20.324387 containerd[1565]: 2025-09-05 00:47:20.276 [INFO][4337] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" host="localhost" Sep 5 00:47:20.324767 containerd[1565]: 2025-09-05 00:47:20.279 [INFO][4337] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c Sep 5 00:47:20.324767 containerd[1565]: 2025-09-05 00:47:20.286 [INFO][4337] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" host="localhost" Sep 5 00:47:20.324767 containerd[1565]: 2025-09-05 00:47:20.293 [INFO][4337] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" host="localhost" Sep 5 00:47:20.324767 containerd[1565]: 2025-09-05 00:47:20.293 [INFO][4337] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" host="localhost" Sep 5 00:47:20.324767 containerd[1565]: 2025-09-05 00:47:20.293 [INFO][4337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:47:20.324767 containerd[1565]: 2025-09-05 00:47:20.293 [INFO][4337] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" HandleID="k8s-pod-network.f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" Workload="localhost-k8s-coredns--7c65d6cfc9--628rf-eth0" Sep 5 00:47:20.324951 containerd[1565]: 2025-09-05 00:47:20.298 [INFO][4323] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-628rf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--628rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--628rf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"93e5e237-c20c-492d-bb77-960881bf88c6", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-628rf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide3aaa5b7c9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:20.325064 containerd[1565]: 2025-09-05 00:47:20.298 [INFO][4323] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-628rf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--628rf-eth0" Sep 5 00:47:20.325064 containerd[1565]: 2025-09-05 00:47:20.298 [INFO][4323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide3aaa5b7c9 ContainerID="f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-628rf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--628rf-eth0" Sep 5 00:47:20.325064 containerd[1565]: 2025-09-05 00:47:20.303 [INFO][4323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-628rf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--628rf-eth0" Sep 5 00:47:20.325165 containerd[1565]: 2025-09-05 00:47:20.305 [INFO][4323] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-628rf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--628rf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--628rf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"93e5e237-c20c-492d-bb77-960881bf88c6", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c", Pod:"coredns-7c65d6cfc9-628rf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide3aaa5b7c9", MAC:"4e:db:cd:6a:b7:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:20.325165 containerd[1565]: 2025-09-05 00:47:20.319 [INFO][4323] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-628rf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--628rf-eth0" Sep 5 00:47:20.355946 containerd[1565]: time="2025-09-05T00:47:20.355857067Z" level=info msg="connecting to shim f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c" address="unix:///run/containerd/s/ed20bef699bf4a78be09c94af108aa9b55ac0746cc2adacf1691b67a7de0eb78" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:47:20.397010 systemd[1]: Started cri-containerd-f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c.scope - libcontainer container f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c. Sep 5 00:47:20.416960 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:47:20.461775 containerd[1565]: time="2025-09-05T00:47:20.461596596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-628rf,Uid:93e5e237-c20c-492d-bb77-960881bf88c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c\"" Sep 5 00:47:20.463404 kubelet[2685]: E0905 00:47:20.463320 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:20.465621 containerd[1565]: time="2025-09-05T00:47:20.465578087Z" level=info msg="CreateContainer within sandbox \"f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:47:20.480551 containerd[1565]: time="2025-09-05T00:47:20.480505228Z" level=info msg="Container 311554e3fb95445daaa5873b75c1d775c073cbe89b2203c183bf1d590dd08fd7: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:47:20.495351 containerd[1565]: time="2025-09-05T00:47:20.495260436Z" level=info 
msg="CreateContainer within sandbox \"f5490729b086f72b775085985e66c96562af699cc20ace2ba915d218cecabe1c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"311554e3fb95445daaa5873b75c1d775c073cbe89b2203c183bf1d590dd08fd7\"" Sep 5 00:47:20.495888 containerd[1565]: time="2025-09-05T00:47:20.495848349Z" level=info msg="StartContainer for \"311554e3fb95445daaa5873b75c1d775c073cbe89b2203c183bf1d590dd08fd7\"" Sep 5 00:47:20.496978 containerd[1565]: time="2025-09-05T00:47:20.496941040Z" level=info msg="connecting to shim 311554e3fb95445daaa5873b75c1d775c073cbe89b2203c183bf1d590dd08fd7" address="unix:///run/containerd/s/ed20bef699bf4a78be09c94af108aa9b55ac0746cc2adacf1691b67a7de0eb78" protocol=ttrpc version=3 Sep 5 00:47:20.527878 systemd[1]: Started cri-containerd-311554e3fb95445daaa5873b75c1d775c073cbe89b2203c183bf1d590dd08fd7.scope - libcontainer container 311554e3fb95445daaa5873b75c1d775c073cbe89b2203c183bf1d590dd08fd7. Sep 5 00:47:20.567324 containerd[1565]: time="2025-09-05T00:47:20.567233713Z" level=info msg="StartContainer for \"311554e3fb95445daaa5873b75c1d775c073cbe89b2203c183bf1d590dd08fd7\" returns successfully" Sep 5 00:47:20.606225 systemd-networkd[1486]: calic4036f5d864: Gained IPv6LL Sep 5 00:47:20.732870 systemd-networkd[1486]: calied39c1f9b0b: Gained IPv6LL Sep 5 00:47:20.796856 systemd-networkd[1486]: caliae51ae9fea5: Gained IPv6LL Sep 5 00:47:21.179808 kubelet[2685]: E0905 00:47:21.179751 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:21.180853 containerd[1565]: time="2025-09-05T00:47:21.180800620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-799779498d-kptz4,Uid:bb57882a-49b9-47c9-82b1-15ae93bc171c,Namespace:calico-system,Attempt:0,}" Sep 5 00:47:21.181282 containerd[1565]: time="2025-09-05T00:47:21.180967373Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-tqbx2,Uid:3643b61f-e08f-46ef-a337-d8bea75516a4,Namespace:kube-system,Attempt:0,}" Sep 5 00:47:21.181282 containerd[1565]: time="2025-09-05T00:47:21.181213635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6kn47,Uid:d48f1f52-ce3c-4e49-8a4d-b63e273da579,Namespace:calico-system,Attempt:0,}" Sep 5 00:47:21.181900 systemd-networkd[1486]: vxlan.calico: Gained IPv6LL Sep 5 00:47:21.307004 kubelet[2685]: E0905 00:47:21.306076 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:21.397102 kubelet[2685]: I0905 00:47:21.396965 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-628rf" podStartSLOduration=37.396811677 podStartE2EDuration="37.396811677s" podCreationTimestamp="2025-09-05 00:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:47:21.396004482 +0000 UTC m=+42.308767589" watchObservedRunningTime="2025-09-05 00:47:21.396811677 +0000 UTC m=+42.309574784" Sep 5 00:47:21.697479 containerd[1565]: time="2025-09-05T00:47:21.697397716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:21.715060 containerd[1565]: time="2025-09-05T00:47:21.714992903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 5 00:47:21.737504 containerd[1565]: time="2025-09-05T00:47:21.737413725Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:21.749480 containerd[1565]: time="2025-09-05T00:47:21.749399456Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:21.750132 containerd[1565]: time="2025-09-05T00:47:21.750076226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.214921626s" Sep 5 00:47:21.750132 containerd[1565]: time="2025-09-05T00:47:21.750121200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 5 00:47:21.751996 containerd[1565]: time="2025-09-05T00:47:21.751946505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 00:47:21.753748 containerd[1565]: time="2025-09-05T00:47:21.753142179Z" level=info msg="CreateContainer within sandbox \"a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:47:21.773964 systemd-networkd[1486]: cali2d959a5a3a4: Link UP Sep 5 00:47:21.778877 systemd-networkd[1486]: cali2d959a5a3a4: Gained carrier Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.306 [INFO][4446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0 calico-kube-controllers-799779498d- calico-system bb57882a-49b9-47c9-82b1-15ae93bc171c 823 0 2025-09-05 00:46:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:799779498d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-799779498d-kptz4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2d959a5a3a4 [] [] }} ContainerID="b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" Namespace="calico-system" Pod="calico-kube-controllers-799779498d-kptz4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799779498d--kptz4-" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.306 [INFO][4446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" Namespace="calico-system" Pod="calico-kube-controllers-799779498d-kptz4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.438 [INFO][4473] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" HandleID="k8s-pod-network.b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" Workload="localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.439 [INFO][4473] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" HandleID="k8s-pod-network.b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" Workload="localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-799779498d-kptz4", "timestamp":"2025-09-05 00:47:21.438580084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.439 [INFO][4473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.439 [INFO][4473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.439 [INFO][4473] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.491 [INFO][4473] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" host="localhost" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.641 [INFO][4473] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.669 [INFO][4473] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.672 [INFO][4473] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.676 [INFO][4473] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.676 [INFO][4473] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" host="localhost" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.680 [INFO][4473] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47 Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.697 [INFO][4473] ipam/ipam.go 
1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" host="localhost" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.756 [INFO][4473] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" host="localhost" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.756 [INFO][4473] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" host="localhost" Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.756 [INFO][4473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:47:21.808346 containerd[1565]: 2025-09-05 00:47:21.756 [INFO][4473] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" HandleID="k8s-pod-network.b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" Workload="localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0" Sep 5 00:47:21.809354 containerd[1565]: 2025-09-05 00:47:21.762 [INFO][4446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" Namespace="calico-system" Pod="calico-kube-controllers-799779498d-kptz4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0", GenerateName:"calico-kube-controllers-799779498d-", Namespace:"calico-system", SelfLink:"", UID:"bb57882a-49b9-47c9-82b1-15ae93bc171c", ResourceVersion:"823", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"799779498d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-799779498d-kptz4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2d959a5a3a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:21.809354 containerd[1565]: 2025-09-05 00:47:21.762 [INFO][4446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" Namespace="calico-system" Pod="calico-kube-controllers-799779498d-kptz4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0" Sep 5 00:47:21.809354 containerd[1565]: 2025-09-05 00:47:21.763 [INFO][4446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d959a5a3a4 ContainerID="b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" Namespace="calico-system" Pod="calico-kube-controllers-799779498d-kptz4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0" Sep 5 00:47:21.809354 containerd[1565]: 2025-09-05 00:47:21.780 [INFO][4446] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" Namespace="calico-system" Pod="calico-kube-controllers-799779498d-kptz4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0" Sep 5 00:47:21.809354 containerd[1565]: 2025-09-05 00:47:21.782 [INFO][4446] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" Namespace="calico-system" Pod="calico-kube-controllers-799779498d-kptz4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0", GenerateName:"calico-kube-controllers-799779498d-", Namespace:"calico-system", SelfLink:"", UID:"bb57882a-49b9-47c9-82b1-15ae93bc171c", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"799779498d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47", Pod:"calico-kube-controllers-799779498d-kptz4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2d959a5a3a4", MAC:"3a:5d:16:af:47:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:21.809354 containerd[1565]: 2025-09-05 00:47:21.802 [INFO][4446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" Namespace="calico-system" Pod="calico-kube-controllers-799779498d-kptz4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799779498d--kptz4-eth0" Sep 5 00:47:21.840809 systemd-networkd[1486]: calia90be60fd34: Link UP Sep 5 00:47:21.842191 systemd-networkd[1486]: calia90be60fd34: Gained carrier Sep 5 00:47:21.850808 containerd[1565]: time="2025-09-05T00:47:21.850737924Z" level=info msg="Container 143ed3831540696d7e61bc1c2119222f305706966f9dbbb4a68ee8c1d9bdf0b1: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:47:21.865080 containerd[1565]: time="2025-09-05T00:47:21.864981833Z" level=info msg="connecting to shim b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47" address="unix:///run/containerd/s/67efa57d0ed510d33832f1d35c09ab671002d70b4a271214c5ae48402c537aa0" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.604 [INFO][4459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0 coredns-7c65d6cfc9- kube-system 3643b61f-e08f-46ef-a337-d8bea75516a4 822 0 2025-09-05 00:46:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-tqbx2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
calia90be60fd34 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tqbx2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tqbx2-" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.604 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tqbx2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.693 [INFO][4498] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" HandleID="k8s-pod-network.9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" Workload="localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.693 [INFO][4498] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" HandleID="k8s-pod-network.9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" Workload="localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-tqbx2", "timestamp":"2025-09-05 00:47:21.693128896 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.693 [INFO][4498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.757 [INFO][4498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.757 [INFO][4498] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.771 [INFO][4498] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" host="localhost" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.782 [INFO][4498] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.787 [INFO][4498] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.790 [INFO][4498] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.803 [INFO][4498] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.803 [INFO][4498] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" host="localhost" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.806 [INFO][4498] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1 Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.818 [INFO][4498] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" host="localhost" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.827 [INFO][4498] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" host="localhost" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.827 [INFO][4498] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" host="localhost" Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.827 [INFO][4498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:47:21.868787 containerd[1565]: 2025-09-05 00:47:21.827 [INFO][4498] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" HandleID="k8s-pod-network.9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" Workload="localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0" Sep 5 00:47:21.869569 containerd[1565]: 2025-09-05 00:47:21.837 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tqbx2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3643b61f-e08f-46ef-a337-d8bea75516a4", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-tqbx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia90be60fd34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:21.869569 containerd[1565]: 2025-09-05 00:47:21.837 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tqbx2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0" Sep 5 00:47:21.869569 containerd[1565]: 2025-09-05 00:47:21.837 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia90be60fd34 ContainerID="9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tqbx2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0" Sep 5 00:47:21.869569 containerd[1565]: 2025-09-05 00:47:21.842 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-tqbx2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0" Sep 5 00:47:21.869569 containerd[1565]: 2025-09-05 00:47:21.843 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tqbx2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3643b61f-e08f-46ef-a337-d8bea75516a4", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1", Pod:"coredns-7c65d6cfc9-tqbx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia90be60fd34", MAC:"b2:fb:58:04:5a:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:21.869569 containerd[1565]: 2025-09-05 00:47:21.863 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tqbx2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tqbx2-eth0" Sep 5 00:47:21.874507 containerd[1565]: time="2025-09-05T00:47:21.874461912Z" level=info msg="CreateContainer within sandbox \"a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"143ed3831540696d7e61bc1c2119222f305706966f9dbbb4a68ee8c1d9bdf0b1\"" Sep 5 00:47:21.880992 containerd[1565]: time="2025-09-05T00:47:21.880902948Z" level=info msg="StartContainer for \"143ed3831540696d7e61bc1c2119222f305706966f9dbbb4a68ee8c1d9bdf0b1\"" Sep 5 00:47:21.883868 containerd[1565]: time="2025-09-05T00:47:21.883821424Z" level=info msg="connecting to shim 143ed3831540696d7e61bc1c2119222f305706966f9dbbb4a68ee8c1d9bdf0b1" address="unix:///run/containerd/s/194bef563ea507fed692892644314901a8eba10d4fb872cd6fe9faee337f019c" protocol=ttrpc version=3 Sep 5 00:47:21.916926 systemd[1]: Started cri-containerd-b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47.scope - libcontainer container b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47. 
Sep 5 00:47:21.919500 systemd-networkd[1486]: cali78aefc70245: Link UP Sep 5 00:47:21.920738 systemd-networkd[1486]: cali78aefc70245: Gained carrier Sep 5 00:47:21.922768 systemd[1]: Started cri-containerd-143ed3831540696d7e61bc1c2119222f305706966f9dbbb4a68ee8c1d9bdf0b1.scope - libcontainer container 143ed3831540696d7e61bc1c2119222f305706966f9dbbb4a68ee8c1d9bdf0b1. Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.684 [INFO][4485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--6kn47-eth0 goldmane-7988f88666- calico-system d48f1f52-ce3c-4e49-8a4d-b63e273da579 817 0 2025-09-05 00:46:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-6kn47 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali78aefc70245 [] [] }} ContainerID="7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" Namespace="calico-system" Pod="goldmane-7988f88666-6kn47" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6kn47-" Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.684 [INFO][4485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" Namespace="calico-system" Pod="goldmane-7988f88666-6kn47" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6kn47-eth0" Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.780 [INFO][4510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" HandleID="k8s-pod-network.7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" Workload="localhost-k8s-goldmane--7988f88666--6kn47-eth0" Sep 5 00:47:21.960471 
containerd[1565]: 2025-09-05 00:47:21.781 [INFO][4510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" HandleID="k8s-pod-network.7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" Workload="localhost-k8s-goldmane--7988f88666--6kn47-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-6kn47", "timestamp":"2025-09-05 00:47:21.780798495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.781 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.827 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.827 [INFO][4510] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.870 [INFO][4510] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" host="localhost" Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.886 [INFO][4510] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.893 [INFO][4510] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.896 [INFO][4510] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.899 [INFO][4510] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.899 [INFO][4510] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" host="localhost" Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.900 [INFO][4510] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065 Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.905 [INFO][4510] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" host="localhost" Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.912 [INFO][4510] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" host="localhost" Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.912 [INFO][4510] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" host="localhost" Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.913 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:47:21.960471 containerd[1565]: 2025-09-05 00:47:21.913 [INFO][4510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" HandleID="k8s-pod-network.7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" Workload="localhost-k8s-goldmane--7988f88666--6kn47-eth0" Sep 5 00:47:21.961159 containerd[1565]: 2025-09-05 00:47:21.916 [INFO][4485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" Namespace="calico-system" Pod="goldmane-7988f88666-6kn47" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6kn47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--6kn47-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d48f1f52-ce3c-4e49-8a4d-b63e273da579", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-6kn47", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali78aefc70245", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:21.961159 containerd[1565]: 2025-09-05 00:47:21.916 [INFO][4485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" Namespace="calico-system" Pod="goldmane-7988f88666-6kn47" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6kn47-eth0" Sep 5 00:47:21.961159 containerd[1565]: 2025-09-05 00:47:21.916 [INFO][4485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78aefc70245 ContainerID="7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" Namespace="calico-system" Pod="goldmane-7988f88666-6kn47" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6kn47-eth0" Sep 5 00:47:21.961159 containerd[1565]: 2025-09-05 00:47:21.924 [INFO][4485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" Namespace="calico-system" Pod="goldmane-7988f88666-6kn47" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6kn47-eth0" Sep 5 00:47:21.961159 containerd[1565]: 2025-09-05 00:47:21.928 [INFO][4485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" Namespace="calico-system" Pod="goldmane-7988f88666-6kn47" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6kn47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--6kn47-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d48f1f52-ce3c-4e49-8a4d-b63e273da579", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065", Pod:"goldmane-7988f88666-6kn47", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali78aefc70245", MAC:"c2:61:71:81:50:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:21.961159 containerd[1565]: 2025-09-05 00:47:21.942 [INFO][4485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" Namespace="calico-system" Pod="goldmane-7988f88666-6kn47" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6kn47-eth0" Sep 5 00:47:21.978725 containerd[1565]: time="2025-09-05T00:47:21.978479867Z" level=info msg="connecting to shim 
9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1" address="unix:///run/containerd/s/d1e302c9f43b86d0aeff4ae60dde7be2ec7429a09864c81b9da6bfd264e323bf" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:47:21.986062 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:47:22.006848 systemd[1]: Started cri-containerd-9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1.scope - libcontainer container 9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1. Sep 5 00:47:22.011674 containerd[1565]: time="2025-09-05T00:47:22.010451491Z" level=info msg="connecting to shim 7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065" address="unix:///run/containerd/s/2ecbc43381bba885a1dbb2116ddeb1c28294edf9a0b5e17caab383ad9c79d879" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:47:22.031349 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:47:22.046466 containerd[1565]: time="2025-09-05T00:47:22.046425103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-799779498d-kptz4,Uid:bb57882a-49b9-47c9-82b1-15ae93bc171c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47\"" Sep 5 00:47:22.050870 systemd[1]: Started cri-containerd-7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065.scope - libcontainer container 7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065. 
Sep 5 00:47:22.053810 containerd[1565]: time="2025-09-05T00:47:22.053750920Z" level=info msg="StartContainer for \"143ed3831540696d7e61bc1c2119222f305706966f9dbbb4a68ee8c1d9bdf0b1\" returns successfully" Sep 5 00:47:22.073733 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:47:22.076498 containerd[1565]: time="2025-09-05T00:47:22.076452518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tqbx2,Uid:3643b61f-e08f-46ef-a337-d8bea75516a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1\"" Sep 5 00:47:22.077033 systemd-networkd[1486]: calide3aaa5b7c9: Gained IPv6LL Sep 5 00:47:22.080181 kubelet[2685]: E0905 00:47:22.079904 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:22.082798 containerd[1565]: time="2025-09-05T00:47:22.082757649Z" level=info msg="CreateContainer within sandbox \"9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:47:22.099004 containerd[1565]: time="2025-09-05T00:47:22.098474912Z" level=info msg="Container e90c4ec9abe602c20bf59d5b6ebeb96ff87bc9db24abde71179902ba4b1ea1fb: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:47:22.108444 containerd[1565]: time="2025-09-05T00:47:22.108397011Z" level=info msg="CreateContainer within sandbox \"9a5cbeda773fe27683bad28f53b10911d3b50dc589852607d6aad95c254b71b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e90c4ec9abe602c20bf59d5b6ebeb96ff87bc9db24abde71179902ba4b1ea1fb\"" Sep 5 00:47:22.109277 containerd[1565]: time="2025-09-05T00:47:22.109253587Z" level=info msg="StartContainer for \"e90c4ec9abe602c20bf59d5b6ebeb96ff87bc9db24abde71179902ba4b1ea1fb\"" Sep 5 00:47:22.110559 
containerd[1565]: time="2025-09-05T00:47:22.110533008Z" level=info msg="connecting to shim e90c4ec9abe602c20bf59d5b6ebeb96ff87bc9db24abde71179902ba4b1ea1fb" address="unix:///run/containerd/s/d1e302c9f43b86d0aeff4ae60dde7be2ec7429a09864c81b9da6bfd264e323bf" protocol=ttrpc version=3 Sep 5 00:47:22.122196 containerd[1565]: time="2025-09-05T00:47:22.122036394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6kn47,Uid:d48f1f52-ce3c-4e49-8a4d-b63e273da579,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065\"" Sep 5 00:47:22.139865 systemd[1]: Started cri-containerd-e90c4ec9abe602c20bf59d5b6ebeb96ff87bc9db24abde71179902ba4b1ea1fb.scope - libcontainer container e90c4ec9abe602c20bf59d5b6ebeb96ff87bc9db24abde71179902ba4b1ea1fb. Sep 5 00:47:22.173878 containerd[1565]: time="2025-09-05T00:47:22.173821646Z" level=info msg="StartContainer for \"e90c4ec9abe602c20bf59d5b6ebeb96ff87bc9db24abde71179902ba4b1ea1fb\" returns successfully" Sep 5 00:47:22.180847 containerd[1565]: time="2025-09-05T00:47:22.180809809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76595746b9-qr7sn,Uid:5f689ecb-467b-4e47-a9fb-c61a24f6068d,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:47:22.315637 kubelet[2685]: E0905 00:47:22.315587 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:22.318528 kubelet[2685]: E0905 00:47:22.318478 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:22.322049 systemd-networkd[1486]: cali4e2593de4a9: Link UP Sep 5 00:47:22.323522 systemd-networkd[1486]: cali4e2593de4a9: Gained carrier Sep 5 00:47:22.352340 kubelet[2685]: I0905 00:47:22.352172 2685 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tqbx2" podStartSLOduration=38.352149138 podStartE2EDuration="38.352149138s" podCreationTimestamp="2025-09-05 00:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:47:22.344248122 +0000 UTC m=+43.257011229" watchObservedRunningTime="2025-09-05 00:47:22.352149138 +0000 UTC m=+43.264912245" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.226 [INFO][4753] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0 calico-apiserver-76595746b9- calico-apiserver 5f689ecb-467b-4e47-a9fb-c61a24f6068d 813 0 2025-09-05 00:46:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76595746b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76595746b9-qr7sn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e2593de4a9 [] [] }} ContainerID="0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-qr7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--qr7sn-" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.226 [INFO][4753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-qr7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.254 [INFO][4770] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" HandleID="k8s-pod-network.0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" Workload="localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.254 [INFO][4770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" HandleID="k8s-pod-network.0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" Workload="localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7940), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76595746b9-qr7sn", "timestamp":"2025-09-05 00:47:22.254338973 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.254 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.254 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.254 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.260 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" host="localhost" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.266 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.273 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.275 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.278 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.279 [INFO][4770] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" host="localhost" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.281 [INFO][4770] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31 Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.298 [INFO][4770] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" host="localhost" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.312 [INFO][4770] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" host="localhost" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.312 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" host="localhost" Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.312 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:47:22.363803 containerd[1565]: 2025-09-05 00:47:22.312 [INFO][4770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" HandleID="k8s-pod-network.0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" Workload="localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0" Sep 5 00:47:22.365196 containerd[1565]: 2025-09-05 00:47:22.318 [INFO][4753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-qr7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0", GenerateName:"calico-apiserver-76595746b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f689ecb-467b-4e47-a9fb-c61a24f6068d", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76595746b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76595746b9-qr7sn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e2593de4a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:22.365196 containerd[1565]: 2025-09-05 00:47:22.318 [INFO][4753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-qr7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0" Sep 5 00:47:22.365196 containerd[1565]: 2025-09-05 00:47:22.318 [INFO][4753] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e2593de4a9 ContainerID="0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-qr7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0" Sep 5 00:47:22.365196 containerd[1565]: 2025-09-05 00:47:22.324 [INFO][4753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-qr7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0" Sep 5 00:47:22.365196 containerd[1565]: 2025-09-05 00:47:22.324 [INFO][4753] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-qr7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0", GenerateName:"calico-apiserver-76595746b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f689ecb-467b-4e47-a9fb-c61a24f6068d", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76595746b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31", Pod:"calico-apiserver-76595746b9-qr7sn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e2593de4a9", MAC:"92:54:78:88:a6:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:47:22.365196 containerd[1565]: 2025-09-05 00:47:22.353 [INFO][4753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" Namespace="calico-apiserver" Pod="calico-apiserver-76595746b9-qr7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--76595746b9--qr7sn-eth0" Sep 5 00:47:22.427673 containerd[1565]: time="2025-09-05T00:47:22.427354957Z" level=info msg="connecting to shim 0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31" address="unix:///run/containerd/s/dbd40139ad5ea1ba219a01b76bb158336756c99ef7adb4f02335c0d04a1267e0" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:47:22.462853 systemd[1]: Started cri-containerd-0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31.scope - libcontainer container 0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31. Sep 5 00:47:22.478620 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:47:22.520761 containerd[1565]: time="2025-09-05T00:47:22.520715564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76595746b9-qr7sn,Uid:5f689ecb-467b-4e47-a9fb-c61a24f6068d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31\"" Sep 5 00:47:22.908828 systemd-networkd[1486]: cali2d959a5a3a4: Gained IPv6LL Sep 5 00:47:23.321591 kubelet[2685]: E0905 00:47:23.321548 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:23.322081 kubelet[2685]: E0905 00:47:23.321855 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:23.420886 systemd-networkd[1486]: cali4e2593de4a9: Gained IPv6LL Sep 5 00:47:23.484877 systemd-networkd[1486]: calia90be60fd34: Gained IPv6LL Sep 5 00:47:23.548893 systemd-networkd[1486]: 
cali78aefc70245: Gained IPv6LL Sep 5 00:47:23.690508 containerd[1565]: time="2025-09-05T00:47:23.690041427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:23.691642 containerd[1565]: time="2025-09-05T00:47:23.691041704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 5 00:47:23.692540 containerd[1565]: time="2025-09-05T00:47:23.692487137Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:23.694666 containerd[1565]: time="2025-09-05T00:47:23.694612104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:23.695249 containerd[1565]: time="2025-09-05T00:47:23.695213051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.94308258s" Sep 5 00:47:23.695249 containerd[1565]: time="2025-09-05T00:47:23.695242948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 5 00:47:23.696394 containerd[1565]: time="2025-09-05T00:47:23.696346899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:47:23.698341 containerd[1565]: time="2025-09-05T00:47:23.698307688Z" level=info msg="CreateContainer within sandbox \"a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece\" for 
container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 00:47:23.712978 containerd[1565]: time="2025-09-05T00:47:23.712919867Z" level=info msg="Container f3483d3b6bcdfd7d4f7e1f8d720a3dff70bf157996939179a5ba9b8fb0709f9e: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:47:23.721214 systemd[1]: Started sshd@8-10.0.0.4:22-10.0.0.1:55586.service - OpenSSH per-connection server daemon (10.0.0.1:55586). Sep 5 00:47:23.727522 containerd[1565]: time="2025-09-05T00:47:23.727481241Z" level=info msg="CreateContainer within sandbox \"a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f3483d3b6bcdfd7d4f7e1f8d720a3dff70bf157996939179a5ba9b8fb0709f9e\"" Sep 5 00:47:23.729612 containerd[1565]: time="2025-09-05T00:47:23.729575891Z" level=info msg="StartContainer for \"f3483d3b6bcdfd7d4f7e1f8d720a3dff70bf157996939179a5ba9b8fb0709f9e\"" Sep 5 00:47:23.731163 containerd[1565]: time="2025-09-05T00:47:23.731101253Z" level=info msg="connecting to shim f3483d3b6bcdfd7d4f7e1f8d720a3dff70bf157996939179a5ba9b8fb0709f9e" address="unix:///run/containerd/s/89e94215ddadcef08d93a239366c715b024de7d902b7487d1544bd24c28707b3" protocol=ttrpc version=3 Sep 5 00:47:23.762936 systemd[1]: Started cri-containerd-f3483d3b6bcdfd7d4f7e1f8d720a3dff70bf157996939179a5ba9b8fb0709f9e.scope - libcontainer container f3483d3b6bcdfd7d4f7e1f8d720a3dff70bf157996939179a5ba9b8fb0709f9e. Sep 5 00:47:23.781602 sshd[4842]: Accepted publickey for core from 10.0.0.1 port 55586 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:23.783533 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:23.788166 systemd-logind[1539]: New session 9 of user core. Sep 5 00:47:23.798883 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 5 00:47:23.822409 containerd[1565]: time="2025-09-05T00:47:23.822357117Z" level=info msg="StartContainer for \"f3483d3b6bcdfd7d4f7e1f8d720a3dff70bf157996939179a5ba9b8fb0709f9e\" returns successfully" Sep 5 00:47:23.931576 sshd[4863]: Connection closed by 10.0.0.1 port 55586 Sep 5 00:47:23.931911 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:23.936409 systemd[1]: sshd@8-10.0.0.4:22-10.0.0.1:55586.service: Deactivated successfully. Sep 5 00:47:23.938557 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:47:23.940798 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:47:23.942503 systemd-logind[1539]: Removed session 9. Sep 5 00:47:24.325741 kubelet[2685]: E0905 00:47:24.325709 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:26.762313 containerd[1565]: time="2025-09-05T00:47:26.762257287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:26.763069 containerd[1565]: time="2025-09-05T00:47:26.763030008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 5 00:47:26.764400 containerd[1565]: time="2025-09-05T00:47:26.764356375Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:26.766435 containerd[1565]: time="2025-09-05T00:47:26.766388780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:26.766973 containerd[1565]: time="2025-09-05T00:47:26.766944702Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.070565063s" Sep 5 00:47:26.766973 containerd[1565]: time="2025-09-05T00:47:26.766971142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:47:26.768780 containerd[1565]: time="2025-09-05T00:47:26.768424760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 00:47:26.769257 containerd[1565]: time="2025-09-05T00:47:26.769228418Z" level=info msg="CreateContainer within sandbox \"9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:47:26.777480 containerd[1565]: time="2025-09-05T00:47:26.777430568Z" level=info msg="Container a93219198517506ff57d11e4ef0662d4e7bc3247024f11de33ed521793c8298e: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:47:26.787707 containerd[1565]: time="2025-09-05T00:47:26.787664631Z" level=info msg="CreateContainer within sandbox \"9da55f6c9aa16fde019ce319bcc8bcc4745cc8b852fb2bcbe2b44189ac33e1aa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a93219198517506ff57d11e4ef0662d4e7bc3247024f11de33ed521793c8298e\"" Sep 5 00:47:26.788201 containerd[1565]: time="2025-09-05T00:47:26.788154149Z" level=info msg="StartContainer for \"a93219198517506ff57d11e4ef0662d4e7bc3247024f11de33ed521793c8298e\"" Sep 5 00:47:26.789204 containerd[1565]: time="2025-09-05T00:47:26.789177139Z" level=info msg="connecting to shim a93219198517506ff57d11e4ef0662d4e7bc3247024f11de33ed521793c8298e" 
address="unix:///run/containerd/s/aeeb474fd92422000a1cba241ad5d38269c5b92084816a4503ea89818cbb008d" protocol=ttrpc version=3 Sep 5 00:47:26.812825 systemd[1]: Started cri-containerd-a93219198517506ff57d11e4ef0662d4e7bc3247024f11de33ed521793c8298e.scope - libcontainer container a93219198517506ff57d11e4ef0662d4e7bc3247024f11de33ed521793c8298e. Sep 5 00:47:26.870375 containerd[1565]: time="2025-09-05T00:47:26.870329251Z" level=info msg="StartContainer for \"a93219198517506ff57d11e4ef0662d4e7bc3247024f11de33ed521793c8298e\" returns successfully" Sep 5 00:47:26.903316 containerd[1565]: time="2025-09-05T00:47:26.903276523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb3b3456d0d880c7dddb573f5553cf119376b7cadec276e0b75c750c7137640b\" id:\"c9bce932e7674978394630e93458e662d89f7f5c5ab990d916f9b3dab794cdc7\" pid:4932 exited_at:{seconds:1757033246 nanos:902930874}" Sep 5 00:47:27.350685 kubelet[2685]: I0905 00:47:27.350154 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76595746b9-728ws" podStartSLOduration=28.286870069 podStartE2EDuration="35.350111569s" podCreationTimestamp="2025-09-05 00:46:52 +0000 UTC" firstStartedPulling="2025-09-05 00:47:19.704468359 +0000 UTC m=+40.617231466" lastFinishedPulling="2025-09-05 00:47:26.767709859 +0000 UTC m=+47.680472966" observedRunningTime="2025-09-05 00:47:27.3495529 +0000 UTC m=+48.262316027" watchObservedRunningTime="2025-09-05 00:47:27.350111569 +0000 UTC m=+48.262874676" Sep 5 00:47:28.336562 kubelet[2685]: I0905 00:47:28.336513 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:47:28.946110 systemd[1]: Started sshd@9-10.0.0.4:22-10.0.0.1:55596.service - OpenSSH per-connection server daemon (10.0.0.1:55596). 
Sep 5 00:47:29.014311 sshd[4966]: Accepted publickey for core from 10.0.0.1 port 55596 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:29.016286 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:29.023988 systemd-logind[1539]: New session 10 of user core. Sep 5 00:47:29.031890 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:47:29.167842 sshd[4968]: Connection closed by 10.0.0.1 port 55596 Sep 5 00:47:29.168157 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:29.179879 systemd[1]: sshd@9-10.0.0.4:22-10.0.0.1:55596.service: Deactivated successfully. Sep 5 00:47:29.182318 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:47:29.183357 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:47:29.187921 systemd[1]: Started sshd@10-10.0.0.4:22-10.0.0.1:55608.service - OpenSSH per-connection server daemon (10.0.0.1:55608). Sep 5 00:47:29.188898 systemd-logind[1539]: Removed session 10. Sep 5 00:47:29.246353 sshd[4985]: Accepted publickey for core from 10.0.0.1 port 55608 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:29.248089 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:29.254337 systemd-logind[1539]: New session 11 of user core. Sep 5 00:47:29.263901 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:47:29.458783 sshd[4987]: Connection closed by 10.0.0.1 port 55608 Sep 5 00:47:29.460313 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:29.476856 systemd[1]: sshd@10-10.0.0.4:22-10.0.0.1:55608.service: Deactivated successfully. Sep 5 00:47:29.482423 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:47:29.487524 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit. 
Sep 5 00:47:29.495484 systemd[1]: Started sshd@11-10.0.0.4:22-10.0.0.1:55612.service - OpenSSH per-connection server daemon (10.0.0.1:55612). Sep 5 00:47:29.498212 systemd-logind[1539]: Removed session 11. Sep 5 00:47:29.552033 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 55612 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:29.554631 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:29.561100 systemd-logind[1539]: New session 12 of user core. Sep 5 00:47:29.575943 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:47:29.993661 sshd[5003]: Connection closed by 10.0.0.1 port 55612 Sep 5 00:47:29.994890 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:30.000161 systemd[1]: sshd@11-10.0.0.4:22-10.0.0.1:55612.service: Deactivated successfully. Sep 5 00:47:30.002259 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:47:30.003290 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:47:30.004590 systemd-logind[1539]: Removed session 12. 
Sep 5 00:47:30.644271 containerd[1565]: time="2025-09-05T00:47:30.644193369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:30.645426 containerd[1565]: time="2025-09-05T00:47:30.645348687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 5 00:47:30.647211 containerd[1565]: time="2025-09-05T00:47:30.647140840Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:30.649979 containerd[1565]: time="2025-09-05T00:47:30.649932708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:30.650721 containerd[1565]: time="2025-09-05T00:47:30.650683006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.882193204s" Sep 5 00:47:30.650805 containerd[1565]: time="2025-09-05T00:47:30.650722169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 5 00:47:30.652229 containerd[1565]: time="2025-09-05T00:47:30.651557707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 00:47:30.662032 containerd[1565]: time="2025-09-05T00:47:30.661958452Z" level=info msg="CreateContainer within sandbox \"b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 00:47:30.674856 containerd[1565]: time="2025-09-05T00:47:30.674789297Z" level=info msg="Container 354a05678bb7985e1abe38917b04c46c99aec3e40cc448085f326db9d32852d2: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:47:30.685235 containerd[1565]: time="2025-09-05T00:47:30.685162771Z" level=info msg="CreateContainer within sandbox \"b06e7093c5f9d0e968d1a9b9c00f36146a4a35d29c2f4af99d61a0dc2cd47c47\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"354a05678bb7985e1abe38917b04c46c99aec3e40cc448085f326db9d32852d2\"" Sep 5 00:47:30.685829 containerd[1565]: time="2025-09-05T00:47:30.685791000Z" level=info msg="StartContainer for \"354a05678bb7985e1abe38917b04c46c99aec3e40cc448085f326db9d32852d2\"" Sep 5 00:47:30.687545 containerd[1565]: time="2025-09-05T00:47:30.687480130Z" level=info msg="connecting to shim 354a05678bb7985e1abe38917b04c46c99aec3e40cc448085f326db9d32852d2" address="unix:///run/containerd/s/67efa57d0ed510d33832f1d35c09ab671002d70b4a271214c5ae48402c537aa0" protocol=ttrpc version=3 Sep 5 00:47:30.755966 systemd[1]: Started cri-containerd-354a05678bb7985e1abe38917b04c46c99aec3e40cc448085f326db9d32852d2.scope - libcontainer container 354a05678bb7985e1abe38917b04c46c99aec3e40cc448085f326db9d32852d2.
Sep 5 00:47:30.806748 containerd[1565]: time="2025-09-05T00:47:30.806697971Z" level=info msg="StartContainer for \"354a05678bb7985e1abe38917b04c46c99aec3e40cc448085f326db9d32852d2\" returns successfully" Sep 5 00:47:31.525031 kubelet[2685]: I0905 00:47:31.524850 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-799779498d-kptz4" podStartSLOduration=26.921853847 podStartE2EDuration="35.524831748s" podCreationTimestamp="2025-09-05 00:46:56 +0000 UTC" firstStartedPulling="2025-09-05 00:47:22.048468738 +0000 UTC m=+42.961231845" lastFinishedPulling="2025-09-05 00:47:30.651446609 +0000 UTC m=+51.564209746" observedRunningTime="2025-09-05 00:47:31.52439013 +0000 UTC m=+52.437153247" watchObservedRunningTime="2025-09-05 00:47:31.524831748 +0000 UTC m=+52.437594855" Sep 5 00:47:32.347594 kubelet[2685]: I0905 00:47:32.347553 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:47:33.204490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2357849151.mount: Deactivated successfully. 
Sep 5 00:47:34.316845 containerd[1565]: time="2025-09-05T00:47:34.316764761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:34.317932 containerd[1565]: time="2025-09-05T00:47:34.317873001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 5 00:47:34.319250 containerd[1565]: time="2025-09-05T00:47:34.319179092Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:34.321548 containerd[1565]: time="2025-09-05T00:47:34.321479478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:34.322312 containerd[1565]: time="2025-09-05T00:47:34.322262376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.67066778s" Sep 5 00:47:34.322368 containerd[1565]: time="2025-09-05T00:47:34.322311468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 5 00:47:34.323723 containerd[1565]: time="2025-09-05T00:47:34.323696136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 00:47:34.332772 containerd[1565]: time="2025-09-05T00:47:34.332723694Z" level=info msg="CreateContainer within sandbox \"a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 00:47:34.343086 containerd[1565]: time="2025-09-05T00:47:34.343011958Z" level=info msg="Container 7ba2045c7c7ce064ffdf9dade74a6e3f17d1b3b29ace1de76a031103f70f481b: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:47:34.353121 containerd[1565]: time="2025-09-05T00:47:34.353056355Z" level=info msg="CreateContainer within sandbox \"a8705ccfa9a0423817938e31e07674940799f3ff726ca5078047610ab52e5e8e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7ba2045c7c7ce064ffdf9dade74a6e3f17d1b3b29ace1de76a031103f70f481b\"" Sep 5 00:47:34.353506 containerd[1565]: time="2025-09-05T00:47:34.353482825Z" level=info msg="StartContainer for \"7ba2045c7c7ce064ffdf9dade74a6e3f17d1b3b29ace1de76a031103f70f481b\"" Sep 5 00:47:34.354506 containerd[1565]: time="2025-09-05T00:47:34.354472001Z" level=info msg="connecting to shim 7ba2045c7c7ce064ffdf9dade74a6e3f17d1b3b29ace1de76a031103f70f481b" address="unix:///run/containerd/s/194bef563ea507fed692892644314901a8eba10d4fb872cd6fe9faee337f019c" protocol=ttrpc version=3 Sep 5 00:47:34.391990 systemd[1]: Started cri-containerd-7ba2045c7c7ce064ffdf9dade74a6e3f17d1b3b29ace1de76a031103f70f481b.scope - libcontainer container 7ba2045c7c7ce064ffdf9dade74a6e3f17d1b3b29ace1de76a031103f70f481b. Sep 5 00:47:34.455686 containerd[1565]: time="2025-09-05T00:47:34.455614574Z" level=info msg="StartContainer for \"7ba2045c7c7ce064ffdf9dade74a6e3f17d1b3b29ace1de76a031103f70f481b\" returns successfully" Sep 5 00:47:35.014838 systemd[1]: Started sshd@12-10.0.0.4:22-10.0.0.1:48628.service - OpenSSH per-connection server daemon (10.0.0.1:48628).
Sep 5 00:47:35.079024 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 48628 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:35.080682 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:35.084697 systemd-logind[1539]: New session 13 of user core. Sep 5 00:47:35.089981 kubelet[2685]: I0905 00:47:35.089957 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:47:35.091777 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:47:35.123581 kubelet[2685]: I0905 00:47:35.123206 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:47:35.143823 containerd[1565]: time="2025-09-05T00:47:35.143788322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"354a05678bb7985e1abe38917b04c46c99aec3e40cc448085f326db9d32852d2\" id:\"a2cc7ca9244494ebe6a261a90853f07d0384b99748aad49bb34a0ad3903e1f9f\" pid:5133 exited_at:{seconds:1757033255 nanos:143396016}" Sep 5 00:47:35.303131 containerd[1565]: time="2025-09-05T00:47:35.303004395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"354a05678bb7985e1abe38917b04c46c99aec3e40cc448085f326db9d32852d2\" id:\"b3bfcb2ab2a8ed7f62e2adab8793fc06a2e83e08782c0b740fca875db30f06da\" pid:5167 exited_at:{seconds:1757033255 nanos:302690827}" Sep 5 00:47:35.330443 sshd[5119]: Connection closed by 10.0.0.1 port 48628 Sep 5 00:47:35.330778 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:35.335212 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:47:35.336236 systemd[1]: sshd@12-10.0.0.4:22-10.0.0.1:48628.service: Deactivated successfully. Sep 5 00:47:35.340095 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:47:35.342679 systemd-logind[1539]: Removed session 13. 
Sep 5 00:47:38.391356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701011392.mount: Deactivated successfully. Sep 5 00:47:39.282724 containerd[1565]: time="2025-09-05T00:47:39.282677067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:39.283481 containerd[1565]: time="2025-09-05T00:47:39.283453584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 5 00:47:39.284699 containerd[1565]: time="2025-09-05T00:47:39.284643216Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:39.286599 containerd[1565]: time="2025-09-05T00:47:39.286571814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:39.287241 containerd[1565]: time="2025-09-05T00:47:39.287203419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.963475243s" Sep 5 00:47:39.287241 containerd[1565]: time="2025-09-05T00:47:39.287236411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 5 00:47:39.288446 containerd[1565]: time="2025-09-05T00:47:39.288189760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:47:39.289443 containerd[1565]: time="2025-09-05T00:47:39.289406173Z" level=info msg="CreateContainer within sandbox \"7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 00:47:39.297856 containerd[1565]: time="2025-09-05T00:47:39.297802937Z" level=info msg="Container 6e68683d2cc179c22b76ead75e228ff5a77415b892523591152bbf790f83c94c: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:47:39.306663 containerd[1565]: time="2025-09-05T00:47:39.306619930Z" level=info msg="CreateContainer within sandbox \"7d7da7bd6f0ea9341baf73dcf3af8d780726c7bf1aa0fc8744d07925b3518065\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6e68683d2cc179c22b76ead75e228ff5a77415b892523591152bbf790f83c94c\"" Sep 5 00:47:39.307137 containerd[1565]: time="2025-09-05T00:47:39.307112744Z" level=info msg="StartContainer for \"6e68683d2cc179c22b76ead75e228ff5a77415b892523591152bbf790f83c94c\"" Sep 5 00:47:39.308216 containerd[1565]: time="2025-09-05T00:47:39.308183183Z" level=info msg="connecting to shim 6e68683d2cc179c22b76ead75e228ff5a77415b892523591152bbf790f83c94c" address="unix:///run/containerd/s/2ecbc43381bba885a1dbb2116ddeb1c28294edf9a0b5e17caab383ad9c79d879" protocol=ttrpc version=3 Sep 5 00:47:39.333781 systemd[1]: Started cri-containerd-6e68683d2cc179c22b76ead75e228ff5a77415b892523591152bbf790f83c94c.scope - libcontainer container 6e68683d2cc179c22b76ead75e228ff5a77415b892523591152bbf790f83c94c.
Sep 5 00:47:39.384312 containerd[1565]: time="2025-09-05T00:47:39.384255152Z" level=info msg="StartContainer for \"6e68683d2cc179c22b76ead75e228ff5a77415b892523591152bbf790f83c94c\" returns successfully" Sep 5 00:47:39.670276 containerd[1565]: time="2025-09-05T00:47:39.670160230Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:39.671002 containerd[1565]: time="2025-09-05T00:47:39.670966944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 00:47:39.672511 containerd[1565]: time="2025-09-05T00:47:39.672468943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 384.247273ms" Sep 5 00:47:39.672511 containerd[1565]: time="2025-09-05T00:47:39.672498218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:47:39.673260 containerd[1565]: time="2025-09-05T00:47:39.673226283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 00:47:39.675205 containerd[1565]: time="2025-09-05T00:47:39.675150974Z" level=info msg="CreateContainer within sandbox \"0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:47:39.684714 containerd[1565]: time="2025-09-05T00:47:39.684681286Z" level=info msg="Container 4c34d89a5b4b5c01e1c1418f8b40e2df0772fb1c3a4ed257ec09270378a04df0: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:47:39.692074 containerd[1565]: time="2025-09-05T00:47:39.692045853Z" level=info msg="CreateContainer within sandbox \"0a3dfedca26958144c7d1024e81ca018c4742da765aba90d4e9fba092132bd31\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4c34d89a5b4b5c01e1c1418f8b40e2df0772fb1c3a4ed257ec09270378a04df0\"" Sep 5 00:47:39.692425 containerd[1565]: time="2025-09-05T00:47:39.692398585Z" level=info msg="StartContainer for \"4c34d89a5b4b5c01e1c1418f8b40e2df0772fb1c3a4ed257ec09270378a04df0\"" Sep 5 00:47:39.693241 containerd[1565]: time="2025-09-05T00:47:39.693215358Z" level=info msg="connecting to shim 4c34d89a5b4b5c01e1c1418f8b40e2df0772fb1c3a4ed257ec09270378a04df0" address="unix:///run/containerd/s/dbd40139ad5ea1ba219a01b76bb158336756c99ef7adb4f02335c0d04a1267e0" protocol=ttrpc version=3 Sep 5 00:47:39.715795 systemd[1]: Started cri-containerd-4c34d89a5b4b5c01e1c1418f8b40e2df0772fb1c3a4ed257ec09270378a04df0.scope - libcontainer container 4c34d89a5b4b5c01e1c1418f8b40e2df0772fb1c3a4ed257ec09270378a04df0. Sep 5 00:47:39.763528 containerd[1565]: time="2025-09-05T00:47:39.763492495Z" level=info msg="StartContainer for \"4c34d89a5b4b5c01e1c1418f8b40e2df0772fb1c3a4ed257ec09270378a04df0\" returns successfully" Sep 5 00:47:40.345413 systemd[1]: Started sshd@13-10.0.0.4:22-10.0.0.1:39300.service - OpenSSH per-connection server daemon (10.0.0.1:39300).
Sep 5 00:47:40.387917 kubelet[2685]: I0905 00:47:40.387786 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f5655f584-l5wfs" podStartSLOduration=7.598729088 podStartE2EDuration="22.387769601s" podCreationTimestamp="2025-09-05 00:47:18 +0000 UTC" firstStartedPulling="2025-09-05 00:47:19.534428527 +0000 UTC m=+40.447191634" lastFinishedPulling="2025-09-05 00:47:34.32346902 +0000 UTC m=+55.236232147" observedRunningTime="2025-09-05 00:47:35.374909259 +0000 UTC m=+56.287672366" watchObservedRunningTime="2025-09-05 00:47:40.387769601 +0000 UTC m=+61.300532708" Sep 5 00:47:40.390989 kubelet[2685]: I0905 00:47:40.388198 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76595746b9-qr7sn" podStartSLOduration=31.238383815 podStartE2EDuration="48.388190781s" podCreationTimestamp="2025-09-05 00:46:52 +0000 UTC" firstStartedPulling="2025-09-05 00:47:22.523270037 +0000 UTC m=+43.436033144" lastFinishedPulling="2025-09-05 00:47:39.673077003 +0000 UTC m=+60.585840110" observedRunningTime="2025-09-05 00:47:40.386358863 +0000 UTC m=+61.299121971" watchObservedRunningTime="2025-09-05 00:47:40.388190781 +0000 UTC m=+61.300953888" Sep 5 00:47:40.423317 kubelet[2685]: I0905 00:47:40.423236 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-6kn47" podStartSLOduration=28.258544071 podStartE2EDuration="45.423216869s" podCreationTimestamp="2025-09-05 00:46:55 +0000 UTC" firstStartedPulling="2025-09-05 00:47:22.123349779 +0000 UTC m=+43.036112886" lastFinishedPulling="2025-09-05 00:47:39.288022577 +0000 UTC m=+60.200785684" observedRunningTime="2025-09-05 00:47:40.418177613 +0000 UTC m=+61.330940730" watchObservedRunningTime="2025-09-05 00:47:40.423216869 +0000 UTC m=+61.335979976" Sep 5 00:47:40.445562 sshd[5277]: Accepted publickey for core from 10.0.0.1 port 39300 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:40.447299 sshd-session[5277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:40.453205 systemd-logind[1539]: New session 14 of user core. Sep 5 00:47:40.459908 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:47:40.503092 containerd[1565]: time="2025-09-05T00:47:40.503021892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e68683d2cc179c22b76ead75e228ff5a77415b892523591152bbf790f83c94c\" id:\"ceea9e851cfcd36780f860a04470bda724b4c2f8fae384374f8c08169fda02eb\" pid:5296 exit_status:1 exited_at:{seconds:1757033260 nanos:502108989}" Sep 5 00:47:40.632359 sshd[5308]: Connection closed by 10.0.0.1 port 39300 Sep 5 00:47:40.633903 sshd-session[5277]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:40.640014 systemd[1]: sshd@13-10.0.0.4:22-10.0.0.1:39300.service: Deactivated successfully. Sep 5 00:47:40.642638 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:47:40.644801 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:47:40.646420 systemd-logind[1539]: Removed session 14.
Sep 5 00:47:41.384976 containerd[1565]: time="2025-09-05T00:47:41.384923950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:41.386047 containerd[1565]: time="2025-09-05T00:47:41.386008836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 5 00:47:41.388131 containerd[1565]: time="2025-09-05T00:47:41.387285811Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:41.389441 containerd[1565]: time="2025-09-05T00:47:41.389414166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:47:41.389961 containerd[1565]: time="2025-09-05T00:47:41.389839343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.716580509s" Sep 5 00:47:41.390028 containerd[1565]: time="2025-09-05T00:47:41.390015383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 5 00:47:41.392866 containerd[1565]: time="2025-09-05T00:47:41.392840684Z" level=info msg="CreateContainer within sandbox \"a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 00:47:41.404871 containerd[1565]: time="2025-09-05T00:47:41.404834638Z" level=info msg="Container 6fdc8f90040fe555966f29a30a59e9976000aea7830579e28240f27b9c33ae94: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:47:41.414759 containerd[1565]: time="2025-09-05T00:47:41.414726347Z" level=info msg="CreateContainer within sandbox \"a7adf3cf7566cbd2aa975e095c70ec00a5388f0e3686f1a73c4733af53ef6ece\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6fdc8f90040fe555966f29a30a59e9976000aea7830579e28240f27b9c33ae94\"" Sep 5 00:47:41.415443 containerd[1565]: time="2025-09-05T00:47:41.415421831Z" level=info msg="StartContainer for \"6fdc8f90040fe555966f29a30a59e9976000aea7830579e28240f27b9c33ae94\"" Sep 5 00:47:41.417280 containerd[1565]: time="2025-09-05T00:47:41.417249631Z" level=info msg="connecting to shim 6fdc8f90040fe555966f29a30a59e9976000aea7830579e28240f27b9c33ae94" address="unix:///run/containerd/s/89e94215ddadcef08d93a239366c715b024de7d902b7487d1544bd24c28707b3" protocol=ttrpc version=3 Sep 5 00:47:41.446859 systemd[1]: Started cri-containerd-6fdc8f90040fe555966f29a30a59e9976000aea7830579e28240f27b9c33ae94.scope - libcontainer container 6fdc8f90040fe555966f29a30a59e9976000aea7830579e28240f27b9c33ae94.
Sep 5 00:47:41.476042 containerd[1565]: time="2025-09-05T00:47:41.475803763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e68683d2cc179c22b76ead75e228ff5a77415b892523591152bbf790f83c94c\" id:\"b81e3e58f98f2cd9c2adc9d46d30a4d1986143f5bcef243ae2239d35faef9cc1\" pid:5341 exit_status:1 exited_at:{seconds:1757033261 nanos:474076642}" Sep 5 00:47:41.495532 containerd[1565]: time="2025-09-05T00:47:41.495493104Z" level=info msg="StartContainer for \"6fdc8f90040fe555966f29a30a59e9976000aea7830579e28240f27b9c33ae94\" returns successfully" Sep 5 00:47:42.239024 kubelet[2685]: I0905 00:47:42.238970 2685 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 00:47:42.239024 kubelet[2685]: I0905 00:47:42.239002 2685 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 00:47:42.395360 kubelet[2685]: I0905 00:47:42.395289 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h4th8" podStartSLOduration=24.647773902 podStartE2EDuration="46.39527299s" podCreationTimestamp="2025-09-05 00:46:56 +0000 UTC" firstStartedPulling="2025-09-05 00:47:19.643635515 +0000 UTC m=+40.556398622" lastFinishedPulling="2025-09-05 00:47:41.391134603 +0000 UTC m=+62.303897710" observedRunningTime="2025-09-05 00:47:42.394923264 +0000 UTC m=+63.307686381" watchObservedRunningTime="2025-09-05 00:47:42.39527299 +0000 UTC m=+63.308036097" Sep 5 00:47:45.656937 systemd[1]: Started sshd@14-10.0.0.4:22-10.0.0.1:39316.service - OpenSSH per-connection server daemon (10.0.0.1:39316). 
Sep 5 00:47:45.706727 sshd[5392]: Accepted publickey for core from 10.0.0.1 port 39316 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:45.708023 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:45.712839 systemd-logind[1539]: New session 15 of user core. Sep 5 00:47:45.725822 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:47:45.861344 sshd[5394]: Connection closed by 10.0.0.1 port 39316 Sep 5 00:47:45.861623 sshd-session[5392]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:45.864548 systemd[1]: sshd@14-10.0.0.4:22-10.0.0.1:39316.service: Deactivated successfully. Sep 5 00:47:45.866539 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:47:45.868069 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:47:45.869404 systemd-logind[1539]: Removed session 15. Sep 5 00:47:47.017243 containerd[1565]: time="2025-09-05T00:47:47.017191791Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e68683d2cc179c22b76ead75e228ff5a77415b892523591152bbf790f83c94c\" id:\"ae05dbe14762d0eac376845a2a767fbef1ba3022090d49977788941c825f058f\" pid:5417 exited_at:{seconds:1757033267 nanos:16854368}" Sep 5 00:47:50.882081 systemd[1]: Started sshd@15-10.0.0.4:22-10.0.0.1:45384.service - OpenSSH per-connection server daemon (10.0.0.1:45384). Sep 5 00:47:50.934801 sshd[5431]: Accepted publickey for core from 10.0.0.1 port 45384 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:50.936329 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:50.940994 systemd-logind[1539]: New session 16 of user core. Sep 5 00:47:50.950857 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 5 00:47:51.090580 sshd[5433]: Connection closed by 10.0.0.1 port 45384 Sep 5 00:47:51.090966 sshd-session[5431]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:51.103044 systemd[1]: sshd@15-10.0.0.4:22-10.0.0.1:45384.service: Deactivated successfully. Sep 5 00:47:51.105936 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:47:51.106965 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:47:51.111042 systemd[1]: Started sshd@16-10.0.0.4:22-10.0.0.1:45394.service - OpenSSH per-connection server daemon (10.0.0.1:45394). Sep 5 00:47:51.112123 systemd-logind[1539]: Removed session 16. Sep 5 00:47:51.179579 sshd[5446]: Accepted publickey for core from 10.0.0.1 port 45394 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:51.181700 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:51.186608 systemd-logind[1539]: New session 17 of user core. Sep 5 00:47:51.194789 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:47:51.441449 sshd[5448]: Connection closed by 10.0.0.1 port 45394 Sep 5 00:47:51.440974 sshd-session[5446]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:51.453848 systemd[1]: sshd@16-10.0.0.4:22-10.0.0.1:45394.service: Deactivated successfully. Sep 5 00:47:51.456028 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:47:51.457040 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit. Sep 5 00:47:51.460280 systemd[1]: Started sshd@17-10.0.0.4:22-10.0.0.1:45396.service - OpenSSH per-connection server daemon (10.0.0.1:45396). Sep 5 00:47:51.461240 systemd-logind[1539]: Removed session 17. 
Sep 5 00:47:51.528062 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 45396 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:51.529822 sshd-session[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:51.534437 systemd-logind[1539]: New session 18 of user core. Sep 5 00:47:51.544791 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 00:47:53.066546 sshd[5461]: Connection closed by 10.0.0.1 port 45396 Sep 5 00:47:53.066925 sshd-session[5459]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:53.078193 systemd[1]: sshd@17-10.0.0.4:22-10.0.0.1:45396.service: Deactivated successfully. Sep 5 00:47:53.080156 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 00:47:53.080373 systemd[1]: session-18.scope: Consumed 582ms CPU time, 73.9M memory peak. Sep 5 00:47:53.081778 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit. Sep 5 00:47:53.086863 systemd[1]: Started sshd@18-10.0.0.4:22-10.0.0.1:45412.service - OpenSSH per-connection server daemon (10.0.0.1:45412). Sep 5 00:47:53.088204 systemd-logind[1539]: Removed session 18. Sep 5 00:47:53.134218 sshd[5480]: Accepted publickey for core from 10.0.0.1 port 45412 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:53.135686 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:53.140323 systemd-logind[1539]: New session 19 of user core. Sep 5 00:47:53.149777 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 00:47:53.463520 sshd[5483]: Connection closed by 10.0.0.1 port 45412 Sep 5 00:47:53.464344 sshd-session[5480]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:53.478256 systemd[1]: sshd@18-10.0.0.4:22-10.0.0.1:45412.service: Deactivated successfully. Sep 5 00:47:53.480257 systemd[1]: session-19.scope: Deactivated successfully. 
Sep 5 00:47:53.481035 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit. Sep 5 00:47:53.484336 systemd[1]: Started sshd@19-10.0.0.4:22-10.0.0.1:45416.service - OpenSSH per-connection server daemon (10.0.0.1:45416). Sep 5 00:47:53.485456 systemd-logind[1539]: Removed session 19. Sep 5 00:47:53.530297 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 45416 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw Sep 5 00:47:53.531557 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:47:53.536538 systemd-logind[1539]: New session 20 of user core. Sep 5 00:47:53.549781 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 00:47:53.659909 sshd[5496]: Connection closed by 10.0.0.1 port 45416 Sep 5 00:47:53.660249 sshd-session[5494]: pam_unix(sshd:session): session closed for user core Sep 5 00:47:53.664404 systemd[1]: sshd@19-10.0.0.4:22-10.0.0.1:45416.service: Deactivated successfully. Sep 5 00:47:53.666594 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:47:53.667317 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:47:53.668607 systemd-logind[1539]: Removed session 20. Sep 5 00:47:56.180205 kubelet[2685]: E0905 00:47:56.180158 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:47:56.894691 containerd[1565]: time="2025-09-05T00:47:56.894638724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb3b3456d0d880c7dddb573f5553cf119376b7cadec276e0b75c750c7137640b\" id:\"94cd67eea533863ff7304a672cf745911681cfed511525166b3ff290bf197a96\" pid:5521 exited_at:{seconds:1757033276 nanos:894367972}" Sep 5 00:47:58.677565 systemd[1]: Started sshd@20-10.0.0.4:22-10.0.0.1:45418.service - OpenSSH per-connection server daemon (10.0.0.1:45418). 
Sep 5 00:47:58.722509 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 45418 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw
Sep 5 00:47:58.724184 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:47:58.732376 systemd-logind[1539]: New session 21 of user core.
Sep 5 00:47:58.737796 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 5 00:47:58.848098 sshd[5540]: Connection closed by 10.0.0.1 port 45418
Sep 5 00:47:58.848454 sshd-session[5538]: pam_unix(sshd:session): session closed for user core
Sep 5 00:47:58.853555 systemd[1]: sshd@20-10.0.0.4:22-10.0.0.1:45418.service: Deactivated successfully.
Sep 5 00:47:58.855830 systemd[1]: session-21.scope: Deactivated successfully.
Sep 5 00:47:58.856981 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit.
Sep 5 00:47:58.858481 systemd-logind[1539]: Removed session 21.
Sep 5 00:48:00.179540 kubelet[2685]: E0905 00:48:00.179489 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:48:02.180040 kubelet[2685]: E0905 00:48:02.179992 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:48:03.506089 containerd[1565]: time="2025-09-05T00:48:03.506037129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"354a05678bb7985e1abe38917b04c46c99aec3e40cc448085f326db9d32852d2\" id:\"a428d45e4642fcb9223d5d7975972e1c631fccfb933431d0a6526e4516cdcfff\" pid:5572 exited_at:{seconds:1757033283 nanos:505605430}"
Sep 5 00:48:03.861141 systemd[1]: Started sshd@21-10.0.0.4:22-10.0.0.1:35628.service - OpenSSH per-connection server daemon (10.0.0.1:35628).
Sep 5 00:48:03.922119 sshd[5583]: Accepted publickey for core from 10.0.0.1 port 35628 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw
Sep 5 00:48:03.923773 sshd-session[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:48:03.929060 systemd-logind[1539]: New session 22 of user core.
Sep 5 00:48:03.944931 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 5 00:48:04.055218 sshd[5585]: Connection closed by 10.0.0.1 port 35628
Sep 5 00:48:04.055552 sshd-session[5583]: pam_unix(sshd:session): session closed for user core
Sep 5 00:48:04.059947 systemd[1]: sshd@21-10.0.0.4:22-10.0.0.1:35628.service: Deactivated successfully.
Sep 5 00:48:04.062520 systemd[1]: session-22.scope: Deactivated successfully.
Sep 5 00:48:04.063463 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit.
Sep 5 00:48:04.065081 systemd-logind[1539]: Removed session 22.
Sep 5 00:48:05.134083 containerd[1565]: time="2025-09-05T00:48:05.134040308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"354a05678bb7985e1abe38917b04c46c99aec3e40cc448085f326db9d32852d2\" id:\"d6b03707b1378e448a1a3305de308f7058cf889dcef7b75768e7c5ac34d7e5b5\" pid:5612 exited_at:{seconds:1757033285 nanos:133830886}"
Sep 5 00:48:09.064768 systemd[1]: Started sshd@22-10.0.0.4:22-10.0.0.1:35644.service - OpenSSH per-connection server daemon (10.0.0.1:35644).
Sep 5 00:48:09.115253 containerd[1565]: time="2025-09-05T00:48:09.115207479Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e68683d2cc179c22b76ead75e228ff5a77415b892523591152bbf790f83c94c\" id:\"9ef68598a866d812c4811cd17012b190434327ee242249d486f211c0ba8dccab\" pid:5634 exited_at:{seconds:1757033289 nanos:114793376}"
Sep 5 00:48:09.129981 sshd[5641]: Accepted publickey for core from 10.0.0.1 port 35644 ssh2: RSA SHA256:7p4B51KiiBlx4fv/ePp9YOZ3IQI8BrAB9AIyfMJhLIw
Sep 5 00:48:09.132588 sshd-session[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:48:09.138096 systemd-logind[1539]: New session 23 of user core.
Sep 5 00:48:09.145813 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 5 00:48:09.301010 sshd[5651]: Connection closed by 10.0.0.1 port 35644
Sep 5 00:48:09.302853 sshd-session[5641]: pam_unix(sshd:session): session closed for user core
Sep 5 00:48:09.307023 systemd[1]: sshd@22-10.0.0.4:22-10.0.0.1:35644.service: Deactivated successfully.
Sep 5 00:48:09.309265 systemd[1]: session-23.scope: Deactivated successfully.
Sep 5 00:48:09.312149 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit.
Sep 5 00:48:09.313831 systemd-logind[1539]: Removed session 23.