Sep 12 05:53:53.843073 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 04:02:32 -00 2025
Sep 12 05:53:53.843103 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3
Sep 12 05:53:53.843112 kernel: BIOS-provided physical RAM map:
Sep 12 05:53:53.843124 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 05:53:53.843130 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 05:53:53.843136 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 05:53:53.843144 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 12 05:53:53.843151 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 12 05:53:53.843167 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 05:53:53.843175 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 05:53:53.843183 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 05:53:53.843191 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 05:53:53.843199 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 05:53:53.843207 kernel: NX (Execute Disable) protection: active
Sep 12 05:53:53.843221 kernel: APIC: Static calls initialized
Sep 12 05:53:53.843230 kernel: SMBIOS 2.8 present.
Sep 12 05:53:53.843241 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 12 05:53:53.843249 kernel: DMI: Memory slots populated: 1/1
Sep 12 05:53:53.843258 kernel: Hypervisor detected: KVM
Sep 12 05:53:53.843266 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 05:53:53.843274 kernel: kvm-clock: using sched offset of 4672993320 cycles
Sep 12 05:53:53.843283 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 05:53:53.843292 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 05:53:53.843305 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 05:53:53.843314 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 05:53:53.843323 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 12 05:53:53.843332 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 05:53:53.843342 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 05:53:53.843351 kernel: Using GB pages for direct mapping
Sep 12 05:53:53.843361 kernel: ACPI: Early table checksum verification disabled
Sep 12 05:53:53.843370 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 12 05:53:53.843379 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:53:53.843394 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:53:53.843404 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:53:53.843414 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 12 05:53:53.843426 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:53:53.843436 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:53:53.843448 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:53:53.843457 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:53:53.843467 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 12 05:53:53.843486 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 12 05:53:53.843494 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 12 05:53:53.843501 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 12 05:53:53.843508 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 12 05:53:53.843516 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 12 05:53:53.843523 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 12 05:53:53.843535 kernel: No NUMA configuration found
Sep 12 05:53:53.843543 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 12 05:53:53.843550 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 12 05:53:53.843557 kernel: Zone ranges:
Sep 12 05:53:53.843584 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 05:53:53.843591 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 12 05:53:53.843599 kernel: Normal empty
Sep 12 05:53:53.843606 kernel: Device empty
Sep 12 05:53:53.843613 kernel: Movable zone start for each node
Sep 12 05:53:53.843620 kernel: Early memory node ranges
Sep 12 05:53:53.843634 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 05:53:53.843641 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 12 05:53:53.843649 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 12 05:53:53.843656 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 05:53:53.843663 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 05:53:53.843671 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 05:53:53.843679 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 05:53:53.843688 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 05:53:53.843695 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 05:53:53.843708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 05:53:53.843716 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 05:53:53.843725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 05:53:53.843732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 05:53:53.843740 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 05:53:53.843747 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 05:53:53.843755 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 05:53:53.843762 kernel: TSC deadline timer available
Sep 12 05:53:53.843769 kernel: CPU topo: Max. logical packages: 1
Sep 12 05:53:53.843781 kernel: CPU topo: Max. logical dies: 1
Sep 12 05:53:53.843788 kernel: CPU topo: Max. dies per package: 1
Sep 12 05:53:53.843796 kernel: CPU topo: Max. threads per core: 1
Sep 12 05:53:53.843803 kernel: CPU topo: Num. cores per package: 4
Sep 12 05:53:53.843810 kernel: CPU topo: Num. threads per package: 4
Sep 12 05:53:53.843818 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 05:53:53.843825 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 05:53:53.843832 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 05:53:53.843839 kernel: kvm-guest: setup PV sched yield
Sep 12 05:53:53.843849 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 05:53:53.843857 kernel: Booting paravirtualized kernel on KVM
Sep 12 05:53:53.843864 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 05:53:53.843872 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 05:53:53.843879 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 05:53:53.843887 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 05:53:53.843894 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 05:53:53.843901 kernel: kvm-guest: PV spinlocks enabled
Sep 12 05:53:53.843908 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 05:53:53.843919 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3
Sep 12 05:53:53.843927 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 05:53:53.843942 kernel: random: crng init done
Sep 12 05:53:53.843950 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 05:53:53.843958 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 05:53:53.843965 kernel: Fallback order for Node 0: 0
Sep 12 05:53:53.843972 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 12 05:53:53.843980 kernel: Policy zone: DMA32
Sep 12 05:53:53.843989 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 05:53:53.843997 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 05:53:53.844004 kernel: ftrace: allocating 40123 entries in 157 pages
Sep 12 05:53:53.844011 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 05:53:53.844019 kernel: Dynamic Preempt: voluntary
Sep 12 05:53:53.844026 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 05:53:53.844034 kernel: rcu: RCU event tracing is enabled.
Sep 12 05:53:53.844042 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 05:53:53.844061 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 05:53:53.844071 kernel: Rude variant of Tasks RCU enabled.
Sep 12 05:53:53.844090 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 05:53:53.844100 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 05:53:53.844108 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 05:53:53.844116 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 05:53:53.844123 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 05:53:53.844131 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 05:53:53.844139 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 05:53:53.844146 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 05:53:53.844164 kernel: Console: colour VGA+ 80x25
Sep 12 05:53:53.844177 kernel: printk: legacy console [ttyS0] enabled
Sep 12 05:53:53.844185 kernel: ACPI: Core revision 20240827
Sep 12 05:53:53.844196 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 05:53:53.844204 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 05:53:53.844211 kernel: x2apic enabled
Sep 12 05:53:53.844219 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 05:53:53.844230 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 05:53:53.844238 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 05:53:53.844248 kernel: kvm-guest: setup PV IPIs
Sep 12 05:53:53.844256 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 05:53:53.844264 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 05:53:53.844272 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 05:53:53.844280 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 05:53:53.844288 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 05:53:53.844296 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 05:53:53.844303 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 05:53:53.844313 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 05:53:53.844321 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 05:53:53.844329 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 05:53:53.844337 kernel: active return thunk: retbleed_return_thunk
Sep 12 05:53:53.844345 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 05:53:53.844353 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 05:53:53.844361 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 05:53:53.844369 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 05:53:53.844379 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 05:53:53.844387 kernel: active return thunk: srso_return_thunk
Sep 12 05:53:53.844395 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 05:53:53.844403 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 05:53:53.844411 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 05:53:53.844419 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 05:53:53.844426 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 05:53:53.844434 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 05:53:53.844442 kernel: Freeing SMP alternatives memory: 32K
Sep 12 05:53:53.844452 kernel: pid_max: default: 32768 minimum: 301
Sep 12 05:53:53.844460 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 05:53:53.844468 kernel: landlock: Up and running.
Sep 12 05:53:53.844475 kernel: SELinux: Initializing.
Sep 12 05:53:53.844486 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 05:53:53.844494 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 05:53:53.844502 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 05:53:53.844510 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 05:53:53.844518 kernel: ... version: 0
Sep 12 05:53:53.844528 kernel: ... bit width: 48
Sep 12 05:53:53.844535 kernel: ... generic registers: 6
Sep 12 05:53:53.844543 kernel: ... value mask: 0000ffffffffffff
Sep 12 05:53:53.844551 kernel: ... max period: 00007fffffffffff
Sep 12 05:53:53.844559 kernel: ... fixed-purpose events: 0
Sep 12 05:53:53.844581 kernel: ... event mask: 000000000000003f
Sep 12 05:53:53.844588 kernel: signal: max sigframe size: 1776
Sep 12 05:53:53.844596 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 05:53:53.844604 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 05:53:53.844615 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 05:53:53.844623 kernel: smp: Bringing up secondary CPUs ...
Sep 12 05:53:53.844631 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 05:53:53.844638 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 05:53:53.844646 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 05:53:53.844654 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 05:53:53.844662 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2432K rwdata, 9988K rodata, 54092K init, 2872K bss, 136912K reserved, 0K cma-reserved)
Sep 12 05:53:53.844670 kernel: devtmpfs: initialized
Sep 12 05:53:53.844678 kernel: x86/mm: Memory block size: 128MB
Sep 12 05:53:53.844688 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 05:53:53.844696 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 05:53:53.844704 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 05:53:53.844711 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 05:53:53.844719 kernel: audit: initializing netlink subsys (disabled)
Sep 12 05:53:53.844727 kernel: audit: type=2000 audit(1757656430.045:1): state=initialized audit_enabled=0 res=1
Sep 12 05:53:53.844735 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 05:53:53.844743 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 05:53:53.844751 kernel: cpuidle: using governor menu
Sep 12 05:53:53.844761 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 05:53:53.844768 kernel: dca service started, version 1.12.1
Sep 12 05:53:53.844776 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 12 05:53:53.844784 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 05:53:53.844792 kernel: PCI: Using configuration type 1 for base access
Sep 12 05:53:53.844800 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 05:53:53.844808 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 05:53:53.844816 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 05:53:53.844823 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 05:53:53.844833 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 05:53:53.844841 kernel: ACPI: Added _OSI(Module Device)
Sep 12 05:53:53.844849 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 05:53:53.844857 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 05:53:53.844865 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 05:53:53.844873 kernel: ACPI: Interpreter enabled
Sep 12 05:53:53.844880 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 05:53:53.844888 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 05:53:53.844896 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 05:53:53.844906 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 05:53:53.844914 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 05:53:53.844921 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 05:53:53.845156 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 05:53:53.845285 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 05:53:53.845407 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 05:53:53.845417 kernel: PCI host bridge to bus 0000:00
Sep 12 05:53:53.845559 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 05:53:53.845727 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 05:53:53.845840 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 05:53:53.845959 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 05:53:53.846071 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 05:53:53.846186 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 05:53:53.846297 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 05:53:53.846460 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 05:53:53.846626 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 05:53:53.846752 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 12 05:53:53.846872 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 12 05:53:53.847003 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 12 05:53:53.847123 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 05:53:53.847260 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 05:53:53.847389 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 12 05:53:53.847510 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 12 05:53:53.847656 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 12 05:53:53.847799 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 05:53:53.847927 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 12 05:53:53.848060 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 12 05:53:53.848187 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 12 05:53:53.848339 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 05:53:53.848462 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 12 05:53:53.848601 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 12 05:53:53.848725 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 12 05:53:53.848846 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 12 05:53:53.848989 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 05:53:53.849117 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 05:53:53.849251 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 05:53:53.849373 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 12 05:53:53.849498 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 12 05:53:53.849657 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 05:53:53.849781 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 12 05:53:53.849793 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 05:53:53.849805 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 05:53:53.849813 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 05:53:53.849821 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 05:53:53.849828 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 05:53:53.849836 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 05:53:53.849844 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 05:53:53.849852 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 05:53:53.849859 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 05:53:53.849867 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 05:53:53.849877 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 05:53:53.849885 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 05:53:53.849892 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 05:53:53.849900 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 05:53:53.849908 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 05:53:53.849916 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 05:53:53.849924 kernel: iommu: Default domain type: Translated
Sep 12 05:53:53.849940 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 05:53:53.849948 kernel: PCI: Using ACPI for IRQ routing
Sep 12 05:53:53.849959 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 05:53:53.849966 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 05:53:53.849974 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 12 05:53:53.850095 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 05:53:53.850216 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 05:53:53.850335 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 05:53:53.850346 kernel: vgaarb: loaded
Sep 12 05:53:53.850354 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 05:53:53.850365 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 05:53:53.850373 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 05:53:53.850381 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 05:53:53.850389 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 05:53:53.850397 kernel: pnp: PnP ACPI init
Sep 12 05:53:53.850539 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 05:53:53.850550 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 05:53:53.850559 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 05:53:53.850589 kernel: NET: Registered PF_INET protocol family
Sep 12 05:53:53.850597 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 05:53:53.850624 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 05:53:53.850642 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 05:53:53.850651 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 05:53:53.850659 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 05:53:53.850666 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 05:53:53.850675 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 05:53:53.850682 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 05:53:53.850694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 05:53:53.850701 kernel: NET: Registered PF_XDP protocol family
Sep 12 05:53:53.850823 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 05:53:53.850944 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 05:53:53.851058 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 05:53:53.851173 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 05:53:53.851284 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 05:53:53.851395 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 05:53:53.851409 kernel: PCI: CLS 0 bytes, default 64
Sep 12 05:53:53.851417 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 05:53:53.851425 kernel: Initialise system trusted keyrings
Sep 12 05:53:53.851433 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 05:53:53.851441 kernel: Key type asymmetric registered
Sep 12 05:53:53.851449 kernel: Asymmetric key parser 'x509' registered
Sep 12 05:53:53.851456 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 05:53:53.851464 kernel: io scheduler mq-deadline registered
Sep 12 05:53:53.851472 kernel: io scheduler kyber registered
Sep 12 05:53:53.851482 kernel: io scheduler bfq registered
Sep 12 05:53:53.851490 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 05:53:53.851498 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 05:53:53.851506 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 05:53:53.851514 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 05:53:53.851522 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 05:53:53.851530 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 05:53:53.851538 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 05:53:53.851546 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 05:53:53.851557 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 05:53:53.851705 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 05:53:53.851717 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 05:53:53.851830 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 05:53:53.851954 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T05:53:53 UTC (1757656433)
Sep 12 05:53:53.852068 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 05:53:53.852079 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 05:53:53.852087 kernel: NET: Registered PF_INET6 protocol family
Sep 12 05:53:53.852099 kernel: Segment Routing with IPv6
Sep 12 05:53:53.852107 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 05:53:53.852115 kernel: NET: Registered PF_PACKET protocol family
Sep 12 05:53:53.852123 kernel: Key type dns_resolver registered
Sep 12 05:53:53.852130 kernel: IPI shorthand broadcast: enabled
Sep 12 05:53:53.852138 kernel: sched_clock: Marking stable (3181002343, 156290747)->(3395498975, -58205885)
Sep 12 05:53:53.852146 kernel: registered taskstats version 1
Sep 12 05:53:53.852154 kernel: Loading compiled-in X.509 certificates
Sep 12 05:53:53.852162 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: c974434132f0296e0aaf9b1358c8dc50eba5c8b9'
Sep 12 05:53:53.852172 kernel: Demotion targets for Node 0: null
Sep 12 05:53:53.852180 kernel: Key type .fscrypt registered
Sep 12 05:53:53.852187 kernel: Key type fscrypt-provisioning registered
Sep 12 05:53:53.852195 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 05:53:53.852203 kernel: ima: Allocated hash algorithm: sha1
Sep 12 05:53:53.852211 kernel: ima: No architecture policies found
Sep 12 05:53:53.852218 kernel: clk: Disabling unused clocks
Sep 12 05:53:53.852226 kernel: Warning: unable to open an initial console.
Sep 12 05:53:53.852235 kernel: Freeing unused kernel image (initmem) memory: 54092K
Sep 12 05:53:53.852245 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 05:53:53.852253 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 12 05:53:53.852261 kernel: Run /init as init process
Sep 12 05:53:53.852268 kernel: with arguments:
Sep 12 05:53:53.852276 kernel: /init
Sep 12 05:53:53.852284 kernel: with environment:
Sep 12 05:53:53.852292 kernel: HOME=/
Sep 12 05:53:53.852299 kernel: TERM=linux
Sep 12 05:53:53.852307 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 05:53:53.852322 systemd[1]: Successfully made /usr/ read-only.
Sep 12 05:53:53.852346 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 05:53:53.852357 systemd[1]: Detected virtualization kvm.
Sep 12 05:53:53.852366 systemd[1]: Detected architecture x86-64.
Sep 12 05:53:53.852374 systemd[1]: Running in initrd.
Sep 12 05:53:53.852385 systemd[1]: No hostname configured, using default hostname.
Sep 12 05:53:53.852394 systemd[1]: Hostname set to .
Sep 12 05:53:53.852402 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 05:53:53.852411 systemd[1]: Queued start job for default target initrd.target.
Sep 12 05:53:53.852420 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 05:53:53.852429 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 05:53:53.852438 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 05:53:53.852447 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 05:53:53.852458 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 05:53:53.852468 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 05:53:53.852477 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 05:53:53.852486 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 05:53:53.852495 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 05:53:53.852504 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 05:53:53.852513 systemd[1]: Reached target paths.target - Path Units.
Sep 12 05:53:53.852524 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 05:53:53.852533 systemd[1]: Reached target swap.target - Swaps.
Sep 12 05:53:53.852541 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 05:53:53.852550 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 05:53:53.852559 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 05:53:53.852582 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 05:53:53.852590 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 05:53:53.852599 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 05:53:53.852610 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 05:53:53.852619 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 05:53:53.852627 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 05:53:53.852636 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 05:53:53.852644 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 05:53:53.852655 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 05:53:53.852667 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 05:53:53.852675 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 05:53:53.852684 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 05:53:53.852693 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 05:53:53.852702 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:53:53.852710 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 05:53:53.852721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 05:53:53.852730 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 05:53:53.852739 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 05:53:53.852774 systemd-journald[220]: Collecting audit messages is disabled. Sep 12 05:53:53.852797 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 12 05:53:53.852806 systemd-journald[220]: Journal started Sep 12 05:53:53.852828 systemd-journald[220]: Runtime Journal (/run/log/journal/2d9c72b3fb9e48b1807b0bbd399418d2) is 6M, max 48.6M, 42.5M free. Sep 12 05:53:53.841506 systemd-modules-load[221]: Inserted module 'overlay' Sep 12 05:53:53.856589 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 05:53:53.859289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 05:53:53.862727 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 05:53:53.936639 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 05:53:53.938668 kernel: Bridge firewalling registered Sep 12 05:53:53.938606 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 12 05:53:53.940865 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 05:53:53.946748 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 05:53:53.973961 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:53:53.976884 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 05:53:53.979685 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 05:53:53.985143 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 05:53:53.988895 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 05:53:54.007969 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 05:53:54.011148 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 05:53:54.028861 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 05:53:54.031863 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 05:53:54.074427 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3 Sep 12 05:53:54.078295 systemd-resolved[259]: Positive Trust Anchors: Sep 12 05:53:54.078317 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 05:53:54.078357 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 05:53:54.081507 systemd-resolved[259]: Defaulting to hostname 'linux'. Sep 12 05:53:54.083344 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 05:53:54.088765 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 05:53:54.190614 kernel: SCSI subsystem initialized Sep 12 05:53:54.295513 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 05:53:54.306597 kernel: iscsi: registered transport (tcp) Sep 12 05:53:54.331759 kernel: iscsi: registered transport (qla4xxx) Sep 12 05:53:54.331857 kernel: QLogic iSCSI HBA Driver Sep 12 05:53:54.355699 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 05:53:54.374286 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 05:53:54.377901 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 05:53:54.496164 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 05:53:54.498539 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 05:53:54.561596 kernel: raid6: avx2x4 gen() 29589 MB/s Sep 12 05:53:54.578590 kernel: raid6: avx2x2 gen() 29037 MB/s Sep 12 05:53:54.595638 kernel: raid6: avx2x1 gen() 25696 MB/s Sep 12 05:53:54.595658 kernel: raid6: using algorithm avx2x4 gen() 29589 MB/s Sep 12 05:53:54.613650 kernel: raid6: .... xor() 7846 MB/s, rmw enabled Sep 12 05:53:54.613675 kernel: raid6: using avx2x2 recovery algorithm Sep 12 05:53:54.634596 kernel: xor: automatically using best checksumming function avx Sep 12 05:53:54.823614 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 05:53:54.833128 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 05:53:54.837362 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 05:53:54.872727 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 12 05:53:54.879666 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 05:53:54.881696 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 05:53:54.908367 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Sep 12 05:53:54.942931 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 05:53:54.945751 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 05:53:55.089036 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 05:53:55.092680 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 05:53:55.129598 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 05:53:55.137060 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 05:53:55.140586 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 05:53:55.148324 kernel: AES CTR mode by8 optimization enabled Sep 12 05:53:55.164633 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 05:53:55.165586 kernel: libata version 3.00 loaded. Sep 12 05:53:55.176959 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 05:53:55.177016 kernel: GPT:9289727 != 19775487 Sep 12 05:53:55.177027 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 05:53:55.177038 kernel: GPT:9289727 != 19775487 Sep 12 05:53:55.177048 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 05:53:55.177058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:53:55.180593 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 05:53:55.181641 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 05:53:55.184866 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 12 05:53:55.185053 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 12 05:53:55.185194 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 05:53:55.185448 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 05:53:55.186303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:53:55.189265 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 05:53:55.191714 kernel: scsi host0: ahci Sep 12 05:53:55.191921 kernel: scsi host1: ahci Sep 12 05:53:55.193588 kernel: scsi host2: ahci Sep 12 05:53:55.193777 kernel: scsi host3: ahci Sep 12 05:53:55.194694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:53:55.197237 kernel: scsi host4: ahci Sep 12 05:53:55.195705 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 05:53:55.201029 kernel: scsi host5: ahci Sep 12 05:53:55.201350 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 12 05:53:55.201362 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 12 05:53:55.202934 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 12 05:53:55.202952 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 12 05:53:55.207035 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 12 05:53:55.207081 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 12 05:53:55.234521 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 05:53:55.272294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:53:55.282693 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 05:53:55.292419 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 05:53:55.299922 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 05:53:55.300192 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Sep 12 05:53:55.304133 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 05:53:55.341232 disk-uuid[635]: Primary Header is updated. Sep 12 05:53:55.341232 disk-uuid[635]: Secondary Entries is updated. Sep 12 05:53:55.341232 disk-uuid[635]: Secondary Header is updated. Sep 12 05:53:55.344833 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:53:55.349590 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:53:55.515375 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 05:53:55.515437 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 05:53:55.515448 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 05:53:55.515459 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 05:53:55.516605 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 05:53:55.517597 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 05:53:55.518883 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 05:53:55.518905 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 05:53:55.518915 kernel: ata3.00: applying bridge limits Sep 12 05:53:55.520189 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 05:53:55.520205 kernel: ata3.00: configured for UDMA/100 Sep 12 05:53:55.521593 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 05:53:55.562626 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 05:53:55.562991 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 05:53:55.576611 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 05:53:55.904680 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 05:53:55.906663 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 05:53:55.908061 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 12 05:53:55.908429 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 05:53:55.909967 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 05:53:55.936779 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 05:53:56.351597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:53:56.352445 disk-uuid[636]: The operation has completed successfully. Sep 12 05:53:56.385644 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 05:53:56.385792 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 05:53:56.420129 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 05:53:56.445383 sh[665]: Success Sep 12 05:53:56.464688 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 05:53:56.464745 kernel: device-mapper: uevent: version 1.0.3 Sep 12 05:53:56.465845 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 05:53:56.475612 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 05:53:56.511547 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 05:53:56.514285 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 05:53:56.529225 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 05:53:56.537602 kernel: BTRFS: device fsid 29ae74b1-0ab1-4a84-96e7-98d98e1ec77f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (677) Sep 12 05:53:56.537654 kernel: BTRFS info (device dm-0): first mount of filesystem 29ae74b1-0ab1-4a84-96e7-98d98e1ec77f Sep 12 05:53:56.539050 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:53:56.544611 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 05:53:56.544650 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 05:53:56.545797 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 05:53:56.546803 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 05:53:56.548241 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 05:53:56.549330 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 05:53:56.551099 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 05:53:56.578601 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Sep 12 05:53:56.580630 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:53:56.580673 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:53:56.583699 kernel: BTRFS info (device vda6): turning on async discard Sep 12 05:53:56.583725 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 05:53:56.589592 kernel: BTRFS info (device vda6): last unmount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:53:56.589904 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 05:53:56.593533 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 05:53:56.861687 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 05:53:56.863549 ignition[753]: Ignition 2.22.0 Sep 12 05:53:56.863560 ignition[753]: Stage: fetch-offline Sep 12 05:53:56.865331 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 05:53:56.863647 ignition[753]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:53:56.863658 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:53:56.863760 ignition[753]: parsed url from cmdline: "" Sep 12 05:53:56.863764 ignition[753]: no config URL provided Sep 12 05:53:56.863769 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 05:53:56.863781 ignition[753]: no config at "/usr/lib/ignition/user.ign" Sep 12 05:53:56.863805 ignition[753]: op(1): [started] loading QEMU firmware config module Sep 12 05:53:56.863810 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 05:53:56.872260 ignition[753]: op(1): [finished] loading QEMU firmware config module Sep 12 05:53:56.911493 ignition[753]: parsing config with SHA512: 8a2baca71e7d5a89eb2e44f9c71583c267ce702134ea69f7de5969a0d3390b1e4e8c256ff14e3a47abc578dac0a3541cc61fb641fe4aa1b6bf4a4b2a65ed4486 Sep 12 05:53:56.918196 unknown[753]: fetched base config from "system" Sep 12 05:53:56.918215 unknown[753]: fetched user config from "qemu" Sep 12 05:53:56.918631 ignition[753]: fetch-offline: fetch-offline passed Sep 12 05:53:56.918705 ignition[753]: Ignition finished successfully Sep 12 05:53:56.923020 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 05:53:56.927724 systemd-networkd[854]: lo: Link UP Sep 12 05:53:56.927735 systemd-networkd[854]: lo: Gained carrier Sep 12 05:53:56.930772 systemd-networkd[854]: Enumeration completed Sep 12 05:53:56.930983 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 05:53:56.931967 systemd[1]: Reached target network.target - Network. Sep 12 05:53:56.933911 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 05:53:56.934842 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 05:53:56.938753 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 05:53:56.938762 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 05:53:56.943293 systemd-networkd[854]: eth0: Link UP Sep 12 05:53:56.944079 systemd-networkd[854]: eth0: Gained carrier Sep 12 05:53:56.944090 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 05:53:56.970654 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 05:53:56.974143 ignition[858]: Ignition 2.22.0 Sep 12 05:53:56.974163 ignition[858]: Stage: kargs Sep 12 05:53:56.974379 ignition[858]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:53:56.974394 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:53:56.975643 ignition[858]: kargs: kargs passed Sep 12 05:53:56.975707 ignition[858]: Ignition finished successfully Sep 12 05:53:56.981493 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 05:53:56.983681 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 05:53:57.032631 ignition[869]: Ignition 2.22.0 Sep 12 05:53:57.032647 ignition[869]: Stage: disks Sep 12 05:53:57.032879 ignition[869]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:53:57.032895 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:53:57.034198 ignition[869]: disks: disks passed Sep 12 05:53:57.034265 ignition[869]: Ignition finished successfully Sep 12 05:53:57.037893 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 05:53:57.040001 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 05:53:57.040283 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 05:53:57.040757 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 05:53:57.041093 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 05:53:57.041399 systemd[1]: Reached target basic.target - Basic System. Sep 12 05:53:57.042942 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 05:53:57.074783 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 05:53:57.083210 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 05:53:57.086638 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 05:53:57.232596 kernel: EXT4-fs (vda9): mounted filesystem 2b8062f9-897a-46cb-bde4-2b62ba4cc712 r/w with ordered data mode. Quota mode: none. Sep 12 05:53:57.232994 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 05:53:57.234600 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 05:53:57.237238 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 05:53:57.239006 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 05:53:57.240122 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 12 05:53:57.240163 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 05:53:57.240188 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 05:53:57.256538 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 05:53:57.260632 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Sep 12 05:53:57.260718 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:53:57.261416 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 05:53:57.264099 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:53:57.268244 kernel: BTRFS info (device vda6): turning on async discard Sep 12 05:53:57.268299 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 05:53:57.271074 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 05:53:57.321606 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 05:53:57.326998 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Sep 12 05:53:57.332723 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 05:53:57.338945 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 05:53:57.446166 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 05:53:57.447803 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 05:53:57.449880 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 05:53:57.484610 kernel: BTRFS info (device vda6): last unmount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:53:57.501763 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 05:53:57.536896 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 12 05:53:57.577905 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 05:53:57.582291 ignition[1000]: INFO : Ignition 2.22.0 Sep 12 05:53:57.582291 ignition[1000]: INFO : Stage: mount Sep 12 05:53:57.582291 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 05:53:57.582291 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:53:57.582291 ignition[1000]: INFO : mount: mount passed Sep 12 05:53:57.582291 ignition[1000]: INFO : Ignition finished successfully Sep 12 05:53:57.582320 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 05:53:57.619112 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 05:53:57.648178 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012) Sep 12 05:53:57.648226 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:53:57.648244 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:53:57.652601 kernel: BTRFS info (device vda6): turning on async discard Sep 12 05:53:57.652629 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 05:53:57.654413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 05:53:57.726062 ignition[1029]: INFO : Ignition 2.22.0 Sep 12 05:53:57.726062 ignition[1029]: INFO : Stage: files Sep 12 05:53:57.728115 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 05:53:57.728115 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:53:57.728115 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Sep 12 05:53:57.728115 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 05:53:57.728115 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 05:53:57.735200 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 05:53:57.735200 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 05:53:57.735200 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 05:53:57.735200 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 05:53:57.735200 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 05:53:57.731167 unknown[1029]: wrote ssh authorized keys file for user: core Sep 12 05:53:57.789785 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 05:53:58.082749 systemd-networkd[854]: eth0: Gained IPv6LL Sep 12 05:53:58.135760 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 05:53:58.138084 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 05:53:58.138084 ignition[1029]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 05:53:58.138084 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 05:53:58.138084 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 05:53:58.138084 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 05:53:58.138084 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 05:53:58.138084 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 05:53:58.138084 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 05:53:58.193030 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 05:53:58.195226 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 05:53:58.195226 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 05:53:58.213825 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 05:53:58.213825 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 05:53:58.218969 ignition[1029]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 05:53:58.516414 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 05:53:59.325740 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 05:53:59.325740 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 05:53:59.329737 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 05:53:59.594879 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 05:53:59.594879 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 05:53:59.594879 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 12 05:53:59.600033 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 05:53:59.600033 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 05:53:59.600033 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 12 05:53:59.600033 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 05:53:59.615925 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 05:53:59.619760 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 05:53:59.621345 
ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 05:53:59.621345 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 12 05:53:59.621345 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 05:53:59.621345 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 05:53:59.621345 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 05:53:59.621345 ignition[1029]: INFO : files: files passed Sep 12 05:53:59.621345 ignition[1029]: INFO : Ignition finished successfully Sep 12 05:53:59.623231 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 05:53:59.625940 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 05:53:59.628424 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 05:53:59.657543 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 05:53:59.657704 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 05:53:59.669206 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 05:53:59.674251 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 05:53:59.674251 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 05:53:59.677438 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 05:53:59.680261 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Sep 12 05:53:59.682089 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 05:53:59.685044 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 05:53:59.759956 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 05:53:59.760089 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 05:53:59.761091 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 05:53:59.780079 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 05:53:59.780421 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 05:53:59.783483 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 05:53:59.817013 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 05:53:59.819453 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 05:53:59.871286 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 05:53:59.873700 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 05:53:59.874095 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 05:53:59.874407 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 05:53:59.874553 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 05:53:59.879789 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 05:53:59.880352 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 05:53:59.880864 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 05:53:59.898963 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 05:53:59.905923 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 05:53:59.906375 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 05:53:59.907268 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 05:53:59.912302 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 05:53:59.912893 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 05:53:59.913387 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 05:53:59.913886 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 05:53:59.914180 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 05:53:59.914348 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 05:53:59.922728 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 05:53:59.924544 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 05:53:59.925638 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 05:53:59.925793 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 05:53:59.926227 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 05:53:59.926375 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 05:53:59.930946 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 05:53:59.931099 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 05:53:59.936188 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 05:53:59.936934 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 05:53:59.939481 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 05:53:59.940294 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 05:53:59.940653 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 05:53:59.941321 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 05:53:59.941441 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 05:53:59.947294 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 05:53:59.947407 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 05:53:59.949475 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 05:53:59.949656 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 05:53:59.951283 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 05:53:59.951417 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 05:53:59.954878 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 05:53:59.956132 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 05:53:59.998840 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 05:54:00.000023 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 05:54:00.002307 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 05:54:00.003321 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 05:54:00.009776 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 05:54:00.009906 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 05:54:00.042489 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 05:54:00.066827 ignition[1084]: INFO : Ignition 2.22.0
Sep 12 05:54:00.066827 ignition[1084]: INFO : Stage: umount
Sep 12 05:54:00.153651 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 05:54:00.153651 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 05:54:00.153651 ignition[1084]: INFO : umount: umount passed
Sep 12 05:54:00.153651 ignition[1084]: INFO : Ignition finished successfully
Sep 12 05:54:00.212884 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 05:54:00.213060 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 05:54:00.213724 systemd[1]: Stopped target network.target - Network.
Sep 12 05:54:00.214074 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 05:54:00.214129 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 05:54:00.214473 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 05:54:00.214515 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 05:54:00.215089 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 05:54:00.215140 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 05:54:00.215461 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 05:54:00.215502 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 05:54:00.216134 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 05:54:00.216406 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 05:54:00.231438 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 05:54:00.231659 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 05:54:00.235506 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 05:54:00.235876 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 05:54:00.235930 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 05:54:00.239430 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 05:54:00.243070 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 05:54:00.243196 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 05:54:00.246218 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 05:54:00.246413 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 05:54:00.247024 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 05:54:00.247063 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 05:54:00.251464 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 05:54:00.251981 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 05:54:00.252034 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 05:54:00.252339 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 05:54:00.252382 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 05:54:00.258105 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 05:54:00.258157 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 05:54:00.258472 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 05:54:00.259554 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 05:54:00.285278 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 05:54:00.285488 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 05:54:00.329135 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 05:54:00.329244 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 05:54:00.332204 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 05:54:00.332284 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 05:54:00.333687 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 05:54:00.333741 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 05:54:00.334215 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 05:54:00.334274 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 05:54:00.334992 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 05:54:00.335047 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 05:54:00.335786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 05:54:00.335837 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 05:54:00.345633 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 05:54:00.346044 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 05:54:00.346126 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 05:54:00.350707 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 05:54:00.350787 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 05:54:00.354186 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 05:54:00.354248 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 05:54:00.357426 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 05:54:00.357476 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 05:54:00.358076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 05:54:00.358122 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 05:54:00.378509 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 05:54:00.378656 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 05:54:00.660865 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 05:54:00.661026 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 05:54:00.662289 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 05:54:00.681512 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 05:54:00.681588 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 05:54:00.684359 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 05:54:00.714628 systemd[1]: Switching root.
Sep 12 05:54:00.772424 systemd-journald[220]: Journal stopped
Sep 12 05:54:02.492717 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 12 05:54:02.492783 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 05:54:02.492798 kernel: SELinux: policy capability open_perms=1
Sep 12 05:54:02.492809 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 05:54:02.492838 kernel: SELinux: policy capability always_check_network=0
Sep 12 05:54:02.492850 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 05:54:02.492861 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 05:54:02.492873 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 05:54:02.492895 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 05:54:02.492907 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 05:54:02.492924 kernel: audit: type=1403 audit(1757656441.683:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 05:54:02.492936 systemd[1]: Successfully loaded SELinux policy in 76.920ms.
Sep 12 05:54:02.492955 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.866ms.
Sep 12 05:54:02.492977 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 05:54:02.492998 systemd[1]: Detected virtualization kvm.
Sep 12 05:54:02.493010 systemd[1]: Detected architecture x86-64.
Sep 12 05:54:02.493022 systemd[1]: Detected first boot.
Sep 12 05:54:02.493034 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 05:54:02.493047 zram_generator::config[1128]: No configuration found.
Sep 12 05:54:02.493060 kernel: Guest personality initialized and is inactive
Sep 12 05:54:02.493071 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 05:54:02.493090 kernel: Initialized host personality
Sep 12 05:54:02.493102 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 05:54:02.493113 systemd[1]: Populated /etc with preset unit settings.
Sep 12 05:54:02.493127 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 05:54:02.493139 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 05:54:02.493152 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 05:54:02.493164 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 05:54:02.493176 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 05:54:02.493189 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 05:54:02.493209 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 05:54:02.493229 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 05:54:02.493243 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 05:54:02.493255 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 05:54:02.493267 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 05:54:02.493279 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 05:54:02.493292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 05:54:02.493304 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 05:54:02.493317 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 05:54:02.493336 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 05:54:02.493349 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 05:54:02.493362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 05:54:02.493374 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 05:54:02.493387 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 05:54:02.493399 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 05:54:02.493411 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 05:54:02.493431 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 05:54:02.493443 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 05:54:02.493455 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 05:54:02.493468 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 05:54:02.493480 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 05:54:02.493492 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 05:54:02.493510 systemd[1]: Reached target swap.target - Swaps.
Sep 12 05:54:02.493523 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 05:54:02.493535 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 05:54:02.493554 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 05:54:02.493615 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 05:54:02.493628 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 05:54:02.493641 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 05:54:02.493653 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 05:54:02.493665 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 05:54:02.493678 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 05:54:02.493690 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 05:54:02.493712 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:54:02.493736 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 05:54:02.493751 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 05:54:02.493764 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 05:54:02.493777 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 05:54:02.493789 systemd[1]: Reached target machines.target - Containers.
Sep 12 05:54:02.493802 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 05:54:02.493814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 05:54:02.493826 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 05:54:02.493838 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 05:54:02.493858 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 05:54:02.493873 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 05:54:02.493885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 05:54:02.493897 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 05:54:02.493910 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 05:54:02.493922 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 05:54:02.493935 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 05:54:02.493947 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 05:54:02.493967 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 05:54:02.493979 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 05:54:02.493993 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 05:54:02.494005 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 05:54:02.494017 kernel: fuse: init (API version 7.41)
Sep 12 05:54:02.494029 kernel: ACPI: bus type drm_connector registered
Sep 12 05:54:02.494042 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 05:54:02.494054 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 05:54:02.494066 kernel: loop: module loaded
Sep 12 05:54:02.494108 systemd-journald[1199]: Collecting audit messages is disabled.
Sep 12 05:54:02.494131 systemd-journald[1199]: Journal started
Sep 12 05:54:02.494153 systemd-journald[1199]: Runtime Journal (/run/log/journal/2d9c72b3fb9e48b1807b0bbd399418d2) is 6M, max 48.6M, 42.5M free.
Sep 12 05:54:02.240458 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 05:54:02.251787 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 05:54:02.252282 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 05:54:02.497672 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 05:54:02.503621 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 05:54:02.508957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 05:54:02.508992 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 05:54:02.509008 systemd[1]: Stopped verity-setup.service.
Sep 12 05:54:02.513288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:54:02.519348 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 05:54:02.518390 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 05:54:02.519680 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 05:54:02.521015 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 05:54:02.524922 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 05:54:02.526375 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 05:54:02.527944 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 05:54:02.529264 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 05:54:02.530881 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 05:54:02.531116 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 05:54:02.532747 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 05:54:02.534160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 05:54:02.534380 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 05:54:02.535864 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 05:54:02.536078 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 05:54:02.537459 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 05:54:02.537690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 05:54:02.539222 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 05:54:02.539454 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 05:54:02.540855 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 05:54:02.541088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 05:54:02.542488 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 05:54:02.543944 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 05:54:02.545510 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 05:54:02.547168 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 05:54:02.562286 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 05:54:02.564832 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 05:54:02.566935 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 05:54:02.568046 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 05:54:02.568147 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 05:54:02.570094 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 05:54:02.578688 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 05:54:02.581048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 05:54:02.583072 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 05:54:02.587015 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 05:54:02.590587 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 05:54:02.594711 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 05:54:02.597532 systemd-journald[1199]: Time spent on flushing to /var/log/journal/2d9c72b3fb9e48b1807b0bbd399418d2 is 24.738ms for 978 entries.
Sep 12 05:54:02.597532 systemd-journald[1199]: System Journal (/var/log/journal/2d9c72b3fb9e48b1807b0bbd399418d2) is 8M, max 195.6M, 187.6M free.
Sep 12 05:54:02.644811 systemd-journald[1199]: Received client request to flush runtime journal.
Sep 12 05:54:02.644869 kernel: loop0: detected capacity change from 0 to 128016
Sep 12 05:54:02.595944 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 05:54:02.598689 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 05:54:02.601076 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 05:54:02.605714 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 05:54:02.608763 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 05:54:02.610112 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 05:54:02.629816 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 05:54:02.631753 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 05:54:02.635080 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 05:54:02.651925 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 05:54:02.655922 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 05:54:02.658390 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 05:54:02.665581 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 05:54:02.666177 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 12 05:54:02.666198 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 12 05:54:02.676989 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 05:54:02.679026 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 05:54:02.683161 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 05:54:02.687677 kernel: loop1: detected capacity change from 0 to 110984
Sep 12 05:54:02.713585 kernel: loop2: detected capacity change from 0 to 221472
Sep 12 05:54:02.718019 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 05:54:02.723704 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 05:54:02.736615 kernel: loop3: detected capacity change from 0 to 128016
Sep 12 05:54:03.171923 kernel: loop4: detected capacity change from 0 to 110984
Sep 12 05:54:03.181885 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Sep 12 05:54:03.181907 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Sep 12 05:54:03.187178 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 05:54:03.188600 kernel: loop5: detected capacity change from 0 to 221472
Sep 12 05:54:03.194941 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 05:54:03.195547 (sd-merge)[1272]: Merged extensions into '/usr'.
Sep 12 05:54:03.200447 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 05:54:03.200467 systemd[1]: Reloading...
Sep 12 05:54:03.278646 zram_generator::config[1296]: No configuration found.
Sep 12 05:54:03.489056 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 05:54:03.529120 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 05:54:03.529506 systemd[1]: Reloading finished in 328 ms.
Sep 12 05:54:03.564019 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 05:54:03.565589 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 05:54:03.581137 systemd[1]: Starting ensure-sysext.service...
Sep 12 05:54:03.583246 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 05:54:03.686943 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)...
Sep 12 05:54:03.687102 systemd[1]: Reloading...
Sep 12 05:54:03.700153 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 05:54:03.700191 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 05:54:03.700498 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 05:54:03.700785 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 05:54:03.701717 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 05:54:03.701985 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 12 05:54:03.702053 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 12 05:54:03.706396 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 05:54:03.706410 systemd-tmpfiles[1338]: Skipping /boot
Sep 12 05:54:03.718186 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 05:54:03.718360 systemd-tmpfiles[1338]: Skipping /boot
Sep 12 05:54:03.756720 zram_generator::config[1362]: No configuration found.
Sep 12 05:54:04.074793 systemd[1]: Reloading finished in 387 ms.
Sep 12 05:54:04.098994 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 05:54:04.124404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 05:54:04.134286 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 05:54:04.136816 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 05:54:04.139293 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 05:54:04.155264 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 05:54:04.158879 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 05:54:04.162987 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 05:54:04.168200 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:54:04.168686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 05:54:04.175426 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 05:54:04.179759 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 05:54:04.185963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 05:54:04.187170 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 05:54:04.187278 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 05:54:04.187370 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:54:04.195143 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 05:54:04.197457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 05:54:04.197701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 05:54:04.199676 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 05:54:04.200006 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 05:54:04.201821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 05:54:04.202099 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 05:54:04.210512 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:54:04.210925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 05:54:04.212388 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 05:54:04.214496 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 05:54:04.216666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 05:54:04.217730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 05:54:04.217837 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 05:54:04.219954 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 05:54:04.220907 systemd-udevd[1408]: Using default interface naming scheme 'v255'.
Sep 12 05:54:04.230460 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 05:54:04.231553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:54:04.233677 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 05:54:04.235704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 05:54:04.235947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 05:54:04.237640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 05:54:04.237869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 05:54:04.239759 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 05:54:04.239988 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 05:54:04.248977 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:54:04.249215 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 05:54:04.250536 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 05:54:04.252645 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 05:54:04.256793 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 05:54:04.261882 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 05:54:04.263106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 05:54:04.263222 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 05:54:04.263373 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:54:04.264894 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 05:54:04.265177 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 05:54:04.266920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 05:54:04.267151 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 05:54:04.268708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 05:54:04.268930 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 05:54:04.270990 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 05:54:04.271211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 05:54:04.276299 systemd[1]: Finished ensure-sysext.service.
Sep 12 05:54:04.289329 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 05:54:04.289424 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 05:54:04.291925 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 05:54:04.356414 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 05:54:04.372917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 05:54:04.374693 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 05:54:04.380820 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 05:54:04.382829 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 05:54:04.444618 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 05:54:04.456228 augenrules[1493]: No rules
Sep 12 05:54:04.458969 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 05:54:04.459361 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 05:54:04.492548 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 05:54:04.534119 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 05:54:04.538030 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 05:54:04.549594 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 05:54:04.563194 systemd-resolved[1407]: Positive Trust Anchors:
Sep 12 05:54:04.563212 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 05:54:04.563242 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 05:54:04.564814 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 05:54:04.565159 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 05:54:04.565404 systemd-networkd[1475]: lo: Link UP
Sep 12 05:54:04.565410 systemd-networkd[1475]: lo: Gained carrier
Sep 12 05:54:04.567270 systemd-networkd[1475]: Enumeration completed
Sep 12 05:54:04.567455 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 05:54:04.567712 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 05:54:04.567716 systemd-networkd[1475]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 05:54:04.568475 systemd-networkd[1475]: eth0: Link UP
Sep 12 05:54:04.568777 systemd-networkd[1475]: eth0: Gained carrier
Sep 12 05:54:04.568797 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 05:54:04.569360 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 05:54:04.570721 systemd-resolved[1407]: Defaulting to hostname 'linux'.
Sep 12 05:54:04.573274 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 05:54:04.575761 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 05:54:04.576946 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 05:54:04.578415 systemd[1]: Reached target network.target - Network.
Sep 12 05:54:04.579304 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 05:54:04.580465 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 05:54:04.581638 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 05:54:04.582902 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 05:54:04.584109 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 12 05:54:04.585355 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 05:54:04.586514 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 05:54:04.587738 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 05:54:04.588915 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 05:54:04.588947 systemd[1]: Reached target paths.target - Path Units.
Sep 12 05:54:04.589869 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 05:54:04.592118 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 05:54:04.593703 systemd-networkd[1475]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 05:54:04.595144 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 05:54:04.596634 systemd-timesyncd[1452]: Network configuration changed, trying to establish connection.
Sep 12 05:54:04.597504 systemd-timesyncd[1452]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 05:54:04.597592 systemd-timesyncd[1452]: Initial clock synchronization to Fri 2025-09-12 05:54:04.401148 UTC.
Sep 12 05:54:04.601936 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 05:54:04.603593 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 05:54:04.605137 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 05:54:04.606586 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 12 05:54:04.617631 kernel: ACPI: button: Power Button [PWRF]
Sep 12 05:54:04.617615 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 05:54:04.619068 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 05:54:04.621220 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 05:54:04.623599 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 05:54:04.623870 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 05:54:04.629895 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 05:54:04.630896 systemd[1]: Reached target basic.target - Basic System.
Sep 12 05:54:04.631842 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 05:54:04.631873 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 05:54:04.633757 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 05:54:04.638488 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 05:54:04.641860 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 05:54:04.646774 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 05:54:04.650364 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 05:54:04.651346 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 05:54:04.652832 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 12 05:54:04.659783 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 05:54:04.662436 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 05:54:04.664599 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 05:54:04.668839 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 05:54:04.680082 jq[1533]: false
Sep 12 05:54:04.682835 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 05:54:04.682169 oslogin_cache_refresh[1535]: Refreshing passwd entry cache
Sep 12 05:54:04.683159 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing passwd entry cache
Sep 12 05:54:04.684798 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 05:54:04.685409 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 05:54:04.686789 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 05:54:04.690418 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 05:54:04.692931 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 05:54:04.695400 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 05:54:04.697840 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 05:54:04.698132 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 05:54:04.698468 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 05:54:04.699638 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 05:54:04.700585 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting users, quitting
Sep 12 05:54:04.700585 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 05:54:04.699825 oslogin_cache_refresh[1535]: Failure getting users, quitting
Sep 12 05:54:04.699851 oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 05:54:04.710876 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing group entry cache
Sep 12 05:54:04.710865 oslogin_cache_refresh[1535]: Refreshing group entry cache
Sep 12 05:54:04.716189 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 05:54:04.716480 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 05:54:04.723437 oslogin_cache_refresh[1535]: Failure getting groups, quitting
Sep 12 05:54:04.723694 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting groups, quitting
Sep 12 05:54:04.723694 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 05:54:04.723450 oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 05:54:04.725532 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 12 05:54:04.727016 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 12 05:54:04.730654 update_engine[1545]: I20250912 05:54:04.730410 1545 main.cc:92] Flatcar Update Engine starting
Sep 12 05:54:04.731812 jq[1546]: true
Sep 12 05:54:04.743617 tar[1559]: linux-amd64/helm
Sep 12 05:54:04.745338 extend-filesystems[1534]: Found /dev/vda6
Sep 12 05:54:04.747036 (ntainerd)[1564]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 05:54:04.751786 jq[1570]: true
Sep 12 05:54:04.784110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 05:54:04.800384 extend-filesystems[1534]: Found /dev/vda9
Sep 12 05:54:04.809583 extend-filesystems[1534]: Checking size of /dev/vda9
Sep 12 05:54:04.808729 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 05:54:04.808293 dbus-daemon[1531]: [system] SELinux support is enabled
Sep 12 05:54:04.812414 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 05:54:04.812447 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 05:54:04.814083 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 05:54:04.814111 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 05:54:04.831738 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 05:54:04.832516 update_engine[1545]: I20250912 05:54:04.832172 1545 update_check_scheduler.cc:74] Next update check in 3m33s
Sep 12 05:54:04.866959 extend-filesystems[1534]: Resized partition /dev/vda9
Sep 12 05:54:04.875145 extend-filesystems[1595]: resize2fs 1.47.3 (8-Jul-2025)
Sep 12 05:54:04.876551 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 05:54:04.907837 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 05:54:05.076777 systemd-logind[1543]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 12 05:54:05.076808 systemd-logind[1543]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 05:54:05.083650 bash[1594]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 05:54:05.106390 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 05:54:05.090146 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 05:54:05.107896 extend-filesystems[1595]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 05:54:05.107896 extend-filesystems[1595]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 05:54:05.107896 extend-filesystems[1595]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 05:54:05.098593 systemd-logind[1543]: New seat seat0.
Sep 12 05:54:05.109425 extend-filesystems[1534]: Resized filesystem in /dev/vda9
Sep 12 05:54:05.102329 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 05:54:05.103944 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 05:54:05.115749 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 05:54:05.116344 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 05:54:05.189612 sshd_keygen[1566]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 05:54:05.200935 kernel: kvm_amd: TSC scaling supported
Sep 12 05:54:05.200997 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 05:54:05.201016 kernel: kvm_amd: Nested Paging enabled
Sep 12 05:54:05.201858 kernel: kvm_amd: LBR virtualization supported
Sep 12 05:54:05.201911 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 05:54:05.202832 kernel: kvm_amd: Virtual GIF supported
Sep 12 05:54:05.248500 locksmithd[1597]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 05:54:05.358460 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 05:54:05.612518 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 05:54:05.619526 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 05:54:05.623593 kernel: EDAC MC: Ver: 3.0.0
Sep 12 05:54:05.641434 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 05:54:05.641784 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 05:54:05.645169 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 05:54:05.667661 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 05:54:05.673409 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 05:54:05.677629 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 05:54:05.679339 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 05:54:05.687364 tar[1559]: linux-amd64/LICENSE
Sep 12 05:54:05.687519 tar[1559]: linux-amd64/README.md
Sep 12 05:54:05.690175 containerd[1564]: time="2025-09-12T05:54:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 05:54:05.690955 containerd[1564]: time="2025-09-12T05:54:05.690912770Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 05:54:05.702182 containerd[1564]: time="2025-09-12T05:54:05.702129845Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.128µs"
Sep 12 05:54:05.702182 containerd[1564]: time="2025-09-12T05:54:05.702176463Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 05:54:05.702238 containerd[1564]: time="2025-09-12T05:54:05.702199752Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 05:54:05.702496 containerd[1564]: time="2025-09-12T05:54:05.702408289Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 05:54:05.702496 containerd[1564]: time="2025-09-12T05:54:05.702429066Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 05:54:05.702496 containerd[1564]: time="2025-09-12T05:54:05.702459460Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 05:54:05.702598 containerd[1564]: time="2025-09-12T05:54:05.702539227Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 05:54:05.702598 containerd[1564]: time="2025-09-12T05:54:05.702572299Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 05:54:05.703842 containerd[1564]: time="2025-09-12T05:54:05.703788259Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 05:54:05.703878 containerd[1564]: time="2025-09-12T05:54:05.703844522Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 05:54:05.703878 containerd[1564]: time="2025-09-12T05:54:05.703867069Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 05:54:05.703939 containerd[1564]: time="2025-09-12T05:54:05.703880507Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 05:54:05.704049 containerd[1564]: time="2025-09-12T05:54:05.704010878Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 05:54:05.704437 containerd[1564]: time="2025-09-12T05:54:05.704388127Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 05:54:05.704640 containerd[1564]: time="2025-09-12T05:54:05.704611685Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 05:54:05.704701 containerd[1564]: time="2025-09-12T05:54:05.704687572Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 05:54:05.704803 containerd[1564]: time="2025-09-12T05:54:05.704784256Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 05:54:05.705418 containerd[1564]: time="2025-09-12T05:54:05.705398823Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 05:54:05.705610 containerd[1564]: time="2025-09-12T05:54:05.705594020Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 05:54:05.707360 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 05:54:05.713198 containerd[1564]: time="2025-09-12T05:54:05.713160979Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 05:54:05.713249 containerd[1564]: time="2025-09-12T05:54:05.713220135Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 05:54:05.713271 containerd[1564]: time="2025-09-12T05:54:05.713256813Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 05:54:05.713291 containerd[1564]: time="2025-09-12T05:54:05.713273554Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 05:54:05.713291 containerd[1564]: time="2025-09-12T05:54:05.713287363Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 05:54:05.713357 containerd[1564]: time="2025-09-12T05:54:05.713300313Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 05:54:05.713357 containerd[1564]: time="2025-09-12T05:54:05.713311981Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 05:54:05.713357 containerd[1564]: time="2025-09-12T05:54:05.713326377Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 05:54:05.713357 containerd[1564]: time="2025-09-12T05:54:05.713340069Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 05:54:05.713357 containerd[1564]: time="2025-09-12T05:54:05.713350370Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 05:54:05.713357 containerd[1564]: time="2025-09-12T05:54:05.713358872Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 05:54:05.713468 containerd[1564]: time="2025-09-12T05:54:05.713371099Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 05:54:05.713584 containerd[1564]: time="2025-09-12T05:54:05.713548450Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 05:54:05.713783 containerd[1564]: time="2025-09-12T05:54:05.713757084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 05:54:05.713783 containerd[1564]: time="2025-09-12T05:54:05.713780227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 05:54:05.713836 containerd[1564]: time="2025-09-12T05:54:05.713797955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 05:54:05.713836 containerd[1564]: time="2025-09-12T05:54:05.713809908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 05:54:05.713836 containerd[1564]: time="2025-09-12T05:54:05.713821214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 05:54:05.713836 containerd[1564]: time="2025-09-12T05:54:05.713832551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 05:54:05.713927 containerd[1564]: time="2025-09-12T05:54:05.713848178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 05:54:05.713927 containerd[1564]: time="2025-09-12T05:54:05.713872220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 05:54:05.713927 containerd[1564]: time="2025-09-12T05:54:05.713884065Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 05:54:05.713927 containerd[1564]: time="2025-09-12T05:54:05.713906787Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 05:54:05.714208 containerd[1564]: time="2025-09-12T05:54:05.714000676Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 05:54:05.714208 containerd[1564]: time="2025-09-12T05:54:05.714018914Z" level=info msg="Start snapshots syncer"
Sep 12 05:54:05.714208 containerd[1564]: time="2025-09-12T05:54:05.714051360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 05:54:05.714365 containerd[1564]: time="2025-09-12T05:54:05.714316834Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 12 05:54:05.714519 containerd[1564]: time="2025-09-12T05:54:05.714371201Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 12 05:54:05.714519 containerd[1564]: time="2025-09-12T05:54:05.714465697Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 12 05:54:05.714619 containerd[1564]: time="2025-09-12T05:54:05.714599489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 12 05:54:05.714642 containerd[1564]: time="2025-09-12T05:54:05.714634751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 12 05:54:05.714673 containerd[1564]: time="2025-09-12T05:54:05.714648042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 12 05:54:05.714673 containerd[1564]: time="2025-09-12T05:54:05.714669631Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 12 05:54:05.714724 containerd[1564]: time="2025-09-12T05:54:05.714682052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 12 05:54:05.714724 containerd[1564]: time="2025-09-12T05:54:05.714692860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 12 05:54:05.714724 containerd[1564]: time="2025-09-12T05:54:05.714706318Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 12 05:54:05.714782 containerd[1564]: time="2025-09-12T05:54:05.714728747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 12 05:54:05.714782 containerd[1564]: time="2025-09-12T05:54:05.714740856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 12 05:54:05.714782 containerd[1564]: time="2025-09-12T05:54:05.714750092Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 12 05:54:05.714834 containerd[1564]: time="2025-09-12T05:54:05.714795888Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 05:54:05.714834 containerd[1564]: time="2025-09-12T05:54:05.714810127Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 05:54:05.714834 containerd[1564]: time="2025-09-12T05:54:05.714819235Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 05:54:05.714834 containerd[1564]: time="2025-09-12T05:54:05.714827963Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 05:54:05.714834 containerd[1564]: time="2025-09-12T05:54:05.714836289Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 12 05:54:05.714926 containerd[1564]: time="2025-09-12T05:54:05.714846072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 12 05:54:05.714926 containerd[1564]: time="2025-09-12T05:54:05.714860155Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 12 05:54:05.714926 containerd[1564]: time="2025-09-12T05:54:05.714887012Z" level=info msg="runtime interface created"
Sep 12 05:54:05.714926 containerd[1564]: time="2025-09-12T05:54:05.714893462Z" level=info msg="created NRI interface"
Sep 12 05:54:05.714926 containerd[1564]: time="2025-09-12T05:54:05.714901603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 12 05:54:05.714926 containerd[1564]: time="2025-09-12T05:54:05.714912031Z" level=info msg="Connect containerd service"
Sep 12 05:54:05.715032 containerd[1564]: time="2025-09-12T05:54:05.714943363Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 05:54:05.715827
containerd[1564]: time="2025-09-12T05:54:05.715795023Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 05:54:05.885127 containerd[1564]: time="2025-09-12T05:54:05.884980464Z" level=info msg="Start subscribing containerd event" Sep 12 05:54:05.885127 containerd[1564]: time="2025-09-12T05:54:05.885076054Z" level=info msg="Start recovering state" Sep 12 05:54:05.885253 containerd[1564]: time="2025-09-12T05:54:05.885218153Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 05:54:05.885253 containerd[1564]: time="2025-09-12T05:54:05.885244257Z" level=info msg="Start event monitor" Sep 12 05:54:05.885315 containerd[1564]: time="2025-09-12T05:54:05.885264292Z" level=info msg="Start cni network conf syncer for default" Sep 12 05:54:05.885315 containerd[1564]: time="2025-09-12T05:54:05.885290522Z" level=info msg="Start streaming server" Sep 12 05:54:05.885315 containerd[1564]: time="2025-09-12T05:54:05.885302983Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 05:54:05.885379 containerd[1564]: time="2025-09-12T05:54:05.885319021Z" level=info msg="runtime interface starting up..." Sep 12 05:54:05.885379 containerd[1564]: time="2025-09-12T05:54:05.885326341Z" level=info msg="starting plugins..." Sep 12 05:54:05.885379 containerd[1564]: time="2025-09-12T05:54:05.885346336Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 05:54:05.885479 containerd[1564]: time="2025-09-12T05:54:05.885290474Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 05:54:05.886417 containerd[1564]: time="2025-09-12T05:54:05.885621574Z" level=info msg="containerd successfully booted in 0.196035s" Sep 12 05:54:05.885755 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 12 05:54:06.659196 systemd-networkd[1475]: eth0: Gained IPv6LL Sep 12 05:54:06.662873 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 05:54:06.665072 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 05:54:06.668287 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 05:54:06.671198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:54:06.673787 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 05:54:06.703792 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 05:54:06.707491 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 05:54:06.707867 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 05:54:06.709793 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 05:54:07.641148 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 05:54:07.643802 systemd[1]: Started sshd@0-10.0.0.78:22-10.0.0.1:55026.service - OpenSSH per-connection server daemon (10.0.0.1:55026). Sep 12 05:54:07.746930 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 55026 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE Sep 12 05:54:07.748856 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:54:07.757680 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 05:54:07.761636 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 05:54:07.782805 systemd-logind[1543]: New session 1 of user core. Sep 12 05:54:07.932270 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 05:54:07.936843 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 12 05:54:07.988432 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 05:54:07.992007 systemd-logind[1543]: New session c1 of user core. Sep 12 05:54:08.157244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:54:08.158914 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 05:54:08.173932 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 05:54:08.189436 systemd[1677]: Queued start job for default target default.target. Sep 12 05:54:08.200849 systemd[1677]: Created slice app.slice - User Application Slice. Sep 12 05:54:08.200875 systemd[1677]: Reached target paths.target - Paths. Sep 12 05:54:08.200916 systemd[1677]: Reached target timers.target - Timers. Sep 12 05:54:08.202582 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 05:54:08.216738 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 05:54:08.216867 systemd[1677]: Reached target sockets.target - Sockets. Sep 12 05:54:08.216905 systemd[1677]: Reached target basic.target - Basic System. Sep 12 05:54:08.216944 systemd[1677]: Reached target default.target - Main User Target. Sep 12 05:54:08.216977 systemd[1677]: Startup finished in 215ms. Sep 12 05:54:08.217646 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 05:54:08.220806 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 05:54:08.222227 systemd[1]: Startup finished in 3.250s (kernel) + 8.033s (initrd) + 6.603s (userspace) = 17.887s. Sep 12 05:54:08.296729 systemd[1]: Started sshd@1-10.0.0.78:22-10.0.0.1:55042.service - OpenSSH per-connection server daemon (10.0.0.1:55042). 
Sep 12 05:54:08.355849 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 55042 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE Sep 12 05:54:08.357251 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:54:08.362860 systemd-logind[1543]: New session 2 of user core. Sep 12 05:54:08.377716 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 05:54:08.434865 sshd[1707]: Connection closed by 10.0.0.1 port 55042 Sep 12 05:54:08.435233 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Sep 12 05:54:08.506645 systemd[1]: sshd@1-10.0.0.78:22-10.0.0.1:55042.service: Deactivated successfully. Sep 12 05:54:08.508973 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 05:54:08.509791 systemd-logind[1543]: Session 2 logged out. Waiting for processes to exit. Sep 12 05:54:08.512909 systemd[1]: Started sshd@2-10.0.0.78:22-10.0.0.1:55050.service - OpenSSH per-connection server daemon (10.0.0.1:55050). Sep 12 05:54:08.513532 systemd-logind[1543]: Removed session 2. Sep 12 05:54:08.566352 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 55050 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE Sep 12 05:54:08.568224 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:54:08.573165 systemd-logind[1543]: New session 3 of user core. Sep 12 05:54:08.582705 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 05:54:08.634551 sshd[1716]: Connection closed by 10.0.0.1 port 55050 Sep 12 05:54:08.678578 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Sep 12 05:54:08.696862 systemd[1]: sshd@2-10.0.0.78:22-10.0.0.1:55050.service: Deactivated successfully. Sep 12 05:54:08.699688 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 05:54:08.702885 systemd-logind[1543]: Session 3 logged out. Waiting for processes to exit. 
Sep 12 05:54:08.704914 systemd[1]: Started sshd@3-10.0.0.78:22-10.0.0.1:55058.service - OpenSSH per-connection server daemon (10.0.0.1:55058). Sep 12 05:54:08.706069 systemd-logind[1543]: Removed session 3. Sep 12 05:54:08.762031 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 55058 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE Sep 12 05:54:08.763729 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:54:08.770345 systemd-logind[1543]: New session 4 of user core. Sep 12 05:54:08.777698 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 05:54:08.815822 kubelet[1688]: E0912 05:54:08.815717 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 05:54:08.820287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 05:54:08.820490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 05:54:08.820935 systemd[1]: kubelet.service: Consumed 1.989s CPU time, 265.2M memory peak. Sep 12 05:54:08.834711 sshd[1725]: Connection closed by 10.0.0.1 port 55058 Sep 12 05:54:08.835065 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Sep 12 05:54:08.848428 systemd[1]: sshd@3-10.0.0.78:22-10.0.0.1:55058.service: Deactivated successfully. Sep 12 05:54:08.850437 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 05:54:08.851250 systemd-logind[1543]: Session 4 logged out. Waiting for processes to exit. Sep 12 05:54:08.854398 systemd[1]: Started sshd@4-10.0.0.78:22-10.0.0.1:55070.service - OpenSSH per-connection server daemon (10.0.0.1:55070). Sep 12 05:54:08.855109 systemd-logind[1543]: Removed session 4. 
Sep 12 05:54:08.907961 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 55070 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE Sep 12 05:54:08.909199 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:54:08.913886 systemd-logind[1543]: New session 5 of user core. Sep 12 05:54:08.923709 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 05:54:08.985516 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 05:54:08.985891 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:54:09.002713 sudo[1736]: pam_unix(sudo:session): session closed for user root Sep 12 05:54:09.004470 sshd[1735]: Connection closed by 10.0.0.1 port 55070 Sep 12 05:54:09.004955 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Sep 12 05:54:09.018498 systemd[1]: sshd@4-10.0.0.78:22-10.0.0.1:55070.service: Deactivated successfully. Sep 12 05:54:09.020623 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 05:54:09.021372 systemd-logind[1543]: Session 5 logged out. Waiting for processes to exit. Sep 12 05:54:09.024388 systemd[1]: Started sshd@5-10.0.0.78:22-10.0.0.1:55084.service - OpenSSH per-connection server daemon (10.0.0.1:55084). Sep 12 05:54:09.025095 systemd-logind[1543]: Removed session 5. Sep 12 05:54:09.093189 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 55084 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE Sep 12 05:54:09.094931 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:54:09.100075 systemd-logind[1543]: New session 6 of user core. Sep 12 05:54:09.114732 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 05:54:09.169587 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 05:54:09.170081 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:54:09.387215 sudo[1747]: pam_unix(sudo:session): session closed for user root Sep 12 05:54:09.395593 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 05:54:09.395968 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:54:09.407913 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 05:54:09.447642 augenrules[1769]: No rules Sep 12 05:54:09.448680 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 05:54:09.449003 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 05:54:09.450383 sudo[1746]: pam_unix(sudo:session): session closed for user root Sep 12 05:54:09.452141 sshd[1745]: Connection closed by 10.0.0.1 port 55084 Sep 12 05:54:09.452486 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Sep 12 05:54:09.463183 systemd[1]: sshd@5-10.0.0.78:22-10.0.0.1:55084.service: Deactivated successfully. Sep 12 05:54:09.465544 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 05:54:09.466380 systemd-logind[1543]: Session 6 logged out. Waiting for processes to exit. Sep 12 05:54:09.469916 systemd[1]: Started sshd@6-10.0.0.78:22-10.0.0.1:55100.service - OpenSSH per-connection server daemon (10.0.0.1:55100). Sep 12 05:54:09.470598 systemd-logind[1543]: Removed session 6. Sep 12 05:54:09.519535 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 55100 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE Sep 12 05:54:09.521161 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:54:09.526015 systemd-logind[1543]: New session 7 of user core. 
Sep 12 05:54:09.535706 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 05:54:09.588779 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 05:54:09.589089 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:54:10.455863 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 05:54:10.480930 (dockerd)[1802]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 05:54:10.974928 dockerd[1802]: time="2025-09-12T05:54:10.974838533Z" level=info msg="Starting up" Sep 12 05:54:10.976116 dockerd[1802]: time="2025-09-12T05:54:10.976073829Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 05:54:10.998326 dockerd[1802]: time="2025-09-12T05:54:10.998260632Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 05:54:11.572644 dockerd[1802]: time="2025-09-12T05:54:11.572537154Z" level=info msg="Loading containers: start." Sep 12 05:54:11.592597 kernel: Initializing XFRM netlink socket Sep 12 05:54:11.875317 systemd-networkd[1475]: docker0: Link UP Sep 12 05:54:11.884302 dockerd[1802]: time="2025-09-12T05:54:11.884229988Z" level=info msg="Loading containers: done." Sep 12 05:54:11.903806 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1382323145-merged.mount: Deactivated successfully. 
Sep 12 05:54:11.905524 dockerd[1802]: time="2025-09-12T05:54:11.905463808Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 05:54:11.905640 dockerd[1802]: time="2025-09-12T05:54:11.905611187Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 05:54:11.905784 dockerd[1802]: time="2025-09-12T05:54:11.905752016Z" level=info msg="Initializing buildkit" Sep 12 05:54:11.939660 dockerd[1802]: time="2025-09-12T05:54:11.939589150Z" level=info msg="Completed buildkit initialization" Sep 12 05:54:11.946010 dockerd[1802]: time="2025-09-12T05:54:11.944645136Z" level=info msg="Daemon has completed initialization" Sep 12 05:54:11.946010 dockerd[1802]: time="2025-09-12T05:54:11.944771391Z" level=info msg="API listen on /run/docker.sock" Sep 12 05:54:11.944882 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 05:54:12.953964 containerd[1564]: time="2025-09-12T05:54:12.953908783Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 05:54:15.274957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905195597.mount: Deactivated successfully. 
Sep 12 05:54:16.995035 containerd[1564]: time="2025-09-12T05:54:16.994959025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:16.995752 containerd[1564]: time="2025-09-12T05:54:16.995688887Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 05:54:16.996826 containerd[1564]: time="2025-09-12T05:54:16.996772885Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:16.999698 containerd[1564]: time="2025-09-12T05:54:16.999654547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:17.000508 containerd[1564]: time="2025-09-12T05:54:17.000450608Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 4.046495267s" Sep 12 05:54:17.000604 containerd[1564]: time="2025-09-12T05:54:17.000516299Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 05:54:17.001235 containerd[1564]: time="2025-09-12T05:54:17.001188061Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 05:54:18.583074 containerd[1564]: time="2025-09-12T05:54:18.582990219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:18.583744 containerd[1564]: time="2025-09-12T05:54:18.583679404Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 05:54:18.584771 containerd[1564]: time="2025-09-12T05:54:18.584724665Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:18.588062 containerd[1564]: time="2025-09-12T05:54:18.588014339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:18.588955 containerd[1564]: time="2025-09-12T05:54:18.588906206Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.587684372s" Sep 12 05:54:18.588955 containerd[1564]: time="2025-09-12T05:54:18.588955714Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 05:54:18.589586 containerd[1564]: time="2025-09-12T05:54:18.589504825Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 05:54:18.843127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 05:54:18.844816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:54:19.115964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 05:54:19.121506 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 05:54:19.307358 kubelet[2093]: E0912 05:54:19.307277 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 05:54:19.314217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 05:54:19.314422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 05:54:19.314908 systemd[1]: kubelet.service: Consumed 433ms CPU time, 111.3M memory peak. Sep 12 05:54:20.696939 containerd[1564]: time="2025-09-12T05:54:20.696862018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:20.699583 containerd[1564]: time="2025-09-12T05:54:20.697958261Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 05:54:20.699583 containerd[1564]: time="2025-09-12T05:54:20.699262743Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:20.703367 containerd[1564]: time="2025-09-12T05:54:20.703312425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:20.704124 containerd[1564]: time="2025-09-12T05:54:20.704096407Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id 
\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 2.114562886s" Sep 12 05:54:20.704168 containerd[1564]: time="2025-09-12T05:54:20.704126923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 05:54:20.704596 containerd[1564]: time="2025-09-12T05:54:20.704550533Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 05:54:21.603189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3869032430.mount: Deactivated successfully. Sep 12 05:54:21.990924 containerd[1564]: time="2025-09-12T05:54:21.990781497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:21.991608 containerd[1564]: time="2025-09-12T05:54:21.991518313Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 05:54:21.992794 containerd[1564]: time="2025-09-12T05:54:21.992748404Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:21.994860 containerd[1564]: time="2025-09-12T05:54:21.994811142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:21.995159 containerd[1564]: time="2025-09-12T05:54:21.995127050Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo 
tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.290536799s" Sep 12 05:54:21.995199 containerd[1564]: time="2025-09-12T05:54:21.995158127Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 05:54:21.995743 containerd[1564]: time="2025-09-12T05:54:21.995707894Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 05:54:22.539631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168266865.mount: Deactivated successfully. Sep 12 05:54:23.799591 containerd[1564]: time="2025-09-12T05:54:23.799494704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:23.800618 containerd[1564]: time="2025-09-12T05:54:23.800172232Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 05:54:23.801450 containerd[1564]: time="2025-09-12T05:54:23.801387565Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:23.804021 containerd[1564]: time="2025-09-12T05:54:23.803963454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:23.804802 containerd[1564]: time="2025-09-12T05:54:23.804770057Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.809028312s" Sep 12 05:54:23.804853 containerd[1564]: time="2025-09-12T05:54:23.804807145Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 05:54:23.805350 containerd[1564]: time="2025-09-12T05:54:23.805328225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 05:54:24.314287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430737058.mount: Deactivated successfully. Sep 12 05:54:24.322402 containerd[1564]: time="2025-09-12T05:54:24.322322829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 05:54:24.323202 containerd[1564]: time="2025-09-12T05:54:24.323129907Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 05:54:24.324531 containerd[1564]: time="2025-09-12T05:54:24.324474590Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 05:54:24.326350 containerd[1564]: time="2025-09-12T05:54:24.326293812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 05:54:24.326883 containerd[1564]: time="2025-09-12T05:54:24.326838847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 521.486458ms" Sep 12 05:54:24.326883 containerd[1564]: time="2025-09-12T05:54:24.326870975Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 05:54:24.327555 containerd[1564]: time="2025-09-12T05:54:24.327498645Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 05:54:24.937744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4198621218.mount: Deactivated successfully. Sep 12 05:54:27.384584 containerd[1564]: time="2025-09-12T05:54:27.384483489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:27.385219 containerd[1564]: time="2025-09-12T05:54:27.385158829Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 05:54:27.386450 containerd[1564]: time="2025-09-12T05:54:27.386409060Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:27.389026 containerd[1564]: time="2025-09-12T05:54:27.388987046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:27.390081 containerd[1564]: time="2025-09-12T05:54:27.390025185Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 3.062483706s" Sep 12 05:54:27.390081 containerd[1564]: time="2025-09-12T05:54:27.390062006Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 05:54:29.343159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 05:54:29.344881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:54:29.564461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:54:29.577881 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 05:54:29.687151 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:54:29.782693 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 05:54:29.782984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:54:29.783210 systemd[1]: kubelet.service: Consumed 198ms CPU time, 107M memory peak. Sep 12 05:54:29.785757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:54:29.810654 systemd[1]: Reload requested from client PID 2269 ('systemctl') (unit session-7.scope)... Sep 12 05:54:29.810672 systemd[1]: Reloading... Sep 12 05:54:29.896718 zram_generator::config[2316]: No configuration found. Sep 12 05:54:30.177620 systemd[1]: Reloading finished in 366 ms. Sep 12 05:54:30.248283 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 05:54:30.248382 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 05:54:30.248738 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:54:30.248805 systemd[1]: kubelet.service: Consumed 177ms CPU time, 98.3M memory peak. Sep 12 05:54:30.250856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 05:54:30.425406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:54:30.435903 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 05:54:30.482480 kubelet[2361]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:54:30.482480 kubelet[2361]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 05:54:30.482480 kubelet[2361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:54:30.482923 kubelet[2361]: I0912 05:54:30.482545 2361 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 05:54:30.850151 kubelet[2361]: I0912 05:54:30.850099 2361 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 05:54:30.850151 kubelet[2361]: I0912 05:54:30.850130 2361 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 05:54:30.850444 kubelet[2361]: I0912 05:54:30.850417 2361 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 05:54:30.874859 kubelet[2361]: E0912 05:54:30.874792 2361 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.78:6443: 
connect: connection refused" logger="UnhandledError" Sep 12 05:54:30.876600 kubelet[2361]: I0912 05:54:30.876546 2361 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 05:54:30.884066 kubelet[2361]: I0912 05:54:30.884029 2361 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 05:54:30.891745 kubelet[2361]: I0912 05:54:30.891683 2361 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 05:54:30.892415 kubelet[2361]: I0912 05:54:30.892379 2361 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 05:54:30.892647 kubelet[2361]: I0912 05:54:30.892594 2361 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 05:54:30.892853 kubelet[2361]: I0912 05:54:30.892633 2361 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 05:54:30.892961 kubelet[2361]: I0912 05:54:30.892855 2361 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 05:54:30.892961 kubelet[2361]: I0912 05:54:30.892864 2361 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 05:54:30.893047 kubelet[2361]: I0912 05:54:30.893029 2361 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:54:30.895744 kubelet[2361]: I0912 05:54:30.895710 2361 kubelet.go:408] "Attempting 
to sync node with API server" Sep 12 05:54:30.895744 kubelet[2361]: I0912 05:54:30.895735 2361 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 05:54:30.895828 kubelet[2361]: I0912 05:54:30.895787 2361 kubelet.go:314] "Adding apiserver pod source" Sep 12 05:54:30.895828 kubelet[2361]: I0912 05:54:30.895819 2361 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 05:54:30.900826 kubelet[2361]: I0912 05:54:30.900784 2361 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 05:54:30.901245 kubelet[2361]: I0912 05:54:30.901220 2361 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 05:54:30.901890 kubelet[2361]: W0912 05:54:30.901808 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 12 05:54:30.901890 kubelet[2361]: E0912 05:54:30.901875 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:54:30.902341 kubelet[2361]: W0912 05:54:30.902297 2361 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 12 05:54:30.903226 kubelet[2361]: W0912 05:54:30.903151 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 12 05:54:30.903291 kubelet[2361]: E0912 05:54:30.903223 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:54:30.904448 kubelet[2361]: I0912 05:54:30.904412 2361 server.go:1274] "Started kubelet" Sep 12 05:54:30.905157 kubelet[2361]: I0912 05:54:30.904924 2361 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 05:54:30.905868 kubelet[2361]: I0912 05:54:30.905843 2361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 05:54:30.906078 kubelet[2361]: I0912 05:54:30.905842 2361 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 05:54:30.906239 kubelet[2361]: I0912 05:54:30.906211 2361 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 05:54:30.907630 kubelet[2361]: I0912 05:54:30.907505 2361 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 05:54:30.907765 kubelet[2361]: I0912 05:54:30.907747 2361 server.go:449] "Adding debug handlers to kubelet server" Sep 12 05:54:30.910924 kubelet[2361]: E0912 05:54:30.908637 2361 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.78:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.78:6443: connect: connection 
refused" event="&Event{ObjectMeta:{localhost.1864733f716c20bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 05:54:30.904381628 +0000 UTC m=+0.464495059,LastTimestamp:2025-09-12 05:54:30.904381628 +0000 UTC m=+0.464495059,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 05:54:30.910924 kubelet[2361]: I0912 05:54:30.909951 2361 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 05:54:30.910924 kubelet[2361]: I0912 05:54:30.910103 2361 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 05:54:30.910924 kubelet[2361]: I0912 05:54:30.910177 2361 reconciler.go:26] "Reconciler: start to sync state" Sep 12 05:54:30.910924 kubelet[2361]: W0912 05:54:30.910906 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 12 05:54:30.911174 kubelet[2361]: E0912 05:54:30.910940 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:54:30.911174 kubelet[2361]: I0912 05:54:30.911023 2361 factory.go:221] Registration of the systemd container factory successfully Sep 12 05:54:30.911243 kubelet[2361]: I0912 05:54:30.911198 2361 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 05:54:30.911402 kubelet[2361]: E0912 05:54:30.911374 2361 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 05:54:30.911584 kubelet[2361]: E0912 05:54:30.911536 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:30.912074 kubelet[2361]: E0912 05:54:30.912041 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="200ms" Sep 12 05:54:30.912630 kubelet[2361]: I0912 05:54:30.912597 2361 factory.go:221] Registration of the containerd container factory successfully Sep 12 05:54:30.931545 kubelet[2361]: I0912 05:54:30.931405 2361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 05:54:30.932723 kubelet[2361]: I0912 05:54:30.932695 2361 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 05:54:30.932764 kubelet[2361]: I0912 05:54:30.932736 2361 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 05:54:30.932799 kubelet[2361]: I0912 05:54:30.932769 2361 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 05:54:30.932985 kubelet[2361]: E0912 05:54:30.932813 2361 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 05:54:30.933112 kubelet[2361]: I0912 05:54:30.933097 2361 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 05:54:30.933180 kubelet[2361]: I0912 05:54:30.933169 2361 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 05:54:30.933242 kubelet[2361]: I0912 05:54:30.933233 2361 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:54:30.935145 kubelet[2361]: W0912 05:54:30.935082 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 12 05:54:30.935200 kubelet[2361]: E0912 05:54:30.935154 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:54:31.011893 kubelet[2361]: E0912 05:54:31.011830 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:31.033017 kubelet[2361]: E0912 05:54:31.032955 2361 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 05:54:31.112412 kubelet[2361]: E0912 05:54:31.112228 2361 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:31.112729 kubelet[2361]: E0912 05:54:31.112672 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="400ms" Sep 12 05:54:31.213166 kubelet[2361]: E0912 05:54:31.213060 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:31.233297 kubelet[2361]: E0912 05:54:31.233254 2361 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 05:54:31.313861 kubelet[2361]: E0912 05:54:31.313789 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:31.382739 kubelet[2361]: E0912 05:54:31.382503 2361 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.78:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.78:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864733f716c20bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 05:54:30.904381628 +0000 UTC m=+0.464495059,LastTimestamp:2025-09-12 05:54:30.904381628 +0000 UTC m=+0.464495059,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 05:54:31.388585 kubelet[2361]: I0912 05:54:31.388540 2361 policy_none.go:49] "None policy: Start" Sep 12 05:54:31.389398 kubelet[2361]: I0912 05:54:31.389372 2361 memory_manager.go:170] 
"Starting memorymanager" policy="None" Sep 12 05:54:31.389398 kubelet[2361]: I0912 05:54:31.389396 2361 state_mem.go:35] "Initializing new in-memory state store" Sep 12 05:54:31.397497 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 05:54:31.411185 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 05:54:31.414139 kubelet[2361]: E0912 05:54:31.414100 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:31.415155 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 05:54:31.434915 kubelet[2361]: I0912 05:54:31.434855 2361 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 05:54:31.435273 kubelet[2361]: I0912 05:54:31.435163 2361 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 05:54:31.435273 kubelet[2361]: I0912 05:54:31.435185 2361 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 05:54:31.435614 kubelet[2361]: I0912 05:54:31.435595 2361 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 05:54:31.436903 kubelet[2361]: E0912 05:54:31.436740 2361 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 05:54:31.513928 kubelet[2361]: E0912 05:54:31.513867 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="800ms" Sep 12 05:54:31.537335 kubelet[2361]: I0912 05:54:31.537292 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 05:54:31.537939 kubelet[2361]: 
E0912 05:54:31.537876 2361 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Sep 12 05:54:31.643545 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 12 05:54:31.666587 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 12 05:54:31.689517 systemd[1]: Created slice kubepods-burstable-pod6ea0471c0bca591e371cb7cff55085f3.slice - libcontainer container kubepods-burstable-pod6ea0471c0bca591e371cb7cff55085f3.slice. Sep 12 05:54:31.715133 kubelet[2361]: I0912 05:54:31.715076 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:54:31.715133 kubelet[2361]: I0912 05:54:31.715128 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:54:31.715299 kubelet[2361]: I0912 05:54:31.715153 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 12 05:54:31.715299 kubelet[2361]: I0912 05:54:31.715175 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ea0471c0bca591e371cb7cff55085f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea0471c0bca591e371cb7cff55085f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:54:31.715299 kubelet[2361]: I0912 05:54:31.715196 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ea0471c0bca591e371cb7cff55085f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ea0471c0bca591e371cb7cff55085f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:54:31.715299 kubelet[2361]: I0912 05:54:31.715212 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:54:31.715299 kubelet[2361]: I0912 05:54:31.715228 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:54:31.715415 kubelet[2361]: I0912 05:54:31.715246 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " 
pod="kube-system/kube-scheduler-localhost" Sep 12 05:54:31.715415 kubelet[2361]: I0912 05:54:31.715260 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ea0471c0bca591e371cb7cff55085f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea0471c0bca591e371cb7cff55085f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:54:31.739380 kubelet[2361]: I0912 05:54:31.739341 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 05:54:31.739845 kubelet[2361]: E0912 05:54:31.739792 2361 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Sep 12 05:54:31.792293 kubelet[2361]: W0912 05:54:31.792214 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 12 05:54:31.792293 kubelet[2361]: E0912 05:54:31.792284 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:54:31.850396 kubelet[2361]: W0912 05:54:31.850333 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 12 05:54:31.850502 kubelet[2361]: E0912 05:54:31.850396 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:54:31.912436 kubelet[2361]: W0912 05:54:31.912252 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 12 05:54:31.912436 kubelet[2361]: E0912 05:54:31.912312 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:54:31.964272 kubelet[2361]: E0912 05:54:31.964230 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:31.964900 containerd[1564]: time="2025-09-12T05:54:31.964834554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 05:54:31.988435 kubelet[2361]: E0912 05:54:31.988099 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:31.988674 containerd[1564]: time="2025-09-12T05:54:31.988628085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 05:54:31.992783 containerd[1564]: time="2025-09-12T05:54:31.992718945Z" level=info msg="connecting to shim 
050d22b0e70a6fe23a3306ad775780f2a5a3e12a41736e00a71bef0891f49992" address="unix:///run/containerd/s/8e54ab09e3200c643aab71ec4c612e391dfe16ce2057bd78a3dbf3333e11be78" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:54:31.993110 kubelet[2361]: E0912 05:54:31.993068 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:31.993933 containerd[1564]: time="2025-09-12T05:54:31.993655399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ea0471c0bca591e371cb7cff55085f3,Namespace:kube-system,Attempt:0,}" Sep 12 05:54:32.015674 kubelet[2361]: W0912 05:54:32.015621 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 12 05:54:32.015781 kubelet[2361]: E0912 05:54:32.015691 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:54:32.030727 systemd[1]: Started cri-containerd-050d22b0e70a6fe23a3306ad775780f2a5a3e12a41736e00a71bef0891f49992.scope - libcontainer container 050d22b0e70a6fe23a3306ad775780f2a5a3e12a41736e00a71bef0891f49992. 
Sep 12 05:54:32.034900 containerd[1564]: time="2025-09-12T05:54:32.034851431Z" level=info msg="connecting to shim 11c537da8924020b7eea633a6191b311ad84e09d8b335ee8981901e720296a85" address="unix:///run/containerd/s/592258907afa212435b2b16dc3353a948863cc6997688c56a9d4b0c3018e7409" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:54:32.040589 containerd[1564]: time="2025-09-12T05:54:32.040516625Z" level=info msg="connecting to shim ac215ac5da8d37fa92a6046c9639e4071814ee474d83995da5c7e50779b2a218" address="unix:///run/containerd/s/1311d96aa68c742842f2d1b10dc6854b708f375cf8bb8b1e029fb05fa12692d4" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:54:32.066724 systemd[1]: Started cri-containerd-11c537da8924020b7eea633a6191b311ad84e09d8b335ee8981901e720296a85.scope - libcontainer container 11c537da8924020b7eea633a6191b311ad84e09d8b335ee8981901e720296a85. Sep 12 05:54:32.070644 systemd[1]: Started cri-containerd-ac215ac5da8d37fa92a6046c9639e4071814ee474d83995da5c7e50779b2a218.scope - libcontainer container ac215ac5da8d37fa92a6046c9639e4071814ee474d83995da5c7e50779b2a218. 
Sep 12 05:54:32.103423 containerd[1564]: time="2025-09-12T05:54:32.103376634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"050d22b0e70a6fe23a3306ad775780f2a5a3e12a41736e00a71bef0891f49992\"" Sep 12 05:54:32.104817 kubelet[2361]: E0912 05:54:32.104783 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:32.108940 containerd[1564]: time="2025-09-12T05:54:32.107889443Z" level=info msg="CreateContainer within sandbox \"050d22b0e70a6fe23a3306ad775780f2a5a3e12a41736e00a71bef0891f49992\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 05:54:32.120445 containerd[1564]: time="2025-09-12T05:54:32.120420073Z" level=info msg="Container 502f20fbb5521d8737c3b97c99be468a406a5ed1b252d2c8e9d90008fb8571f6: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:54:32.130160 containerd[1564]: time="2025-09-12T05:54:32.130115103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ea0471c0bca591e371cb7cff55085f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"11c537da8924020b7eea633a6191b311ad84e09d8b335ee8981901e720296a85\"" Sep 12 05:54:32.130884 kubelet[2361]: E0912 05:54:32.130837 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:32.131015 containerd[1564]: time="2025-09-12T05:54:32.130983453Z" level=info msg="CreateContainer within sandbox \"050d22b0e70a6fe23a3306ad775780f2a5a3e12a41736e00a71bef0891f49992\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"502f20fbb5521d8737c3b97c99be468a406a5ed1b252d2c8e9d90008fb8571f6\"" Sep 12 05:54:32.131429 
containerd[1564]: time="2025-09-12T05:54:32.131399146Z" level=info msg="StartContainer for \"502f20fbb5521d8737c3b97c99be468a406a5ed1b252d2c8e9d90008fb8571f6\"" Sep 12 05:54:32.132288 containerd[1564]: time="2025-09-12T05:54:32.132257244Z" level=info msg="CreateContainer within sandbox \"11c537da8924020b7eea633a6191b311ad84e09d8b335ee8981901e720296a85\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 05:54:32.132552 containerd[1564]: time="2025-09-12T05:54:32.132522836Z" level=info msg="connecting to shim 502f20fbb5521d8737c3b97c99be468a406a5ed1b252d2c8e9d90008fb8571f6" address="unix:///run/containerd/s/8e54ab09e3200c643aab71ec4c612e391dfe16ce2057bd78a3dbf3333e11be78" protocol=ttrpc version=3 Sep 12 05:54:32.141920 kubelet[2361]: I0912 05:54:32.141888 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 05:54:32.142283 kubelet[2361]: E0912 05:54:32.142246 2361 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Sep 12 05:54:32.144167 containerd[1564]: time="2025-09-12T05:54:32.144133133Z" level=info msg="Container 3be796e4e151041306e19f6521617df1da25e762a310814a3395edc308c813cc: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:54:32.144922 containerd[1564]: time="2025-09-12T05:54:32.144889638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac215ac5da8d37fa92a6046c9639e4071814ee474d83995da5c7e50779b2a218\"" Sep 12 05:54:32.145541 kubelet[2361]: E0912 05:54:32.145505 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:32.147340 containerd[1564]: time="2025-09-12T05:54:32.147239039Z" level=info 
msg="CreateContainer within sandbox \"ac215ac5da8d37fa92a6046c9639e4071814ee474d83995da5c7e50779b2a218\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 05:54:32.151885 containerd[1564]: time="2025-09-12T05:54:32.151841648Z" level=info msg="CreateContainer within sandbox \"11c537da8924020b7eea633a6191b311ad84e09d8b335ee8981901e720296a85\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3be796e4e151041306e19f6521617df1da25e762a310814a3395edc308c813cc\"" Sep 12 05:54:32.153303 containerd[1564]: time="2025-09-12T05:54:32.153003234Z" level=info msg="StartContainer for \"3be796e4e151041306e19f6521617df1da25e762a310814a3395edc308c813cc\"" Sep 12 05:54:32.156501 containerd[1564]: time="2025-09-12T05:54:32.156468466Z" level=info msg="connecting to shim 3be796e4e151041306e19f6521617df1da25e762a310814a3395edc308c813cc" address="unix:///run/containerd/s/592258907afa212435b2b16dc3353a948863cc6997688c56a9d4b0c3018e7409" protocol=ttrpc version=3 Sep 12 05:54:32.159071 systemd[1]: Started cri-containerd-502f20fbb5521d8737c3b97c99be468a406a5ed1b252d2c8e9d90008fb8571f6.scope - libcontainer container 502f20fbb5521d8737c3b97c99be468a406a5ed1b252d2c8e9d90008fb8571f6. 
Sep 12 05:54:32.160332 containerd[1564]: time="2025-09-12T05:54:32.160200501Z" level=info msg="Container b1f64f41dbded7a0272d645a50d786c4986405d09a522a878ef245b8c1ad1cd1: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:54:32.169871 containerd[1564]: time="2025-09-12T05:54:32.169184009Z" level=info msg="CreateContainer within sandbox \"ac215ac5da8d37fa92a6046c9639e4071814ee474d83995da5c7e50779b2a218\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b1f64f41dbded7a0272d645a50d786c4986405d09a522a878ef245b8c1ad1cd1\"" Sep 12 05:54:32.169871 containerd[1564]: time="2025-09-12T05:54:32.169811468Z" level=info msg="StartContainer for \"b1f64f41dbded7a0272d645a50d786c4986405d09a522a878ef245b8c1ad1cd1\"" Sep 12 05:54:32.171356 containerd[1564]: time="2025-09-12T05:54:32.171314607Z" level=info msg="connecting to shim b1f64f41dbded7a0272d645a50d786c4986405d09a522a878ef245b8c1ad1cd1" address="unix:///run/containerd/s/1311d96aa68c742842f2d1b10dc6854b708f375cf8bb8b1e029fb05fa12692d4" protocol=ttrpc version=3 Sep 12 05:54:32.181759 systemd[1]: Started cri-containerd-3be796e4e151041306e19f6521617df1da25e762a310814a3395edc308c813cc.scope - libcontainer container 3be796e4e151041306e19f6521617df1da25e762a310814a3395edc308c813cc. Sep 12 05:54:32.247988 systemd[1]: Started cri-containerd-b1f64f41dbded7a0272d645a50d786c4986405d09a522a878ef245b8c1ad1cd1.scope - libcontainer container b1f64f41dbded7a0272d645a50d786c4986405d09a522a878ef245b8c1ad1cd1. 
Sep 12 05:54:32.283049 containerd[1564]: time="2025-09-12T05:54:32.282926958Z" level=info msg="StartContainer for \"502f20fbb5521d8737c3b97c99be468a406a5ed1b252d2c8e9d90008fb8571f6\" returns successfully" Sep 12 05:54:32.284375 containerd[1564]: time="2025-09-12T05:54:32.284330396Z" level=info msg="StartContainer for \"3be796e4e151041306e19f6521617df1da25e762a310814a3395edc308c813cc\" returns successfully" Sep 12 05:54:32.315230 kubelet[2361]: E0912 05:54:32.315184 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="1.6s" Sep 12 05:54:32.322736 containerd[1564]: time="2025-09-12T05:54:32.322654882Z" level=info msg="StartContainer for \"b1f64f41dbded7a0272d645a50d786c4986405d09a522a878ef245b8c1ad1cd1\" returns successfully" Sep 12 05:54:32.943912 kubelet[2361]: I0912 05:54:32.943847 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 05:54:32.945054 kubelet[2361]: E0912 05:54:32.944085 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:32.947252 kubelet[2361]: E0912 05:54:32.947214 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:32.950148 kubelet[2361]: E0912 05:54:32.950077 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:33.653267 kubelet[2361]: I0912 05:54:33.653219 2361 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 05:54:33.653267 kubelet[2361]: E0912 05:54:33.653264 2361 
kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 05:54:33.667897 kubelet[2361]: E0912 05:54:33.667851 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:33.768210 kubelet[2361]: E0912 05:54:33.768161 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:33.869261 kubelet[2361]: E0912 05:54:33.869195 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:33.951613 kubelet[2361]: E0912 05:54:33.951473 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:33.969512 kubelet[2361]: E0912 05:54:33.969439 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:34.069799 kubelet[2361]: E0912 05:54:34.069724 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:34.170964 kubelet[2361]: E0912 05:54:34.170879 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:34.271694 kubelet[2361]: E0912 05:54:34.271500 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:34.372197 kubelet[2361]: E0912 05:54:34.372123 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:34.473376 kubelet[2361]: E0912 05:54:34.473283 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:34.573972 kubelet[2361]: E0912 05:54:34.573823 2361 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:34.674739 kubelet[2361]: E0912 05:54:34.674670 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:34.775245 kubelet[2361]: E0912 05:54:34.775193 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:34.875835 kubelet[2361]: E0912 05:54:34.875782 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:34.976556 kubelet[2361]: E0912 05:54:34.976491 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:54:35.373619 systemd[1]: Reload requested from client PID 2633 ('systemctl') (unit session-7.scope)... Sep 12 05:54:35.373642 systemd[1]: Reloading... Sep 12 05:54:35.466638 zram_generator::config[2679]: No configuration found. Sep 12 05:54:35.710207 systemd[1]: Reloading finished in 336 ms. Sep 12 05:54:35.743837 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:54:35.758067 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 05:54:35.758447 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:54:35.758516 systemd[1]: kubelet.service: Consumed 903ms CPU time, 131.4M memory peak. Sep 12 05:54:35.761090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:54:35.977238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 05:54:35.988998 (kubelet)[2721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 05:54:36.039674 kubelet[2721]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:54:36.039674 kubelet[2721]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 05:54:36.039674 kubelet[2721]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:54:36.040190 kubelet[2721]: I0912 05:54:36.039720 2721 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 05:54:36.051292 kubelet[2721]: I0912 05:54:36.051227 2721 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 05:54:36.051292 kubelet[2721]: I0912 05:54:36.051261 2721 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 05:54:36.051515 kubelet[2721]: I0912 05:54:36.051504 2721 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 05:54:36.052787 kubelet[2721]: I0912 05:54:36.052744 2721 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 12 05:54:36.054694 kubelet[2721]: I0912 05:54:36.054635 2721 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 05:54:36.058423 kubelet[2721]: I0912 05:54:36.058384 2721 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 05:54:36.063007 kubelet[2721]: I0912 05:54:36.062971 2721 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 05:54:36.063089 kubelet[2721]: I0912 05:54:36.063075 2721 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 05:54:36.063228 kubelet[2721]: I0912 05:54:36.063191 2721 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 05:54:36.063386 kubelet[2721]: I0912 05:54:36.063221 2721 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagef
s.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 05:54:36.063490 kubelet[2721]: I0912 05:54:36.063393 2721 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 05:54:36.063490 kubelet[2721]: I0912 05:54:36.063401 2721 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 05:54:36.063490 kubelet[2721]: I0912 05:54:36.063429 2721 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:54:36.063624 kubelet[2721]: I0912 05:54:36.063530 2721 kubelet.go:408] "Attempting to sync node with API server" Sep 12 05:54:36.063624 kubelet[2721]: I0912 05:54:36.063543 2721 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 05:54:36.063624 kubelet[2721]: I0912 05:54:36.063597 2721 kubelet.go:314] "Adding apiserver pod source" Sep 12 05:54:36.063624 kubelet[2721]: I0912 05:54:36.063610 2721 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 05:54:36.065867 kubelet[2721]: I0912 05:54:36.065836 2721 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 05:54:36.066349 kubelet[2721]: I0912 05:54:36.066321 2721 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 05:54:36.069028 kubelet[2721]: I0912 05:54:36.069001 2721 server.go:1274] "Started kubelet" Sep 12 05:54:36.071436 kubelet[2721]: I0912 05:54:36.071362 2721 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" 
qps=100 burstTokens=10 Sep 12 05:54:36.072791 kubelet[2721]: I0912 05:54:36.072724 2721 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 05:54:36.073109 kubelet[2721]: I0912 05:54:36.073081 2721 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 05:54:36.074699 kubelet[2721]: I0912 05:54:36.074677 2721 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 05:54:36.075740 kubelet[2721]: I0912 05:54:36.075703 2721 server.go:449] "Adding debug handlers to kubelet server" Sep 12 05:54:36.077345 kubelet[2721]: I0912 05:54:36.077042 2721 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 05:54:36.077754 kubelet[2721]: I0912 05:54:36.077718 2721 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 05:54:36.079381 kubelet[2721]: I0912 05:54:36.079341 2721 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 05:54:36.079534 kubelet[2721]: I0912 05:54:36.079505 2721 reconciler.go:26] "Reconciler: start to sync state" Sep 12 05:54:36.082826 kubelet[2721]: I0912 05:54:36.082784 2721 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 05:54:36.083949 kubelet[2721]: E0912 05:54:36.083855 2721 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 05:54:36.084236 kubelet[2721]: I0912 05:54:36.084213 2721 factory.go:221] Registration of the containerd container factory successfully Sep 12 05:54:36.084236 kubelet[2721]: I0912 05:54:36.084229 2721 factory.go:221] Registration of the systemd container factory successfully Sep 12 05:54:36.091816 kubelet[2721]: I0912 05:54:36.091637 2721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 05:54:36.093017 kubelet[2721]: I0912 05:54:36.092996 2721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 05:54:36.093132 kubelet[2721]: I0912 05:54:36.093117 2721 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 05:54:36.093240 kubelet[2721]: I0912 05:54:36.093225 2721 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 05:54:36.093378 kubelet[2721]: E0912 05:54:36.093343 2721 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 05:54:36.127990 kubelet[2721]: I0912 05:54:36.127950 2721 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 05:54:36.127990 kubelet[2721]: I0912 05:54:36.127968 2721 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 05:54:36.127990 kubelet[2721]: I0912 05:54:36.127991 2721 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:54:36.128239 kubelet[2721]: I0912 05:54:36.128183 2721 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 05:54:36.128239 kubelet[2721]: I0912 05:54:36.128193 2721 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 05:54:36.128239 kubelet[2721]: I0912 05:54:36.128211 2721 policy_none.go:49] "None policy: Start" Sep 12 05:54:36.129099 kubelet[2721]: I0912 05:54:36.129067 2721 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 05:54:36.129163 
kubelet[2721]: I0912 05:54:36.129096 2721 state_mem.go:35] "Initializing new in-memory state store" Sep 12 05:54:36.129344 kubelet[2721]: I0912 05:54:36.129319 2721 state_mem.go:75] "Updated machine memory state" Sep 12 05:54:36.134359 kubelet[2721]: I0912 05:54:36.134130 2721 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 05:54:36.134359 kubelet[2721]: I0912 05:54:36.134354 2721 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 05:54:36.134482 kubelet[2721]: I0912 05:54:36.134369 2721 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 05:54:36.134636 kubelet[2721]: I0912 05:54:36.134608 2721 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 05:54:36.242024 kubelet[2721]: I0912 05:54:36.241895 2721 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 05:54:36.248947 kubelet[2721]: I0912 05:54:36.248902 2721 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 05:54:36.249143 kubelet[2721]: I0912 05:54:36.248997 2721 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 05:54:36.381105 kubelet[2721]: I0912 05:54:36.381053 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:54:36.381105 kubelet[2721]: I0912 05:54:36.381099 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:54:36.381105 kubelet[2721]: I0912 05:54:36.381119 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ea0471c0bca591e371cb7cff55085f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea0471c0bca591e371cb7cff55085f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:54:36.381328 kubelet[2721]: I0912 05:54:36.381133 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ea0471c0bca591e371cb7cff55085f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea0471c0bca591e371cb7cff55085f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:54:36.381328 kubelet[2721]: I0912 05:54:36.381148 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:54:36.381328 kubelet[2721]: I0912 05:54:36.381167 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:54:36.381328 kubelet[2721]: I0912 05:54:36.381182 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:54:36.381328 kubelet[2721]: I0912 05:54:36.381199 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 05:54:36.381437 kubelet[2721]: I0912 05:54:36.381256 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ea0471c0bca591e371cb7cff55085f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ea0471c0bca591e371cb7cff55085f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:54:36.503150 kubelet[2721]: E0912 05:54:36.502953 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:36.503150 kubelet[2721]: E0912 05:54:36.502962 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:36.503150 kubelet[2721]: E0912 05:54:36.503024 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:37.064272 kubelet[2721]: I0912 05:54:37.064227 2721 apiserver.go:52] "Watching apiserver" Sep 12 05:54:37.080437 kubelet[2721]: I0912 05:54:37.080405 2721 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 05:54:37.107697 kubelet[2721]: E0912 05:54:37.107665 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:37.107830 kubelet[2721]: E0912 05:54:37.107790 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:37.108039 kubelet[2721]: E0912 05:54:37.108009 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:37.127340 kubelet[2721]: I0912 05:54:37.127202 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.127178804 podStartE2EDuration="1.127178804s" podCreationTimestamp="2025-09-12 05:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:54:37.126964942 +0000 UTC m=+1.133416921" watchObservedRunningTime="2025-09-12 05:54:37.127178804 +0000 UTC m=+1.133630784" Sep 12 05:54:37.140738 kubelet[2721]: I0912 05:54:37.140673 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.140654519 podStartE2EDuration="1.140654519s" podCreationTimestamp="2025-09-12 05:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:54:37.139094042 +0000 UTC m=+1.145546031" watchObservedRunningTime="2025-09-12 05:54:37.140654519 +0000 UTC m=+1.147106498" Sep 12 05:54:37.140900 kubelet[2721]: I0912 05:54:37.140774 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.140769591 podStartE2EDuration="1.140769591s" podCreationTimestamp="2025-09-12 05:54:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:54:37.133843947 +0000 UTC m=+1.140295926" watchObservedRunningTime="2025-09-12 05:54:37.140769591 +0000 UTC m=+1.147221570" Sep 12 05:54:38.108766 kubelet[2721]: E0912 05:54:38.108734 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:38.340255 kubelet[2721]: E0912 05:54:38.340205 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:40.184372 kubelet[2721]: I0912 05:54:40.184326 2721 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 05:54:40.184944 kubelet[2721]: I0912 05:54:40.184823 2721 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 05:54:40.184976 containerd[1564]: time="2025-09-12T05:54:40.184669001Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 05:54:41.184329 systemd[1]: Created slice kubepods-besteffort-pod482395d3_caa5_412e_a162_299f219cca1d.slice - libcontainer container kubepods-besteffort-pod482395d3_caa5_412e_a162_299f219cca1d.slice. 
Sep 12 05:54:41.252585 kubelet[2721]: I0912 05:54:41.252516 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/482395d3-caa5-412e-a162-299f219cca1d-kube-proxy\") pod \"kube-proxy-52bd7\" (UID: \"482395d3-caa5-412e-a162-299f219cca1d\") " pod="kube-system/kube-proxy-52bd7" Sep 12 05:54:41.252585 kubelet[2721]: I0912 05:54:41.252551 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs2gs\" (UniqueName: \"kubernetes.io/projected/482395d3-caa5-412e-a162-299f219cca1d-kube-api-access-vs2gs\") pod \"kube-proxy-52bd7\" (UID: \"482395d3-caa5-412e-a162-299f219cca1d\") " pod="kube-system/kube-proxy-52bd7" Sep 12 05:54:41.252585 kubelet[2721]: I0912 05:54:41.252588 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/482395d3-caa5-412e-a162-299f219cca1d-xtables-lock\") pod \"kube-proxy-52bd7\" (UID: \"482395d3-caa5-412e-a162-299f219cca1d\") " pod="kube-system/kube-proxy-52bd7" Sep 12 05:54:41.253076 kubelet[2721]: I0912 05:54:41.252608 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/482395d3-caa5-412e-a162-299f219cca1d-lib-modules\") pod \"kube-proxy-52bd7\" (UID: \"482395d3-caa5-412e-a162-299f219cca1d\") " pod="kube-system/kube-proxy-52bd7" Sep 12 05:54:41.299369 systemd[1]: Created slice kubepods-besteffort-pod09442705_c4da_44a8_bb5d_7e94a4a257eb.slice - libcontainer container kubepods-besteffort-pod09442705_c4da_44a8_bb5d_7e94a4a257eb.slice. 
Sep 12 05:54:41.352979 kubelet[2721]: I0912 05:54:41.352814 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5mjd\" (UniqueName: \"kubernetes.io/projected/09442705-c4da-44a8-bb5d-7e94a4a257eb-kube-api-access-p5mjd\") pod \"tigera-operator-58fc44c59b-5ndvs\" (UID: \"09442705-c4da-44a8-bb5d-7e94a4a257eb\") " pod="tigera-operator/tigera-operator-58fc44c59b-5ndvs" Sep 12 05:54:41.352979 kubelet[2721]: I0912 05:54:41.352860 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/09442705-c4da-44a8-bb5d-7e94a4a257eb-var-lib-calico\") pod \"tigera-operator-58fc44c59b-5ndvs\" (UID: \"09442705-c4da-44a8-bb5d-7e94a4a257eb\") " pod="tigera-operator/tigera-operator-58fc44c59b-5ndvs" Sep 12 05:54:41.495124 kubelet[2721]: E0912 05:54:41.495001 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:41.495611 containerd[1564]: time="2025-09-12T05:54:41.495558272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-52bd7,Uid:482395d3-caa5-412e-a162-299f219cca1d,Namespace:kube-system,Attempt:0,}" Sep 12 05:54:41.557510 containerd[1564]: time="2025-09-12T05:54:41.557436025Z" level=info msg="connecting to shim 1cc8373b0906c45a7eac5b3b89ceafbd7eec99a75f3937c724e38d66c3419628" address="unix:///run/containerd/s/198f8a47946e61d0c587f79dc7e367b09c16130dc19f06c8fe49804cf9d643ab" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:54:41.591729 systemd[1]: Started cri-containerd-1cc8373b0906c45a7eac5b3b89ceafbd7eec99a75f3937c724e38d66c3419628.scope - libcontainer container 1cc8373b0906c45a7eac5b3b89ceafbd7eec99a75f3937c724e38d66c3419628. 
Sep 12 05:54:41.604238 containerd[1564]: time="2025-09-12T05:54:41.604169185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-5ndvs,Uid:09442705-c4da-44a8-bb5d-7e94a4a257eb,Namespace:tigera-operator,Attempt:0,}" Sep 12 05:54:41.627199 containerd[1564]: time="2025-09-12T05:54:41.627135478Z" level=info msg="connecting to shim 4ed50892363a2b8bc0db3e309fe2e74d31a300d809f076d5ab2a9dc75645cf5c" address="unix:///run/containerd/s/1aeb7bc3950211e7f77231b3c3b2edd4ad1e439a47421c9c9a65ada38b85b207" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:54:41.627867 containerd[1564]: time="2025-09-12T05:54:41.627785564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-52bd7,Uid:482395d3-caa5-412e-a162-299f219cca1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cc8373b0906c45a7eac5b3b89ceafbd7eec99a75f3937c724e38d66c3419628\"" Sep 12 05:54:41.628766 kubelet[2721]: E0912 05:54:41.628725 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:41.631454 containerd[1564]: time="2025-09-12T05:54:41.631420570Z" level=info msg="CreateContainer within sandbox \"1cc8373b0906c45a7eac5b3b89ceafbd7eec99a75f3937c724e38d66c3419628\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 05:54:41.645151 containerd[1564]: time="2025-09-12T05:54:41.645103652Z" level=info msg="Container 05c83bfc2081e81dfd20424519e37cfb908dd3fe41e98ee4b7c95bb699368ffb: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:54:41.653988 containerd[1564]: time="2025-09-12T05:54:41.653914418Z" level=info msg="CreateContainer within sandbox \"1cc8373b0906c45a7eac5b3b89ceafbd7eec99a75f3937c724e38d66c3419628\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"05c83bfc2081e81dfd20424519e37cfb908dd3fe41e98ee4b7c95bb699368ffb\"" Sep 12 05:54:41.654801 containerd[1564]: 
time="2025-09-12T05:54:41.654765488Z" level=info msg="StartContainer for \"05c83bfc2081e81dfd20424519e37cfb908dd3fe41e98ee4b7c95bb699368ffb\"" Sep 12 05:54:41.656341 containerd[1564]: time="2025-09-12T05:54:41.656244664Z" level=info msg="connecting to shim 05c83bfc2081e81dfd20424519e37cfb908dd3fe41e98ee4b7c95bb699368ffb" address="unix:///run/containerd/s/198f8a47946e61d0c587f79dc7e367b09c16130dc19f06c8fe49804cf9d643ab" protocol=ttrpc version=3 Sep 12 05:54:41.659864 systemd[1]: Started cri-containerd-4ed50892363a2b8bc0db3e309fe2e74d31a300d809f076d5ab2a9dc75645cf5c.scope - libcontainer container 4ed50892363a2b8bc0db3e309fe2e74d31a300d809f076d5ab2a9dc75645cf5c. Sep 12 05:54:41.681711 systemd[1]: Started cri-containerd-05c83bfc2081e81dfd20424519e37cfb908dd3fe41e98ee4b7c95bb699368ffb.scope - libcontainer container 05c83bfc2081e81dfd20424519e37cfb908dd3fe41e98ee4b7c95bb699368ffb. Sep 12 05:54:41.722491 containerd[1564]: time="2025-09-12T05:54:41.722441975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-5ndvs,Uid:09442705-c4da-44a8-bb5d-7e94a4a257eb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4ed50892363a2b8bc0db3e309fe2e74d31a300d809f076d5ab2a9dc75645cf5c\"" Sep 12 05:54:41.724699 containerd[1564]: time="2025-09-12T05:54:41.724019668Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 05:54:41.733704 containerd[1564]: time="2025-09-12T05:54:41.733674441Z" level=info msg="StartContainer for \"05c83bfc2081e81dfd20424519e37cfb908dd3fe41e98ee4b7c95bb699368ffb\" returns successfully" Sep 12 05:54:42.117039 kubelet[2721]: E0912 05:54:42.116363 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:42.394021 kubelet[2721]: E0912 05:54:42.393906 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:42.406692 kubelet[2721]: I0912 05:54:42.406432 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-52bd7" podStartSLOduration=1.406408337 podStartE2EDuration="1.406408337s" podCreationTimestamp="2025-09-12 05:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:54:42.126060438 +0000 UTC m=+6.132512417" watchObservedRunningTime="2025-09-12 05:54:42.406408337 +0000 UTC m=+6.412860316" Sep 12 05:54:42.651320 kubelet[2721]: E0912 05:54:42.651178 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:43.122772 kubelet[2721]: E0912 05:54:43.122741 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:43.122946 kubelet[2721]: E0912 05:54:43.122827 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:43.347527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3944735582.mount: Deactivated successfully. 
Sep 12 05:54:43.691362 containerd[1564]: time="2025-09-12T05:54:43.691298939Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:43.692086 containerd[1564]: time="2025-09-12T05:54:43.692043963Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 05:54:43.693343 containerd[1564]: time="2025-09-12T05:54:43.693315776Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:43.695275 containerd[1564]: time="2025-09-12T05:54:43.695235947Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:43.695830 containerd[1564]: time="2025-09-12T05:54:43.695795858Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.971139459s" Sep 12 05:54:43.695830 containerd[1564]: time="2025-09-12T05:54:43.695826707Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 05:54:43.697956 containerd[1564]: time="2025-09-12T05:54:43.697928676Z" level=info msg="CreateContainer within sandbox \"4ed50892363a2b8bc0db3e309fe2e74d31a300d809f076d5ab2a9dc75645cf5c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 05:54:43.705795 containerd[1564]: time="2025-09-12T05:54:43.705763989Z" level=info msg="Container 
f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:54:43.711832 containerd[1564]: time="2025-09-12T05:54:43.711786144Z" level=info msg="CreateContainer within sandbox \"4ed50892363a2b8bc0db3e309fe2e74d31a300d809f076d5ab2a9dc75645cf5c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d\"" Sep 12 05:54:43.712367 containerd[1564]: time="2025-09-12T05:54:43.712333341Z" level=info msg="StartContainer for \"f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d\"" Sep 12 05:54:43.713191 containerd[1564]: time="2025-09-12T05:54:43.713166413Z" level=info msg="connecting to shim f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d" address="unix:///run/containerd/s/1aeb7bc3950211e7f77231b3c3b2edd4ad1e439a47421c9c9a65ada38b85b207" protocol=ttrpc version=3 Sep 12 05:54:43.746721 systemd[1]: Started cri-containerd-f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d.scope - libcontainer container f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d. 
Sep 12 05:54:43.796670 containerd[1564]: time="2025-09-12T05:54:43.796558844Z" level=info msg="StartContainer for \"f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d\" returns successfully" Sep 12 05:54:44.125754 kubelet[2721]: E0912 05:54:44.125647 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:44.133987 kubelet[2721]: I0912 05:54:44.133920 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-5ndvs" podStartSLOduration=1.160821904 podStartE2EDuration="3.133897931s" podCreationTimestamp="2025-09-12 05:54:41 +0000 UTC" firstStartedPulling="2025-09-12 05:54:41.723541331 +0000 UTC m=+5.729993300" lastFinishedPulling="2025-09-12 05:54:43.696617348 +0000 UTC m=+7.703069327" observedRunningTime="2025-09-12 05:54:44.13374071 +0000 UTC m=+8.140192689" watchObservedRunningTime="2025-09-12 05:54:44.133897931 +0000 UTC m=+8.140349980" Sep 12 05:54:45.691298 systemd[1]: cri-containerd-f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d.scope: Deactivated successfully. Sep 12 05:54:45.691674 systemd[1]: cri-containerd-f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d.scope: Consumed 382ms CPU time, 42.1M memory peak, 2.7M read from disk. 
Sep 12 05:54:45.694587 containerd[1564]: time="2025-09-12T05:54:45.693506487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d\" id:\"f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d\" pid:3044 exit_status:1 exited_at:{seconds:1757656485 nanos:692906211}" Sep 12 05:54:45.694587 containerd[1564]: time="2025-09-12T05:54:45.693527436Z" level=info msg="received exit event container_id:\"f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d\" id:\"f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d\" pid:3044 exit_status:1 exited_at:{seconds:1757656485 nanos:692906211}" Sep 12 05:54:45.731335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d-rootfs.mount: Deactivated successfully. Sep 12 05:54:46.131014 kubelet[2721]: I0912 05:54:46.130961 2721 scope.go:117] "RemoveContainer" containerID="f539036f908cd6bdb055f1299ff627f29dd9b2ae5f94a053d84bbf73c6d5fc3d" Sep 12 05:54:46.133745 containerd[1564]: time="2025-09-12T05:54:46.133579506Z" level=info msg="CreateContainer within sandbox \"4ed50892363a2b8bc0db3e309fe2e74d31a300d809f076d5ab2a9dc75645cf5c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 12 05:54:46.151343 containerd[1564]: time="2025-09-12T05:54:46.149484964Z" level=info msg="Container 3d531f7b4bde7f4c4d3d627b6cec6e0d358561de88a6ebf545becbdee52627a2: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:54:46.154622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount979363991.mount: Deactivated successfully. 
Sep 12 05:54:46.169825 containerd[1564]: time="2025-09-12T05:54:46.169769801Z" level=info msg="CreateContainer within sandbox \"4ed50892363a2b8bc0db3e309fe2e74d31a300d809f076d5ab2a9dc75645cf5c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3d531f7b4bde7f4c4d3d627b6cec6e0d358561de88a6ebf545becbdee52627a2\"" Sep 12 05:54:46.171709 containerd[1564]: time="2025-09-12T05:54:46.171634897Z" level=info msg="StartContainer for \"3d531f7b4bde7f4c4d3d627b6cec6e0d358561de88a6ebf545becbdee52627a2\"" Sep 12 05:54:46.172945 containerd[1564]: time="2025-09-12T05:54:46.172912173Z" level=info msg="connecting to shim 3d531f7b4bde7f4c4d3d627b6cec6e0d358561de88a6ebf545becbdee52627a2" address="unix:///run/containerd/s/1aeb7bc3950211e7f77231b3c3b2edd4ad1e439a47421c9c9a65ada38b85b207" protocol=ttrpc version=3 Sep 12 05:54:46.246783 systemd[1]: Started cri-containerd-3d531f7b4bde7f4c4d3d627b6cec6e0d358561de88a6ebf545becbdee52627a2.scope - libcontainer container 3d531f7b4bde7f4c4d3d627b6cec6e0d358561de88a6ebf545becbdee52627a2. Sep 12 05:54:46.289901 containerd[1564]: time="2025-09-12T05:54:46.289851056Z" level=info msg="StartContainer for \"3d531f7b4bde7f4c4d3d627b6cec6e0d358561de88a6ebf545becbdee52627a2\" returns successfully" Sep 12 05:54:48.344807 kubelet[2721]: E0912 05:54:48.344761 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:49.105227 sudo[1782]: pam_unix(sudo:session): session closed for user root Sep 12 05:54:49.107445 sshd[1781]: Connection closed by 10.0.0.1 port 55100 Sep 12 05:54:49.109138 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Sep 12 05:54:49.115977 systemd[1]: sshd@6-10.0.0.78:22-10.0.0.1:55100.service: Deactivated successfully. Sep 12 05:54:49.123287 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 12 05:54:49.124837 systemd[1]: session-7.scope: Consumed 5.093s CPU time, 225.1M memory peak. Sep 12 05:54:49.127801 systemd-logind[1543]: Session 7 logged out. Waiting for processes to exit. Sep 12 05:54:49.131517 systemd-logind[1543]: Removed session 7. Sep 12 05:54:50.336392 update_engine[1545]: I20250912 05:54:50.335641 1545 update_attempter.cc:509] Updating boot flags... Sep 12 05:54:51.966073 systemd[1]: Created slice kubepods-besteffort-pod00cde3a1_1f43_4b88_ac29_e4a50becaaaf.slice - libcontainer container kubepods-besteffort-pod00cde3a1_1f43_4b88_ac29_e4a50becaaaf.slice. Sep 12 05:54:52.021011 kubelet[2721]: I0912 05:54:52.020938 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/00cde3a1-1f43-4b88-ac29-e4a50becaaaf-typha-certs\") pod \"calico-typha-b74565fcd-gbbrc\" (UID: \"00cde3a1-1f43-4b88-ac29-e4a50becaaaf\") " pod="calico-system/calico-typha-b74565fcd-gbbrc" Sep 12 05:54:52.021699 kubelet[2721]: I0912 05:54:52.021028 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00cde3a1-1f43-4b88-ac29-e4a50becaaaf-tigera-ca-bundle\") pod \"calico-typha-b74565fcd-gbbrc\" (UID: \"00cde3a1-1f43-4b88-ac29-e4a50becaaaf\") " pod="calico-system/calico-typha-b74565fcd-gbbrc" Sep 12 05:54:52.021699 kubelet[2721]: I0912 05:54:52.021079 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h242p\" (UniqueName: \"kubernetes.io/projected/00cde3a1-1f43-4b88-ac29-e4a50becaaaf-kube-api-access-h242p\") pod \"calico-typha-b74565fcd-gbbrc\" (UID: \"00cde3a1-1f43-4b88-ac29-e4a50becaaaf\") " pod="calico-system/calico-typha-b74565fcd-gbbrc" Sep 12 05:54:52.271465 kubelet[2721]: E0912 05:54:52.271272 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:52.273814 containerd[1564]: time="2025-09-12T05:54:52.273742986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b74565fcd-gbbrc,Uid:00cde3a1-1f43-4b88-ac29-e4a50becaaaf,Namespace:calico-system,Attempt:0,}" Sep 12 05:54:52.323311 kubelet[2721]: I0912 05:54:52.323259 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d9a700a-8743-431c-b449-1d3ad10398e1-xtables-lock\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.323311 kubelet[2721]: I0912 05:54:52.323300 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1d9a700a-8743-431c-b449-1d3ad10398e1-cni-net-dir\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.323311 kubelet[2721]: I0912 05:54:52.323320 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1d9a700a-8743-431c-b449-1d3ad10398e1-flexvol-driver-host\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.323771 kubelet[2721]: I0912 05:54:52.323352 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1d9a700a-8743-431c-b449-1d3ad10398e1-node-certs\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.329702 kubelet[2721]: I0912 05:54:52.329645 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-ptjsx\" (UniqueName: \"kubernetes.io/projected/1d9a700a-8743-431c-b449-1d3ad10398e1-kube-api-access-ptjsx\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.329825 kubelet[2721]: I0912 05:54:52.329708 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1d9a700a-8743-431c-b449-1d3ad10398e1-cni-bin-dir\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.329825 kubelet[2721]: I0912 05:54:52.329746 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1d9a700a-8743-431c-b449-1d3ad10398e1-policysync\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.329825 kubelet[2721]: I0912 05:54:52.329766 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d9a700a-8743-431c-b449-1d3ad10398e1-lib-modules\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.329825 kubelet[2721]: I0912 05:54:52.329793 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d9a700a-8743-431c-b449-1d3ad10398e1-var-lib-calico\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.329825 kubelet[2721]: I0912 05:54:52.329814 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/1d9a700a-8743-431c-b449-1d3ad10398e1-var-run-calico\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.329970 kubelet[2721]: I0912 05:54:52.329835 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1d9a700a-8743-431c-b449-1d3ad10398e1-cni-log-dir\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.329970 kubelet[2721]: I0912 05:54:52.329862 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d9a700a-8743-431c-b449-1d3ad10398e1-tigera-ca-bundle\") pod \"calico-node-sjjqp\" (UID: \"1d9a700a-8743-431c-b449-1d3ad10398e1\") " pod="calico-system/calico-node-sjjqp" Sep 12 05:54:52.332745 systemd[1]: Created slice kubepods-besteffort-pod1d9a700a_8743_431c_b449_1d3ad10398e1.slice - libcontainer container kubepods-besteffort-pod1d9a700a_8743_431c_b449_1d3ad10398e1.slice. Sep 12 05:54:52.334945 containerd[1564]: time="2025-09-12T05:54:52.334861196Z" level=info msg="connecting to shim 8164411be5ca8e2d7b9704bd9fd27a375d94458f71e667aa1f34592628dccac6" address="unix:///run/containerd/s/e2005112963fd53e93671fbeda7bfa1146ebd61d77b81e1cbba9133a226eab69" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:54:52.389163 systemd[1]: Started cri-containerd-8164411be5ca8e2d7b9704bd9fd27a375d94458f71e667aa1f34592628dccac6.scope - libcontainer container 8164411be5ca8e2d7b9704bd9fd27a375d94458f71e667aa1f34592628dccac6. 
Sep 12 05:54:52.434186 kubelet[2721]: E0912 05:54:52.434111 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.434186 kubelet[2721]: W0912 05:54:52.434151 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.434482 kubelet[2721]: E0912 05:54:52.434219 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.434870 kubelet[2721]: E0912 05:54:52.434828 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.435070 kubelet[2721]: W0912 05:54:52.435039 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.435136 kubelet[2721]: E0912 05:54:52.435076 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.436421 kubelet[2721]: E0912 05:54:52.436368 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.436522 kubelet[2721]: W0912 05:54:52.436432 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.436522 kubelet[2721]: E0912 05:54:52.436471 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.443717 kubelet[2721]: E0912 05:54:52.437007 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.443717 kubelet[2721]: W0912 05:54:52.437038 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.443717 kubelet[2721]: E0912 05:54:52.437195 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.443717 kubelet[2721]: E0912 05:54:52.441003 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.443717 kubelet[2721]: W0912 05:54:52.441022 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.443717 kubelet[2721]: E0912 05:54:52.441322 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.443717 kubelet[2721]: E0912 05:54:52.441436 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.443717 kubelet[2721]: W0912 05:54:52.441446 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.443717 kubelet[2721]: E0912 05:54:52.442071 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.443717 kubelet[2721]: E0912 05:54:52.442347 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.444146 kubelet[2721]: W0912 05:54:52.442358 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.444146 kubelet[2721]: E0912 05:54:52.442424 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.444146 kubelet[2721]: E0912 05:54:52.442785 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.444146 kubelet[2721]: W0912 05:54:52.442799 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.444689 kubelet[2721]: E0912 05:54:52.444647 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.445376 kubelet[2721]: E0912 05:54:52.445346 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.445446 kubelet[2721]: W0912 05:54:52.445393 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.445639 kubelet[2721]: E0912 05:54:52.445441 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.447601 kubelet[2721]: E0912 05:54:52.445905 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.447601 kubelet[2721]: W0912 05:54:52.445935 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.447601 kubelet[2721]: E0912 05:54:52.445962 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.447601 kubelet[2721]: E0912 05:54:52.446456 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.447601 kubelet[2721]: W0912 05:54:52.446470 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.447601 kubelet[2721]: E0912 05:54:52.446495 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.450654 kubelet[2721]: E0912 05:54:52.450628 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.450654 kubelet[2721]: W0912 05:54:52.450646 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.450833 kubelet[2721]: E0912 05:54:52.450674 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.451119 kubelet[2721]: E0912 05:54:52.451090 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.451119 kubelet[2721]: W0912 05:54:52.451114 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.451247 kubelet[2721]: E0912 05:54:52.451139 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.451533 kubelet[2721]: E0912 05:54:52.451503 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.451533 kubelet[2721]: W0912 05:54:52.451526 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.451719 kubelet[2721]: E0912 05:54:52.451605 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.451878 kubelet[2721]: E0912 05:54:52.451861 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.451878 kubelet[2721]: W0912 05:54:52.451873 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.451951 kubelet[2721]: E0912 05:54:52.451883 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.487306 containerd[1564]: time="2025-09-12T05:54:52.487231407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b74565fcd-gbbrc,Uid:00cde3a1-1f43-4b88-ac29-e4a50becaaaf,Namespace:calico-system,Attempt:0,} returns sandbox id \"8164411be5ca8e2d7b9704bd9fd27a375d94458f71e667aa1f34592628dccac6\"" Sep 12 05:54:52.488328 kubelet[2721]: E0912 05:54:52.488303 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:52.489800 containerd[1564]: time="2025-09-12T05:54:52.489271470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 05:54:52.573008 kubelet[2721]: E0912 05:54:52.572745 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k69w8" podUID="856f1557-0f2c-4284-9c75-f29b82c7d285" Sep 12 05:54:52.619600 kubelet[2721]: E0912 05:54:52.619515 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.619600 kubelet[2721]: W0912 05:54:52.619547 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.619880 kubelet[2721]: E0912 05:54:52.619607 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.633052 kubelet[2721]: I0912 05:54:52.632938 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/856f1557-0f2c-4284-9c75-f29b82c7d285-registration-dir\") pod \"csi-node-driver-k69w8\" (UID: \"856f1557-0f2c-4284-9c75-f29b82c7d285\") " pod="calico-system/csi-node-driver-k69w8" Sep 12 05:54:52.633668 kubelet[2721]: I0912 05:54:52.633477 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86q7g\" (UniqueName: \"kubernetes.io/projected/856f1557-0f2c-4284-9c75-f29b82c7d285-kube-api-access-86q7g\") pod \"csi-node-driver-k69w8\" (UID: \"856f1557-0f2c-4284-9c75-f29b82c7d285\") " pod="calico-system/csi-node-driver-k69w8" Sep 12 05:54:52.635538 kubelet[2721]: I0912 05:54:52.635525 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/856f1557-0f2c-4284-9c75-f29b82c7d285-kubelet-dir\") pod \"csi-node-driver-k69w8\" (UID: \"856f1557-0f2c-4284-9c75-f29b82c7d285\") " pod="calico-system/csi-node-driver-k69w8" Sep 12 05:54:52.638522 kubelet[2721]: I0912 05:54:52.638471 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/856f1557-0f2c-4284-9c75-f29b82c7d285-varrun\") pod \"csi-node-driver-k69w8\" (UID: \"856f1557-0f2c-4284-9c75-f29b82c7d285\") " pod="calico-system/csi-node-driver-k69w8" Sep 12 05:54:52.642398 kubelet[2721]: I0912 05:54:52.642366 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/856f1557-0f2c-4284-9c75-f29b82c7d285-socket-dir\") pod \"csi-node-driver-k69w8\" (UID: \"856f1557-0f2c-4284-9c75-f29b82c7d285\") " pod="calico-system/csi-node-driver-k69w8" Sep 12 05:54:52.644330 containerd[1564]: time="2025-09-12T05:54:52.644221329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sjjqp,Uid:1d9a700a-8743-431c-b449-1d3ad10398e1,Namespace:calico-system,Attempt:0,}" Sep 12 05:54:52.688668 containerd[1564]: time="2025-09-12T05:54:52.687667807Z" level=info msg="connecting to shim 829fab6e1d9e86bfccffb44bcf6c59e237f5c1dd1162db1bba695507e3a99a19" address="unix:///run/containerd/s/88a65e2934ebb5ba389bd3d7730448298085f375fa82e4a9b29487b9966d4da9" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:54:52.744944 kubelet[2721]: E0912 05:54:52.744881 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.744944 kubelet[2721]: W0912 05:54:52.744916 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.745527 kubelet[2721]: E0912 05:54:52.744970 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.773018 kubelet[2721]: E0912 05:54:52.772767 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.773018 kubelet[2721]: W0912 05:54:52.772974 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.773219 systemd[1]: Started cri-containerd-829fab6e1d9e86bfccffb44bcf6c59e237f5c1dd1162db1bba695507e3a99a19.scope - libcontainer container 829fab6e1d9e86bfccffb44bcf6c59e237f5c1dd1162db1bba695507e3a99a19. Sep 12 05:54:52.776404 kubelet[2721]: E0912 05:54:52.776290 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.777389 kubelet[2721]: W0912 05:54:52.777357 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.778054 kubelet[2721]: E0912 05:54:52.777665 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.778054 kubelet[2721]: W0912 05:54:52.777693 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.778054 kubelet[2721]: E0912 05:54:52.777993 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.778054 kubelet[2721]: W0912 05:54:52.778017 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" Sep 12 05:54:52.780419 kubelet[2721]: E0912 05:54:52.776860 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.780419 kubelet[2721]: E0912 05:54:52.778254 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.780419 kubelet[2721]: E0912 05:54:52.778326 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.780419 kubelet[2721]: E0912 05:54:52.778344 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.780419 kubelet[2721]: E0912 05:54:52.779967 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.780419 kubelet[2721]: W0912 05:54:52.779982 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.780419 kubelet[2721]: E0912 05:54:52.780006 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.782876 kubelet[2721]: E0912 05:54:52.782733 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.782876 kubelet[2721]: W0912 05:54:52.782745 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.782876 kubelet[2721]: E0912 05:54:52.782865 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.785936 kubelet[2721]: E0912 05:54:52.785626 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.785936 kubelet[2721]: W0912 05:54:52.785666 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.786265 kubelet[2721]: E0912 05:54:52.786157 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.786599 kubelet[2721]: W0912 05:54:52.786519 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.787231 kubelet[2721]: E0912 05:54:52.787215 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.787231 kubelet[2721]: W0912 05:54:52.787227 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.787750 kubelet[2721]: E0912 05:54:52.787697 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.787750 kubelet[2721]: E0912 05:54:52.786417 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.787967 kubelet[2721]: E0912 05:54:52.787759 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.788724 kubelet[2721]: E0912 05:54:52.788644 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.788724 kubelet[2721]: W0912 05:54:52.788657 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.788791 kubelet[2721]: E0912 05:54:52.788723 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.788926 kubelet[2721]: E0912 05:54:52.788885 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.788926 kubelet[2721]: W0912 05:54:52.788896 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.789109 kubelet[2721]: E0912 05:54:52.788954 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.789109 kubelet[2721]: E0912 05:54:52.789080 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.789109 kubelet[2721]: W0912 05:54:52.789089 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.790598 kubelet[2721]: E0912 05:54:52.789483 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.790598 kubelet[2721]: W0912 05:54:52.789622 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.790598 kubelet[2721]: E0912 05:54:52.789729 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.791073 kubelet[2721]: E0912 05:54:52.791034 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.791830 kubelet[2721]: E0912 05:54:52.791794 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.791877 kubelet[2721]: W0912 05:54:52.791848 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.791877 kubelet[2721]: E0912 05:54:52.791868 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.792594 kubelet[2721]: E0912 05:54:52.792536 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.792671 kubelet[2721]: W0912 05:54:52.792610 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.792751 kubelet[2721]: E0912 05:54:52.792713 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.793152 kubelet[2721]: E0912 05:54:52.793120 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.793152 kubelet[2721]: W0912 05:54:52.793146 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.793211 kubelet[2721]: E0912 05:54:52.793188 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.793351 kubelet[2721]: E0912 05:54:52.793336 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.793351 kubelet[2721]: W0912 05:54:52.793349 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.793413 kubelet[2721]: E0912 05:54:52.793377 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:52.813764 kubelet[2721]: E0912 05:54:52.813717 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:52.813764 kubelet[2721]: W0912 05:54:52.813745 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:52.813764 kubelet[2721]: E0912 05:54:52.813767 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:52.914179 containerd[1564]: time="2025-09-12T05:54:52.914060958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sjjqp,Uid:1d9a700a-8743-431c-b449-1d3ad10398e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"829fab6e1d9e86bfccffb44bcf6c59e237f5c1dd1162db1bba695507e3a99a19\"" Sep 12 05:54:53.888739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258584869.mount: Deactivated successfully. 
Sep 12 05:54:54.098086 kubelet[2721]: E0912 05:54:54.097964 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k69w8" podUID="856f1557-0f2c-4284-9c75-f29b82c7d285" Sep 12 05:54:55.833919 containerd[1564]: time="2025-09-12T05:54:55.833825935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:55.834594 containerd[1564]: time="2025-09-12T05:54:55.834439518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 05:54:55.835725 containerd[1564]: time="2025-09-12T05:54:55.835692734Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:55.837970 containerd[1564]: time="2025-09-12T05:54:55.837926407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:55.838548 containerd[1564]: time="2025-09-12T05:54:55.838515735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.349188188s" Sep 12 05:54:55.838606 containerd[1564]: time="2025-09-12T05:54:55.838553146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 05:54:55.839755 containerd[1564]: time="2025-09-12T05:54:55.839709037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 05:54:55.851345 containerd[1564]: time="2025-09-12T05:54:55.851290450Z" level=info msg="CreateContainer within sandbox \"8164411be5ca8e2d7b9704bd9fd27a375d94458f71e667aa1f34592628dccac6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 05:54:55.860440 containerd[1564]: time="2025-09-12T05:54:55.860384488Z" level=info msg="Container faf20f6978f02edae66db87130e866057eac70e1ee6c04a170422f674bd9d8ab: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:54:55.870780 containerd[1564]: time="2025-09-12T05:54:55.870718628Z" level=info msg="CreateContainer within sandbox \"8164411be5ca8e2d7b9704bd9fd27a375d94458f71e667aa1f34592628dccac6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"faf20f6978f02edae66db87130e866057eac70e1ee6c04a170422f674bd9d8ab\"" Sep 12 05:54:55.871751 containerd[1564]: time="2025-09-12T05:54:55.871703084Z" level=info msg="StartContainer for \"faf20f6978f02edae66db87130e866057eac70e1ee6c04a170422f674bd9d8ab\"" Sep 12 05:54:55.874094 containerd[1564]: time="2025-09-12T05:54:55.874058118Z" level=info msg="connecting to shim faf20f6978f02edae66db87130e866057eac70e1ee6c04a170422f674bd9d8ab" address="unix:///run/containerd/s/e2005112963fd53e93671fbeda7bfa1146ebd61d77b81e1cbba9133a226eab69" protocol=ttrpc version=3 Sep 12 05:54:55.900749 systemd[1]: Started cri-containerd-faf20f6978f02edae66db87130e866057eac70e1ee6c04a170422f674bd9d8ab.scope - libcontainer container faf20f6978f02edae66db87130e866057eac70e1ee6c04a170422f674bd9d8ab. 
Sep 12 05:54:55.956686 containerd[1564]: time="2025-09-12T05:54:55.956629007Z" level=info msg="StartContainer for \"faf20f6978f02edae66db87130e866057eac70e1ee6c04a170422f674bd9d8ab\" returns successfully" Sep 12 05:54:56.097968 kubelet[2721]: E0912 05:54:56.097919 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k69w8" podUID="856f1557-0f2c-4284-9c75-f29b82c7d285" Sep 12 05:54:56.176591 kubelet[2721]: E0912 05:54:56.175966 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:56.258528 kubelet[2721]: E0912 05:54:56.258474 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.258528 kubelet[2721]: W0912 05:54:56.258511 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.258783 kubelet[2721]: E0912 05:54:56.258541 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.258934 kubelet[2721]: E0912 05:54:56.258915 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.258934 kubelet[2721]: W0912 05:54:56.258928 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.259023 kubelet[2721]: E0912 05:54:56.258938 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.259144 kubelet[2721]: E0912 05:54:56.259121 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.259144 kubelet[2721]: W0912 05:54:56.259132 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.259144 kubelet[2721]: E0912 05:54:56.259141 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.259331 kubelet[2721]: E0912 05:54:56.259307 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.259331 kubelet[2721]: W0912 05:54:56.259320 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.259331 kubelet[2721]: E0912 05:54:56.259329 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.259501 kubelet[2721]: E0912 05:54:56.259495 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.259528 kubelet[2721]: W0912 05:54:56.259503 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.259528 kubelet[2721]: E0912 05:54:56.259512 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.259714 kubelet[2721]: E0912 05:54:56.259698 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.259714 kubelet[2721]: W0912 05:54:56.259709 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.259776 kubelet[2721]: E0912 05:54:56.259718 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.259881 kubelet[2721]: E0912 05:54:56.259866 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.259881 kubelet[2721]: W0912 05:54:56.259877 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.259933 kubelet[2721]: E0912 05:54:56.259887 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.260052 kubelet[2721]: E0912 05:54:56.260037 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.260052 kubelet[2721]: W0912 05:54:56.260048 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.260117 kubelet[2721]: E0912 05:54:56.260056 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.260229 kubelet[2721]: E0912 05:54:56.260213 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.260229 kubelet[2721]: W0912 05:54:56.260224 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.260283 kubelet[2721]: E0912 05:54:56.260232 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.260399 kubelet[2721]: E0912 05:54:56.260383 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.260399 kubelet[2721]: W0912 05:54:56.260393 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.260458 kubelet[2721]: E0912 05:54:56.260401 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.260585 kubelet[2721]: E0912 05:54:56.260551 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.260585 kubelet[2721]: W0912 05:54:56.260580 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.260629 kubelet[2721]: E0912 05:54:56.260589 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.260796 kubelet[2721]: E0912 05:54:56.260777 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.260796 kubelet[2721]: W0912 05:54:56.260789 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.260867 kubelet[2721]: E0912 05:54:56.260798 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.260976 kubelet[2721]: E0912 05:54:56.260960 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.260976 kubelet[2721]: W0912 05:54:56.260971 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.261030 kubelet[2721]: E0912 05:54:56.260980 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.261150 kubelet[2721]: E0912 05:54:56.261135 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.261150 kubelet[2721]: W0912 05:54:56.261145 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.261209 kubelet[2721]: E0912 05:54:56.261153 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.261316 kubelet[2721]: E0912 05:54:56.261300 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.261316 kubelet[2721]: W0912 05:54:56.261311 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.261370 kubelet[2721]: E0912 05:54:56.261319 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.285911 kubelet[2721]: E0912 05:54:56.285873 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.285911 kubelet[2721]: W0912 05:54:56.285900 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.286042 kubelet[2721]: E0912 05:54:56.285925 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.286252 kubelet[2721]: E0912 05:54:56.286214 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.286252 kubelet[2721]: W0912 05:54:56.286245 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.286333 kubelet[2721]: E0912 05:54:56.286282 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.286727 kubelet[2721]: E0912 05:54:56.286691 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.286727 kubelet[2721]: W0912 05:54:56.286725 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.286815 kubelet[2721]: E0912 05:54:56.286758 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.286982 kubelet[2721]: E0912 05:54:56.286956 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.286982 kubelet[2721]: W0912 05:54:56.286969 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.287037 kubelet[2721]: E0912 05:54:56.286982 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.287165 kubelet[2721]: E0912 05:54:56.287150 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.287165 kubelet[2721]: W0912 05:54:56.287160 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.287222 kubelet[2721]: E0912 05:54:56.287174 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.287455 kubelet[2721]: E0912 05:54:56.287431 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.287455 kubelet[2721]: W0912 05:54:56.287451 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.287515 kubelet[2721]: E0912 05:54:56.287470 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.287684 kubelet[2721]: E0912 05:54:56.287666 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.287684 kubelet[2721]: W0912 05:54:56.287680 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.287754 kubelet[2721]: E0912 05:54:56.287696 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.287984 kubelet[2721]: E0912 05:54:56.287962 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.287984 kubelet[2721]: W0912 05:54:56.287978 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.288036 kubelet[2721]: E0912 05:54:56.287994 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.288249 kubelet[2721]: E0912 05:54:56.288230 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.288249 kubelet[2721]: W0912 05:54:56.288242 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.288322 kubelet[2721]: E0912 05:54:56.288257 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.288469 kubelet[2721]: E0912 05:54:56.288449 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.288469 kubelet[2721]: W0912 05:54:56.288464 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.288529 kubelet[2721]: E0912 05:54:56.288491 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.288727 kubelet[2721]: E0912 05:54:56.288710 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.288727 kubelet[2721]: W0912 05:54:56.288721 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.288800 kubelet[2721]: E0912 05:54:56.288734 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.288958 kubelet[2721]: E0912 05:54:56.288941 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.288958 kubelet[2721]: W0912 05:54:56.288952 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.289002 kubelet[2721]: E0912 05:54:56.288966 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.289228 kubelet[2721]: E0912 05:54:56.289214 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.289228 kubelet[2721]: W0912 05:54:56.289226 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.289289 kubelet[2721]: E0912 05:54:56.289242 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.289437 kubelet[2721]: E0912 05:54:56.289424 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.289437 kubelet[2721]: W0912 05:54:56.289434 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.289481 kubelet[2721]: E0912 05:54:56.289447 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.289681 kubelet[2721]: E0912 05:54:56.289668 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.289681 kubelet[2721]: W0912 05:54:56.289678 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.289739 kubelet[2721]: E0912 05:54:56.289692 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.289942 kubelet[2721]: E0912 05:54:56.289924 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.289942 kubelet[2721]: W0912 05:54:56.289937 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.289996 kubelet[2721]: E0912 05:54:56.289952 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:56.290139 kubelet[2721]: E0912 05:54:56.290123 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.290139 kubelet[2721]: W0912 05:54:56.290133 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.290195 kubelet[2721]: E0912 05:54:56.290148 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:56.290319 kubelet[2721]: E0912 05:54:56.290304 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:56.290319 kubelet[2721]: W0912 05:54:56.290315 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:56.290373 kubelet[2721]: E0912 05:54:56.290324 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.176940 kubelet[2721]: I0912 05:54:57.176887 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 05:54:57.177593 kubelet[2721]: E0912 05:54:57.177547 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:54:57.213135 containerd[1564]: time="2025-09-12T05:54:57.213074513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:57.213807 containerd[1564]: time="2025-09-12T05:54:57.213770551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 05:54:57.214964 containerd[1564]: time="2025-09-12T05:54:57.214932240Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:57.216743 containerd[1564]: time="2025-09-12T05:54:57.216718604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:54:57.217213 containerd[1564]: time="2025-09-12T05:54:57.217181691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.37743849s" Sep 12 05:54:57.217270 containerd[1564]: time="2025-09-12T05:54:57.217217329Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 05:54:57.219505 containerd[1564]: time="2025-09-12T05:54:57.219477188Z" level=info msg="CreateContainer within sandbox \"829fab6e1d9e86bfccffb44bcf6c59e237f5c1dd1162db1bba695507e3a99a19\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 05:54:57.228737 containerd[1564]: time="2025-09-12T05:54:57.228697700Z" level=info msg="Container 0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:54:57.237596 containerd[1564]: time="2025-09-12T05:54:57.237399571Z" level=info msg="CreateContainer within sandbox \"829fab6e1d9e86bfccffb44bcf6c59e237f5c1dd1162db1bba695507e3a99a19\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0\"" Sep 12 05:54:57.241137 containerd[1564]: time="2025-09-12T05:54:57.241093076Z" level=info msg="StartContainer for \"0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0\"" Sep 12 05:54:57.243110 containerd[1564]: time="2025-09-12T05:54:57.243044231Z" level=info msg="connecting to shim 0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0" address="unix:///run/containerd/s/88a65e2934ebb5ba389bd3d7730448298085f375fa82e4a9b29487b9966d4da9" protocol=ttrpc version=3 Sep 12 05:54:57.268949 kubelet[2721]: E0912 05:54:57.268918 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.268949 kubelet[2721]: W0912 05:54:57.268938 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.269095 kubelet[2721]: E0912 05:54:57.268959 2721 plugins.go:691] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.269259 kubelet[2721]: E0912 05:54:57.269239 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.269284 kubelet[2721]: W0912 05:54:57.269274 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.269315 kubelet[2721]: E0912 05:54:57.269285 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.269531 kubelet[2721]: E0912 05:54:57.269516 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.269531 kubelet[2721]: W0912 05:54:57.269528 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.269605 kubelet[2721]: E0912 05:54:57.269537 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.269796 kubelet[2721]: E0912 05:54:57.269780 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.269796 kubelet[2721]: W0912 05:54:57.269791 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.269855 kubelet[2721]: E0912 05:54:57.269801 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.270004 kubelet[2721]: E0912 05:54:57.269990 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.270004 kubelet[2721]: W0912 05:54:57.270001 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.270071 kubelet[2721]: E0912 05:54:57.270010 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.270302 kubelet[2721]: E0912 05:54:57.270278 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.270327 kubelet[2721]: W0912 05:54:57.270317 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.270364 kubelet[2721]: E0912 05:54:57.270327 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.270586 kubelet[2721]: E0912 05:54:57.270571 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.270586 kubelet[2721]: W0912 05:54:57.270582 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.270655 kubelet[2721]: E0912 05:54:57.270591 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.271130 kubelet[2721]: E0912 05:54:57.271114 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.271130 kubelet[2721]: W0912 05:54:57.271126 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.271207 kubelet[2721]: E0912 05:54:57.271136 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.271432 kubelet[2721]: E0912 05:54:57.271413 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.271463 kubelet[2721]: W0912 05:54:57.271436 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.271463 kubelet[2721]: E0912 05:54:57.271446 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.271654 kubelet[2721]: E0912 05:54:57.271632 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.271654 kubelet[2721]: W0912 05:54:57.271644 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.271725 kubelet[2721]: E0912 05:54:57.271665 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.271891 kubelet[2721]: E0912 05:54:57.271877 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.271891 kubelet[2721]: W0912 05:54:57.271888 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.271940 kubelet[2721]: E0912 05:54:57.271897 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.272241 kubelet[2721]: E0912 05:54:57.272199 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.272241 kubelet[2721]: W0912 05:54:57.272213 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.272241 kubelet[2721]: E0912 05:54:57.272223 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.272529 kubelet[2721]: E0912 05:54:57.272512 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.272529 kubelet[2721]: W0912 05:54:57.272524 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.272598 kubelet[2721]: E0912 05:54:57.272533 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.272761 kubelet[2721]: E0912 05:54:57.272747 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.272761 kubelet[2721]: W0912 05:54:57.272757 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.272825 kubelet[2721]: E0912 05:54:57.272767 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.273033 kubelet[2721]: E0912 05:54:57.273019 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.273033 kubelet[2721]: W0912 05:54:57.273030 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.273075 kubelet[2721]: E0912 05:54:57.273040 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.273780 systemd[1]: Started cri-containerd-0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0.scope - libcontainer container 0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0. 
Sep 12 05:54:57.295163 kubelet[2721]: E0912 05:54:57.295123 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.295163 kubelet[2721]: W0912 05:54:57.295149 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.295271 kubelet[2721]: E0912 05:54:57.295175 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.296584 kubelet[2721]: E0912 05:54:57.295398 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.296584 kubelet[2721]: W0912 05:54:57.295408 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.296584 kubelet[2721]: E0912 05:54:57.295425 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.296584 kubelet[2721]: E0912 05:54:57.295687 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.296584 kubelet[2721]: W0912 05:54:57.295727 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.296584 kubelet[2721]: E0912 05:54:57.295749 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.296584 kubelet[2721]: E0912 05:54:57.296037 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.296584 kubelet[2721]: W0912 05:54:57.296060 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.296584 kubelet[2721]: E0912 05:54:57.296094 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.296584 kubelet[2721]: E0912 05:54:57.296286 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.296851 kubelet[2721]: W0912 05:54:57.296297 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.296851 kubelet[2721]: E0912 05:54:57.296311 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.296851 kubelet[2721]: E0912 05:54:57.296544 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.296851 kubelet[2721]: W0912 05:54:57.296561 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.296851 kubelet[2721]: E0912 05:54:57.296610 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.297053 kubelet[2721]: E0912 05:54:57.296856 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.297053 kubelet[2721]: W0912 05:54:57.296873 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.297053 kubelet[2721]: E0912 05:54:57.296888 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.297120 kubelet[2721]: E0912 05:54:57.297096 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.297120 kubelet[2721]: W0912 05:54:57.297105 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.297192 kubelet[2721]: E0912 05:54:57.297162 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.297328 kubelet[2721]: E0912 05:54:57.297309 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.297362 kubelet[2721]: W0912 05:54:57.297330 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.297390 kubelet[2721]: E0912 05:54:57.297361 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.297547 kubelet[2721]: E0912 05:54:57.297530 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.297547 kubelet[2721]: W0912 05:54:57.297541 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.297641 kubelet[2721]: E0912 05:54:57.297589 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.297789 kubelet[2721]: E0912 05:54:57.297771 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.297789 kubelet[2721]: W0912 05:54:57.297781 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.297841 kubelet[2721]: E0912 05:54:57.297799 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:54:57.298293 kubelet[2721]: E0912 05:54:57.298260 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.298293 kubelet[2721]: W0912 05:54:57.298272 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.298293 kubelet[2721]: E0912 05:54:57.298290 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:54:57.299829 kubelet[2721]: E0912 05:54:57.298728 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:54:57.299829 kubelet[2721]: W0912 05:54:57.298757 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:54:57.299829 kubelet[2721]: E0912 05:54:57.298770 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 05:54:57.299829 kubelet[2721]: E0912 05:54:57.298994 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:54:57.299829 kubelet[2721]: W0912 05:54:57.299002 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:54:57.299829 kubelet[2721]: E0912 05:54:57.299053 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:54:57.299829 kubelet[2721]: E0912 05:54:57.299406 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:54:57.299829 kubelet[2721]: W0912 05:54:57.299415 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:54:57.299829 kubelet[2721]: E0912 05:54:57.299454 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:54:57.300378 kubelet[2721]: E0912 05:54:57.300363 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:54:57.300378 kubelet[2721]: W0912 05:54:57.300375 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:54:57.300443 kubelet[2721]: E0912 05:54:57.300399 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:54:57.300637 kubelet[2721]: E0912 05:54:57.300619 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:54:57.300637 kubelet[2721]: W0912 05:54:57.300630 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:54:57.300697 kubelet[2721]: E0912 05:54:57.300657 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:54:57.301076 kubelet[2721]: E0912 05:54:57.301053 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:54:57.301076 kubelet[2721]: W0912 05:54:57.301064 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:54:57.301076 kubelet[2721]: E0912 05:54:57.301073 2721 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:54:57.320502 containerd[1564]: time="2025-09-12T05:54:57.320443033Z" level=info msg="StartContainer for \"0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0\" returns successfully"
Sep 12 05:54:57.333747 systemd[1]: cri-containerd-0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0.scope: Deactivated successfully.
Sep 12 05:54:57.335791 containerd[1564]: time="2025-09-12T05:54:57.335725225Z" level=info msg="received exit event container_id:\"0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0\" id:\"0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0\" pid:3494 exited_at:{seconds:1757656497 nanos:335327723}"
Sep 12 05:54:57.335910 containerd[1564]: time="2025-09-12T05:54:57.335754751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0\" id:\"0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0\" pid:3494 exited_at:{seconds:1757656497 nanos:335327723}"
Sep 12 05:54:57.361519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c281a1e7c8bc46a2e83c642984c49b0c4cc530f4322cb3dcf04ca1cf124c2e0-rootfs.mount: Deactivated successfully.
Sep 12 05:54:58.094432 kubelet[2721]: E0912 05:54:58.094344 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k69w8" podUID="856f1557-0f2c-4284-9c75-f29b82c7d285"
Sep 12 05:54:58.181854 containerd[1564]: time="2025-09-12T05:54:58.181805207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 05:54:58.195971 kubelet[2721]: I0912 05:54:58.195872 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b74565fcd-gbbrc" podStartSLOduration=3.84531855 podStartE2EDuration="7.195844244s" podCreationTimestamp="2025-09-12 05:54:51 +0000 UTC" firstStartedPulling="2025-09-12 05:54:52.488978234 +0000 UTC m=+16.495430213" lastFinishedPulling="2025-09-12 05:54:55.839503938 +0000 UTC m=+19.845955907" observedRunningTime="2025-09-12 05:54:56.195109022 +0000 UTC m=+20.201561001" watchObservedRunningTime="2025-09-12 05:54:58.195844244 +0000 UTC m=+22.202296223"
Sep 12 05:55:00.094856 kubelet[2721]: E0912 05:55:00.094799 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k69w8" podUID="856f1557-0f2c-4284-9c75-f29b82c7d285"
Sep 12 05:55:00.745379 containerd[1564]: time="2025-09-12T05:55:00.745326603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:00.746142 containerd[1564]: time="2025-09-12T05:55:00.746091149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 12 05:55:00.747319 containerd[1564]: time="2025-09-12T05:55:00.747282122Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:00.749356 containerd[1564]: time="2025-09-12T05:55:00.749318194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:00.749923 containerd[1564]: time="2025-09-12T05:55:00.749898652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.568045283s"
Sep 12 05:55:00.749977 containerd[1564]: time="2025-09-12T05:55:00.749926494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 12 05:55:00.751742 containerd[1564]: time="2025-09-12T05:55:00.751696183Z" level=info msg="CreateContainer within sandbox \"829fab6e1d9e86bfccffb44bcf6c59e237f5c1dd1162db1bba695507e3a99a19\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 05:55:00.760986 containerd[1564]: time="2025-09-12T05:55:00.760937313Z" level=info msg="Container 1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:55:00.770965 containerd[1564]: time="2025-09-12T05:55:00.770920659Z" level=info msg="CreateContainer within sandbox \"829fab6e1d9e86bfccffb44bcf6c59e237f5c1dd1162db1bba695507e3a99a19\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82\""
Sep 12 05:55:00.771429 containerd[1564]: time="2025-09-12T05:55:00.771399894Z" level=info msg="StartContainer for \"1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82\""
Sep 12 05:55:00.772879 containerd[1564]: time="2025-09-12T05:55:00.772844628Z" level=info msg="connecting to shim 1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82" address="unix:///run/containerd/s/88a65e2934ebb5ba389bd3d7730448298085f375fa82e4a9b29487b9966d4da9" protocol=ttrpc version=3
Sep 12 05:55:00.800720 systemd[1]: Started cri-containerd-1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82.scope - libcontainer container 1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82.
Sep 12 05:55:00.853332 containerd[1564]: time="2025-09-12T05:55:00.853276647Z" level=info msg="StartContainer for \"1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82\" returns successfully"
Sep 12 05:55:01.937320 systemd[1]: cri-containerd-1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82.scope: Deactivated successfully.
Sep 12 05:55:01.937750 systemd[1]: cri-containerd-1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82.scope: Consumed 686ms CPU time, 178.6M memory peak, 3M read from disk, 171.3M written to disk.
Sep 12 05:55:01.939637 containerd[1564]: time="2025-09-12T05:55:01.939588007Z" level=info msg="received exit event container_id:\"1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82\" id:\"1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82\" pid:3573 exited_at:{seconds:1757656501 nanos:939344406}"
Sep 12 05:55:01.940150 containerd[1564]: time="2025-09-12T05:55:01.939661976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82\" id:\"1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82\" pid:3573 exited_at:{seconds:1757656501 nanos:939344406}"
Sep 12 05:55:01.962909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ef51491b809a26ecd0ccab1b0f30998b7b2ac9011727fd65b21a5779944bc82-rootfs.mount: Deactivated successfully.
Sep 12 05:55:01.970268 kubelet[2721]: I0912 05:55:01.969663 2721 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 12 05:55:02.128876 systemd[1]: Created slice kubepods-burstable-pod50e41295_2104_45bb_9ef8_36b5a981ae68.slice - libcontainer container kubepods-burstable-pod50e41295_2104_45bb_9ef8_36b5a981ae68.slice.
Sep 12 05:55:02.131860 kubelet[2721]: I0912 05:55:02.129372 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5ls\" (UniqueName: \"kubernetes.io/projected/50e41295-2104-45bb-9ef8-36b5a981ae68-kube-api-access-4v5ls\") pod \"coredns-7c65d6cfc9-82dfx\" (UID: \"50e41295-2104-45bb-9ef8-36b5a981ae68\") " pod="kube-system/coredns-7c65d6cfc9-82dfx"
Sep 12 05:55:02.131860 kubelet[2721]: I0912 05:55:02.129407 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e41295-2104-45bb-9ef8-36b5a981ae68-config-volume\") pod \"coredns-7c65d6cfc9-82dfx\" (UID: \"50e41295-2104-45bb-9ef8-36b5a981ae68\") " pod="kube-system/coredns-7c65d6cfc9-82dfx"
Sep 12 05:55:02.139500 systemd[1]: Created slice kubepods-besteffort-pod856f1557_0f2c_4284_9c75_f29b82c7d285.slice - libcontainer container kubepods-besteffort-pod856f1557_0f2c_4284_9c75_f29b82c7d285.slice.
Sep 12 05:55:02.145536 containerd[1564]: time="2025-09-12T05:55:02.143905708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k69w8,Uid:856f1557-0f2c-4284-9c75-f29b82c7d285,Namespace:calico-system,Attempt:0,}"
Sep 12 05:55:02.146489 systemd[1]: Created slice kubepods-besteffort-pod8357436c_29f7_41cc_b362_07676225c053.slice - libcontainer container kubepods-besteffort-pod8357436c_29f7_41cc_b362_07676225c053.slice.
Sep 12 05:55:02.153101 systemd[1]: Created slice kubepods-burstable-pod526b300e_a756_421a_97c5_0b7adb3bdb5c.slice - libcontainer container kubepods-burstable-pod526b300e_a756_421a_97c5_0b7adb3bdb5c.slice.
Sep 12 05:55:02.158724 systemd[1]: Created slice kubepods-besteffort-podcf7de45e_2e1e_49e4_b9ac_e261fad55466.slice - libcontainer container kubepods-besteffort-podcf7de45e_2e1e_49e4_b9ac_e261fad55466.slice.
Sep 12 05:55:02.164016 systemd[1]: Created slice kubepods-besteffort-podaa088531_a52d_404c_8722_a5eb6ff66bbc.slice - libcontainer container kubepods-besteffort-podaa088531_a52d_404c_8722_a5eb6ff66bbc.slice.
Sep 12 05:55:02.169247 systemd[1]: Created slice kubepods-besteffort-pod46f6b528_ef1f_4323_b258_9e68fca12d4d.slice - libcontainer container kubepods-besteffort-pod46f6b528_ef1f_4323_b258_9e68fca12d4d.slice.
Sep 12 05:55:02.173436 systemd[1]: Created slice kubepods-besteffort-poda900808d_da78_40eb_83db_2a5d679a55e8.slice - libcontainer container kubepods-besteffort-poda900808d_da78_40eb_83db_2a5d679a55e8.slice.
Sep 12 05:55:02.231056 kubelet[2721]: I0912 05:55:02.230448 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa088531-a52d-404c-8722-a5eb6ff66bbc-calico-apiserver-certs\") pod \"calico-apiserver-59fc85cc6-l7rwv\" (UID: \"aa088531-a52d-404c-8722-a5eb6ff66bbc\") " pod="calico-apiserver/calico-apiserver-59fc85cc6-l7rwv"
Sep 12 05:55:02.231056 kubelet[2721]: I0912 05:55:02.230495 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a900808d-da78-40eb-83db-2a5d679a55e8-whisker-ca-bundle\") pod \"whisker-7c56fb5448-5dkvd\" (UID: \"a900808d-da78-40eb-83db-2a5d679a55e8\") " pod="calico-system/whisker-7c56fb5448-5dkvd"
Sep 12 05:55:02.231056 kubelet[2721]: I0912 05:55:02.230514 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8c9v\" (UniqueName: \"kubernetes.io/projected/a900808d-da78-40eb-83db-2a5d679a55e8-kube-api-access-c8c9v\") pod \"whisker-7c56fb5448-5dkvd\" (UID: \"a900808d-da78-40eb-83db-2a5d679a55e8\") " pod="calico-system/whisker-7c56fb5448-5dkvd"
Sep 12 05:55:02.231056 kubelet[2721]: I0912 05:55:02.230531 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf7de45e-2e1e-49e4-b9ac-e261fad55466-goldmane-ca-bundle\") pod \"goldmane-7988f88666-h8thk\" (UID: \"cf7de45e-2e1e-49e4-b9ac-e261fad55466\") " pod="calico-system/goldmane-7988f88666-h8thk"
Sep 12 05:55:02.231056 kubelet[2721]: I0912 05:55:02.230547 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmhvq\" (UniqueName: \"kubernetes.io/projected/cf7de45e-2e1e-49e4-b9ac-e261fad55466-kube-api-access-qmhvq\") pod \"goldmane-7988f88666-h8thk\" (UID: \"cf7de45e-2e1e-49e4-b9ac-e261fad55466\") " pod="calico-system/goldmane-7988f88666-h8thk"
Sep 12 05:55:02.231303 kubelet[2721]: I0912 05:55:02.230614 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/526b300e-a756-421a-97c5-0b7adb3bdb5c-config-volume\") pod \"coredns-7c65d6cfc9-kqnvn\" (UID: \"526b300e-a756-421a-97c5-0b7adb3bdb5c\") " pod="kube-system/coredns-7c65d6cfc9-kqnvn"
Sep 12 05:55:02.231303 kubelet[2721]: I0912 05:55:02.230637 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf7de45e-2e1e-49e4-b9ac-e261fad55466-config\") pod \"goldmane-7988f88666-h8thk\" (UID: \"cf7de45e-2e1e-49e4-b9ac-e261fad55466\") " pod="calico-system/goldmane-7988f88666-h8thk"
Sep 12 05:55:02.231303 kubelet[2721]: I0912 05:55:02.231066 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwdq9\" (UniqueName: \"kubernetes.io/projected/8357436c-29f7-41cc-b362-07676225c053-kube-api-access-mwdq9\") pod \"calico-kube-controllers-58454d7cff-qnknh\" (UID: \"8357436c-29f7-41cc-b362-07676225c053\") " pod="calico-system/calico-kube-controllers-58454d7cff-qnknh"
Sep 12 05:55:02.231303 kubelet[2721]: I0912 05:55:02.231124 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zb2h\" (UniqueName: \"kubernetes.io/projected/46f6b528-ef1f-4323-b258-9e68fca12d4d-kube-api-access-2zb2h\") pod \"calico-apiserver-59fc85cc6-9jmd6\" (UID: \"46f6b528-ef1f-4323-b258-9e68fca12d4d\") " pod="calico-apiserver/calico-apiserver-59fc85cc6-9jmd6"
Sep 12 05:55:02.231303 kubelet[2721]: I0912 05:55:02.231149 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/46f6b528-ef1f-4323-b258-9e68fca12d4d-calico-apiserver-certs\") pod \"calico-apiserver-59fc85cc6-9jmd6\" (UID: \"46f6b528-ef1f-4323-b258-9e68fca12d4d\") " pod="calico-apiserver/calico-apiserver-59fc85cc6-9jmd6"
Sep 12 05:55:02.231431 kubelet[2721]: I0912 05:55:02.231168 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a900808d-da78-40eb-83db-2a5d679a55e8-whisker-backend-key-pair\") pod \"whisker-7c56fb5448-5dkvd\" (UID: \"a900808d-da78-40eb-83db-2a5d679a55e8\") " pod="calico-system/whisker-7c56fb5448-5dkvd"
Sep 12 05:55:02.231431 kubelet[2721]: I0912 05:55:02.231385 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8357436c-29f7-41cc-b362-07676225c053-tigera-ca-bundle\") pod \"calico-kube-controllers-58454d7cff-qnknh\" (UID: \"8357436c-29f7-41cc-b362-07676225c053\") " pod="calico-system/calico-kube-controllers-58454d7cff-qnknh"
Sep 12 05:55:02.231431 kubelet[2721]: I0912 05:55:02.231410 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzkbk\" (UniqueName: \"kubernetes.io/projected/526b300e-a756-421a-97c5-0b7adb3bdb5c-kube-api-access-vzkbk\") pod \"coredns-7c65d6cfc9-kqnvn\" (UID: \"526b300e-a756-421a-97c5-0b7adb3bdb5c\") " pod="kube-system/coredns-7c65d6cfc9-kqnvn"
Sep 12 05:55:02.231431 kubelet[2721]: I0912 05:55:02.231425 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mh4w\" (UniqueName: \"kubernetes.io/projected/aa088531-a52d-404c-8722-a5eb6ff66bbc-kube-api-access-9mh4w\") pod \"calico-apiserver-59fc85cc6-l7rwv\" (UID: \"aa088531-a52d-404c-8722-a5eb6ff66bbc\") " pod="calico-apiserver/calico-apiserver-59fc85cc6-l7rwv"
Sep 12 05:55:02.231529 kubelet[2721]: I0912 05:55:02.231445 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cf7de45e-2e1e-49e4-b9ac-e261fad55466-goldmane-key-pair\") pod \"goldmane-7988f88666-h8thk\" (UID: \"cf7de45e-2e1e-49e4-b9ac-e261fad55466\") " pod="calico-system/goldmane-7988f88666-h8thk"
Sep 12 05:55:02.382588 containerd[1564]: time="2025-09-12T05:55:02.382508587Z" level=error msg="Failed to destroy network for sandbox \"f5d753c895d0c1fff5eb18931eedb79b69db91f621068775ea15b93a744dbd37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.385206 containerd[1564]: time="2025-09-12T05:55:02.385157204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k69w8,Uid:856f1557-0f2c-4284-9c75-f29b82c7d285,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d753c895d0c1fff5eb18931eedb79b69db91f621068775ea15b93a744dbd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.386043 kubelet[2721]: E0912 05:55:02.385979 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d753c895d0c1fff5eb18931eedb79b69db91f621068775ea15b93a744dbd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.386100 kubelet[2721]: E0912 05:55:02.386084 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d753c895d0c1fff5eb18931eedb79b69db91f621068775ea15b93a744dbd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k69w8"
Sep 12 05:55:02.386134 kubelet[2721]: E0912 05:55:02.386114 2721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d753c895d0c1fff5eb18931eedb79b69db91f621068775ea15b93a744dbd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k69w8"
Sep 12 05:55:02.386195 kubelet[2721]: E0912 05:55:02.386165 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k69w8_calico-system(856f1557-0f2c-4284-9c75-f29b82c7d285)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k69w8_calico-system(856f1557-0f2c-4284-9c75-f29b82c7d285)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5d753c895d0c1fff5eb18931eedb79b69db91f621068775ea15b93a744dbd37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k69w8" podUID="856f1557-0f2c-4284-9c75-f29b82c7d285"
Sep 12 05:55:02.434662 kubelet[2721]: E0912 05:55:02.434546 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:55:02.435258 containerd[1564]: time="2025-09-12T05:55:02.435202500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-82dfx,Uid:50e41295-2104-45bb-9ef8-36b5a981ae68,Namespace:kube-system,Attempt:0,}"
Sep 12 05:55:02.449649 containerd[1564]: time="2025-09-12T05:55:02.449607402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58454d7cff-qnknh,Uid:8357436c-29f7-41cc-b362-07676225c053,Namespace:calico-system,Attempt:0,}"
Sep 12 05:55:02.456326 kubelet[2721]: E0912 05:55:02.456278 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:55:02.458335 containerd[1564]: time="2025-09-12T05:55:02.457798887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kqnvn,Uid:526b300e-a756-421a-97c5-0b7adb3bdb5c,Namespace:kube-system,Attempt:0,}"
Sep 12 05:55:02.462297 containerd[1564]: time="2025-09-12T05:55:02.462258528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h8thk,Uid:cf7de45e-2e1e-49e4-b9ac-e261fad55466,Namespace:calico-system,Attempt:0,}"
Sep 12 05:55:02.473609 containerd[1564]: time="2025-09-12T05:55:02.473528693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fc85cc6-9jmd6,Uid:46f6b528-ef1f-4323-b258-9e68fca12d4d,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 05:55:02.474329 containerd[1564]: time="2025-09-12T05:55:02.474286475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fc85cc6-l7rwv,Uid:aa088531-a52d-404c-8722-a5eb6ff66bbc,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 05:55:02.477588 containerd[1564]: time="2025-09-12T05:55:02.477194984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c56fb5448-5dkvd,Uid:a900808d-da78-40eb-83db-2a5d679a55e8,Namespace:calico-system,Attempt:0,}"
Sep 12 05:55:02.511027 containerd[1564]: time="2025-09-12T05:55:02.509764524Z" level=error msg="Failed to destroy network for sandbox \"6a548141d9368c3cd1dcec5ad7dae9bf9fa22e95adb1aabb6a202b8e6b19e308\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.514320 containerd[1564]: time="2025-09-12T05:55:02.514158259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-82dfx,Uid:50e41295-2104-45bb-9ef8-36b5a981ae68,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a548141d9368c3cd1dcec5ad7dae9bf9fa22e95adb1aabb6a202b8e6b19e308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.514545 kubelet[2721]: E0912 05:55:02.514428 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a548141d9368c3cd1dcec5ad7dae9bf9fa22e95adb1aabb6a202b8e6b19e308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.514545 kubelet[2721]: E0912 05:55:02.514493 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a548141d9368c3cd1dcec5ad7dae9bf9fa22e95adb1aabb6a202b8e6b19e308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-82dfx"
Sep 12 05:55:02.514545 kubelet[2721]: E0912 05:55:02.514515 2721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a548141d9368c3cd1dcec5ad7dae9bf9fa22e95adb1aabb6a202b8e6b19e308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-82dfx"
Sep 12 05:55:02.514724 kubelet[2721]: E0912 05:55:02.514695 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-82dfx_kube-system(50e41295-2104-45bb-9ef8-36b5a981ae68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-82dfx_kube-system(50e41295-2104-45bb-9ef8-36b5a981ae68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a548141d9368c3cd1dcec5ad7dae9bf9fa22e95adb1aabb6a202b8e6b19e308\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-82dfx" podUID="50e41295-2104-45bb-9ef8-36b5a981ae68"
Sep 12 05:55:02.548607 containerd[1564]: time="2025-09-12T05:55:02.548175756Z" level=error msg="Failed to destroy network for sandbox \"8a8ecff88ffcf5b3873011c1ff3cd0c35da8b71993107232d3beddd70e9557e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.549679 containerd[1564]: time="2025-09-12T05:55:02.549640696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58454d7cff-qnknh,Uid:8357436c-29f7-41cc-b362-07676225c053,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8ecff88ffcf5b3873011c1ff3cd0c35da8b71993107232d3beddd70e9557e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.550671 kubelet[2721]: E0912 05:55:02.549996 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8ecff88ffcf5b3873011c1ff3cd0c35da8b71993107232d3beddd70e9557e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.550745 kubelet[2721]: E0912 05:55:02.550715 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8ecff88ffcf5b3873011c1ff3cd0c35da8b71993107232d3beddd70e9557e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58454d7cff-qnknh"
Sep 12 05:55:02.550795 kubelet[2721]: E0912 05:55:02.550746 2721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8ecff88ffcf5b3873011c1ff3cd0c35da8b71993107232d3beddd70e9557e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58454d7cff-qnknh"
Sep 12 05:55:02.551152 kubelet[2721]: E0912 05:55:02.551105 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58454d7cff-qnknh_calico-system(8357436c-29f7-41cc-b362-07676225c053)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58454d7cff-qnknh_calico-system(8357436c-29f7-41cc-b362-07676225c053)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a8ecff88ffcf5b3873011c1ff3cd0c35da8b71993107232d3beddd70e9557e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58454d7cff-qnknh" podUID="8357436c-29f7-41cc-b362-07676225c053"
Sep 12 05:55:02.564091 containerd[1564]: time="2025-09-12T05:55:02.563948384Z" level=error msg="Failed to destroy network for sandbox \"5d1f1d27f097134482fbec8eee192b8c51b04a191d3ba6731c506856667cb4bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.564950 containerd[1564]: time="2025-09-12T05:55:02.564928136Z" level=error msg="Failed to destroy network for sandbox \"da5b2d61a0ff7a908574f959a834d84f0f3f061fe7fd1af35612017e2cc6976e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.575522 containerd[1564]: time="2025-09-12T05:55:02.575451158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h8thk,Uid:cf7de45e-2e1e-49e4-b9ac-e261fad55466,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5b2d61a0ff7a908574f959a834d84f0f3f061fe7fd1af35612017e2cc6976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.575934 kubelet[2721]: E0912 05:55:02.575882 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5b2d61a0ff7a908574f959a834d84f0f3f061fe7fd1af35612017e2cc6976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.576001 kubelet[2721]: E0912 05:55:02.575960 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5b2d61a0ff7a908574f959a834d84f0f3f061fe7fd1af35612017e2cc6976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-h8thk"
Sep 12 05:55:02.576001 kubelet[2721]: E0912 05:55:02.575982 2721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5b2d61a0ff7a908574f959a834d84f0f3f061fe7fd1af35612017e2cc6976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-h8thk"
Sep 12 05:55:02.576158 kubelet[2721]: E0912 05:55:02.576029 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-h8thk_calico-system(cf7de45e-2e1e-49e4-b9ac-e261fad55466)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-h8thk_calico-system(cf7de45e-2e1e-49e4-b9ac-e261fad55466)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da5b2d61a0ff7a908574f959a834d84f0f3f061fe7fd1af35612017e2cc6976e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-h8thk" podUID="cf7de45e-2e1e-49e4-b9ac-e261fad55466"
Sep 12 05:55:02.577394 containerd[1564]: time="2025-09-12T05:55:02.577337294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kqnvn,Uid:526b300e-a756-421a-97c5-0b7adb3bdb5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d1f1d27f097134482fbec8eee192b8c51b04a191d3ba6731c506856667cb4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.578961 kubelet[2721]: E0912 05:55:02.577642 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d1f1d27f097134482fbec8eee192b8c51b04a191d3ba6731c506856667cb4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.578961 kubelet[2721]: E0912 05:55:02.577716 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d1f1d27f097134482fbec8eee192b8c51b04a191d3ba6731c506856667cb4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kqnvn"
Sep 12 05:55:02.578961 kubelet[2721]: E0912 05:55:02.577742 2721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d1f1d27f097134482fbec8eee192b8c51b04a191d3ba6731c506856667cb4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kqnvn"
Sep 12 05:55:02.579201 containerd[1564]: time="2025-09-12T05:55:02.577958348Z" level=error msg="Failed to destroy network for sandbox \"0bc2b309f8be3fbb1c8163c582531a065b3d61d12dbf366a5615f08b53fc4912\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.579201 containerd[1564]: time="2025-09-12T05:55:02.579160521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fc85cc6-l7rwv,Uid:aa088531-a52d-404c-8722-a5eb6ff66bbc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc2b309f8be3fbb1c8163c582531a065b3d61d12dbf366a5615f08b53fc4912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:55:02.579369 kubelet[2721]: E0912 05:55:02.577834 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kqnvn_kube-system(526b300e-a756-421a-97c5-0b7adb3bdb5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kqnvn_kube-system(526b300e-a756-421a-97c5-0b7adb3bdb5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox
\\\"5d1f1d27f097134482fbec8eee192b8c51b04a191d3ba6731c506856667cb4bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kqnvn" podUID="526b300e-a756-421a-97c5-0b7adb3bdb5c" Sep 12 05:55:02.579369 kubelet[2721]: E0912 05:55:02.579305 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc2b309f8be3fbb1c8163c582531a065b3d61d12dbf366a5615f08b53fc4912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:55:02.579369 kubelet[2721]: E0912 05:55:02.579338 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc2b309f8be3fbb1c8163c582531a065b3d61d12dbf366a5615f08b53fc4912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59fc85cc6-l7rwv" Sep 12 05:55:02.579499 kubelet[2721]: E0912 05:55:02.579353 2721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc2b309f8be3fbb1c8163c582531a065b3d61d12dbf366a5615f08b53fc4912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59fc85cc6-l7rwv" Sep 12 05:55:02.579499 kubelet[2721]: E0912 05:55:02.579384 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-59fc85cc6-l7rwv_calico-apiserver(aa088531-a52d-404c-8722-a5eb6ff66bbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59fc85cc6-l7rwv_calico-apiserver(aa088531-a52d-404c-8722-a5eb6ff66bbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bc2b309f8be3fbb1c8163c582531a065b3d61d12dbf366a5615f08b53fc4912\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59fc85cc6-l7rwv" podUID="aa088531-a52d-404c-8722-a5eb6ff66bbc" Sep 12 05:55:02.593386 containerd[1564]: time="2025-09-12T05:55:02.593319136Z" level=error msg="Failed to destroy network for sandbox \"56170b5eb7b6380238e75140c53771f8f0c0d34eb9bf2836444c2dd1ff7735a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:55:02.594590 containerd[1564]: time="2025-09-12T05:55:02.594538351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fc85cc6-9jmd6,Uid:46f6b528-ef1f-4323-b258-9e68fca12d4d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56170b5eb7b6380238e75140c53771f8f0c0d34eb9bf2836444c2dd1ff7735a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:55:02.594857 kubelet[2721]: E0912 05:55:02.594825 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56170b5eb7b6380238e75140c53771f8f0c0d34eb9bf2836444c2dd1ff7735a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:55:02.594937 kubelet[2721]: E0912 05:55:02.594870 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56170b5eb7b6380238e75140c53771f8f0c0d34eb9bf2836444c2dd1ff7735a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59fc85cc6-9jmd6" Sep 12 05:55:02.594937 kubelet[2721]: E0912 05:55:02.594887 2721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56170b5eb7b6380238e75140c53771f8f0c0d34eb9bf2836444c2dd1ff7735a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59fc85cc6-9jmd6" Sep 12 05:55:02.595007 kubelet[2721]: E0912 05:55:02.594933 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59fc85cc6-9jmd6_calico-apiserver(46f6b528-ef1f-4323-b258-9e68fca12d4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59fc85cc6-9jmd6_calico-apiserver(46f6b528-ef1f-4323-b258-9e68fca12d4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56170b5eb7b6380238e75140c53771f8f0c0d34eb9bf2836444c2dd1ff7735a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59fc85cc6-9jmd6" podUID="46f6b528-ef1f-4323-b258-9e68fca12d4d" Sep 12 05:55:02.602318 containerd[1564]: 
time="2025-09-12T05:55:02.602268474Z" level=error msg="Failed to destroy network for sandbox \"635f3c4e9c7175da56509a8227f5507beef7ed4ae71c10a9276f045730ab6d2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:55:02.603714 containerd[1564]: time="2025-09-12T05:55:02.603552933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c56fb5448-5dkvd,Uid:a900808d-da78-40eb-83db-2a5d679a55e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"635f3c4e9c7175da56509a8227f5507beef7ed4ae71c10a9276f045730ab6d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:55:02.603874 kubelet[2721]: E0912 05:55:02.603819 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"635f3c4e9c7175da56509a8227f5507beef7ed4ae71c10a9276f045730ab6d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:55:02.603912 kubelet[2721]: E0912 05:55:02.603896 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"635f3c4e9c7175da56509a8227f5507beef7ed4ae71c10a9276f045730ab6d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c56fb5448-5dkvd" Sep 12 05:55:02.603937 kubelet[2721]: E0912 05:55:02.603920 2721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"635f3c4e9c7175da56509a8227f5507beef7ed4ae71c10a9276f045730ab6d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c56fb5448-5dkvd" Sep 12 05:55:02.603989 kubelet[2721]: E0912 05:55:02.603961 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c56fb5448-5dkvd_calico-system(a900808d-da78-40eb-83db-2a5d679a55e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c56fb5448-5dkvd_calico-system(a900808d-da78-40eb-83db-2a5d679a55e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"635f3c4e9c7175da56509a8227f5507beef7ed4ae71c10a9276f045730ab6d2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c56fb5448-5dkvd" podUID="a900808d-da78-40eb-83db-2a5d679a55e8" Sep 12 05:55:02.969796 systemd[1]: run-netns-cni\x2dd0e4a724\x2da80b\x2ddaac\x2df21b\x2d59f250c37992.mount: Deactivated successfully. Sep 12 05:55:03.198931 containerd[1564]: time="2025-09-12T05:55:03.198881550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 05:55:12.170934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090179158.mount: Deactivated successfully. 
Sep 12 05:55:13.094075 kubelet[2721]: E0912 05:55:13.094018 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:55:13.094589 containerd[1564]: time="2025-09-12T05:55:13.094423669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-82dfx,Uid:50e41295-2104-45bb-9ef8-36b5a981ae68,Namespace:kube-system,Attempt:0,}" Sep 12 05:55:13.304658 containerd[1564]: time="2025-09-12T05:55:13.303945480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:55:13.304920 containerd[1564]: time="2025-09-12T05:55:13.304897947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 05:55:13.305997 containerd[1564]: time="2025-09-12T05:55:13.305971491Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:55:13.308547 containerd[1564]: time="2025-09-12T05:55:13.308510650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:55:13.309194 containerd[1564]: time="2025-09-12T05:55:13.309166456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.11024395s" Sep 12 05:55:13.309288 containerd[1564]: time="2025-09-12T05:55:13.309270512Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 05:55:13.321966 containerd[1564]: time="2025-09-12T05:55:13.321816642Z" level=info msg="CreateContainer within sandbox \"829fab6e1d9e86bfccffb44bcf6c59e237f5c1dd1162db1bba695507e3a99a19\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 05:55:13.343838 containerd[1564]: time="2025-09-12T05:55:13.342490079Z" level=info msg="Container 6e41fe1433500302fc2b8bd1f9ab9ecfe9ccb716a3d9c847ea64291af4cb6a5b: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:55:13.368221 containerd[1564]: time="2025-09-12T05:55:13.367947217Z" level=error msg="Failed to destroy network for sandbox \"d3fe550b11f3d1a076b1f09dc5fcc06eb23e5f63f35c6e219b7cc73b08f848d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:55:13.370144 containerd[1564]: time="2025-09-12T05:55:13.370033842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-82dfx,Uid:50e41295-2104-45bb-9ef8-36b5a981ae68,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3fe550b11f3d1a076b1f09dc5fcc06eb23e5f63f35c6e219b7cc73b08f848d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:55:13.370425 kubelet[2721]: E0912 05:55:13.370353 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3fe550b11f3d1a076b1f09dc5fcc06eb23e5f63f35c6e219b7cc73b08f848d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 12 05:55:13.370517 kubelet[2721]: E0912 05:55:13.370456 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3fe550b11f3d1a076b1f09dc5fcc06eb23e5f63f35c6e219b7cc73b08f848d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-82dfx" Sep 12 05:55:13.370517 kubelet[2721]: E0912 05:55:13.370483 2721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3fe550b11f3d1a076b1f09dc5fcc06eb23e5f63f35c6e219b7cc73b08f848d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-82dfx" Sep 12 05:55:13.371657 kubelet[2721]: E0912 05:55:13.371533 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-82dfx_kube-system(50e41295-2104-45bb-9ef8-36b5a981ae68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-82dfx_kube-system(50e41295-2104-45bb-9ef8-36b5a981ae68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3fe550b11f3d1a076b1f09dc5fcc06eb23e5f63f35c6e219b7cc73b08f848d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-82dfx" podUID="50e41295-2104-45bb-9ef8-36b5a981ae68" Sep 12 05:55:13.382127 containerd[1564]: time="2025-09-12T05:55:13.382084467Z" level=info msg="CreateContainer within sandbox 
\"829fab6e1d9e86bfccffb44bcf6c59e237f5c1dd1162db1bba695507e3a99a19\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6e41fe1433500302fc2b8bd1f9ab9ecfe9ccb716a3d9c847ea64291af4cb6a5b\"" Sep 12 05:55:13.382764 containerd[1564]: time="2025-09-12T05:55:13.382700859Z" level=info msg="StartContainer for \"6e41fe1433500302fc2b8bd1f9ab9ecfe9ccb716a3d9c847ea64291af4cb6a5b\"" Sep 12 05:55:13.384615 containerd[1564]: time="2025-09-12T05:55:13.384581816Z" level=info msg="connecting to shim 6e41fe1433500302fc2b8bd1f9ab9ecfe9ccb716a3d9c847ea64291af4cb6a5b" address="unix:///run/containerd/s/88a65e2934ebb5ba389bd3d7730448298085f375fa82e4a9b29487b9966d4da9" protocol=ttrpc version=3 Sep 12 05:55:13.413726 systemd[1]: Started cri-containerd-6e41fe1433500302fc2b8bd1f9ab9ecfe9ccb716a3d9c847ea64291af4cb6a5b.scope - libcontainer container 6e41fe1433500302fc2b8bd1f9ab9ecfe9ccb716a3d9c847ea64291af4cb6a5b. Sep 12 05:55:13.459497 containerd[1564]: time="2025-09-12T05:55:13.459452510Z" level=info msg="StartContainer for \"6e41fe1433500302fc2b8bd1f9ab9ecfe9ccb716a3d9c847ea64291af4cb6a5b\" returns successfully" Sep 12 05:55:13.544279 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 05:55:13.545552 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 05:55:13.705724 kubelet[2721]: I0912 05:55:13.705466 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a900808d-da78-40eb-83db-2a5d679a55e8-whisker-ca-bundle\") pod \"a900808d-da78-40eb-83db-2a5d679a55e8\" (UID: \"a900808d-da78-40eb-83db-2a5d679a55e8\") " Sep 12 05:55:13.705724 kubelet[2721]: I0912 05:55:13.705530 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a900808d-da78-40eb-83db-2a5d679a55e8-whisker-backend-key-pair\") pod \"a900808d-da78-40eb-83db-2a5d679a55e8\" (UID: \"a900808d-da78-40eb-83db-2a5d679a55e8\") " Sep 12 05:55:13.707318 kubelet[2721]: I0912 05:55:13.707241 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8c9v\" (UniqueName: \"kubernetes.io/projected/a900808d-da78-40eb-83db-2a5d679a55e8-kube-api-access-c8c9v\") pod \"a900808d-da78-40eb-83db-2a5d679a55e8\" (UID: \"a900808d-da78-40eb-83db-2a5d679a55e8\") " Sep 12 05:55:13.707511 kubelet[2721]: I0912 05:55:13.707489 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a900808d-da78-40eb-83db-2a5d679a55e8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a900808d-da78-40eb-83db-2a5d679a55e8" (UID: "a900808d-da78-40eb-83db-2a5d679a55e8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 05:55:13.712792 kubelet[2721]: I0912 05:55:13.712739 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a900808d-da78-40eb-83db-2a5d679a55e8-kube-api-access-c8c9v" (OuterVolumeSpecName: "kube-api-access-c8c9v") pod "a900808d-da78-40eb-83db-2a5d679a55e8" (UID: "a900808d-da78-40eb-83db-2a5d679a55e8"). InnerVolumeSpecName "kube-api-access-c8c9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 05:55:13.713777 kubelet[2721]: I0912 05:55:13.713720 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a900808d-da78-40eb-83db-2a5d679a55e8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a900808d-da78-40eb-83db-2a5d679a55e8" (UID: "a900808d-da78-40eb-83db-2a5d679a55e8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 05:55:13.808080 kubelet[2721]: I0912 05:55:13.807998 2721 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a900808d-da78-40eb-83db-2a5d679a55e8-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 05:55:13.808080 kubelet[2721]: I0912 05:55:13.808039 2721 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a900808d-da78-40eb-83db-2a5d679a55e8-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 05:55:13.808080 kubelet[2721]: I0912 05:55:13.808050 2721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8c9v\" (UniqueName: \"kubernetes.io/projected/a900808d-da78-40eb-83db-2a5d679a55e8-kube-api-access-c8c9v\") on node \"localhost\" DevicePath \"\"" Sep 12 05:55:14.094231 kubelet[2721]: E0912 05:55:14.094165 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:55:14.095017 containerd[1564]: time="2025-09-12T05:55:14.094290707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h8thk,Uid:cf7de45e-2e1e-49e4-b9ac-e261fad55466,Namespace:calico-system,Attempt:0,}" Sep 12 05:55:14.095017 containerd[1564]: time="2025-09-12T05:55:14.094839192Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kqnvn,Uid:526b300e-a756-421a-97c5-0b7adb3bdb5c,Namespace:kube-system,Attempt:0,}" Sep 12 05:55:14.302789 systemd[1]: run-netns-cni\x2d06fb7e4f\x2dd664\x2d55c8\x2dba59\x2d416c2912e583.mount: Deactivated successfully. Sep 12 05:55:14.302921 systemd[1]: var-lib-kubelet-pods-a900808d\x2dda78\x2d40eb\x2d83db\x2d2a5d679a55e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc8c9v.mount: Deactivated successfully. Sep 12 05:55:14.303022 systemd[1]: var-lib-kubelet-pods-a900808d\x2dda78\x2d40eb\x2d83db\x2d2a5d679a55e8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 05:55:14.783403 systemd[1]: Removed slice kubepods-besteffort-poda900808d_da78_40eb_83db_2a5d679a55e8.slice - libcontainer container kubepods-besteffort-poda900808d_da78_40eb_83db_2a5d679a55e8.slice. Sep 12 05:55:14.997432 containerd[1564]: time="2025-09-12T05:55:14.997368974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e41fe1433500302fc2b8bd1f9ab9ecfe9ccb716a3d9c847ea64291af4cb6a5b\" id:\"38e8ab03bc2d1d405cd055960e327df5761e214ed77118be5acdf1a1f974bae0\" pid:4026 exit_status:1 exited_at:{seconds:1757656514 nanos:996525885}" Sep 12 05:55:15.134470 kubelet[2721]: I0912 05:55:15.134377 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sjjqp" podStartSLOduration=2.73991839 podStartE2EDuration="23.134349683s" podCreationTimestamp="2025-09-12 05:54:52 +0000 UTC" firstStartedPulling="2025-09-12 05:54:52.915941168 +0000 UTC m=+16.922393147" lastFinishedPulling="2025-09-12 05:55:13.310372461 +0000 UTC m=+37.316824440" observedRunningTime="2025-09-12 05:55:14.972884137 +0000 UTC m=+38.979336126" watchObservedRunningTime="2025-09-12 05:55:15.134349683 +0000 UTC m=+39.140801652" Sep 12 05:55:15.224529 systemd[1]: Created slice kubepods-besteffort-podfc85a0ea_7f63_43e4_87c4_ba496006d159.slice - libcontainer container 
kubepods-besteffort-podfc85a0ea_7f63_43e4_87c4_ba496006d159.slice. Sep 12 05:55:15.316503 kubelet[2721]: I0912 05:55:15.316393 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc85a0ea-7f63-43e4-87c4-ba496006d159-whisker-backend-key-pair\") pod \"whisker-fccf6cb4c-4dj67\" (UID: \"fc85a0ea-7f63-43e4-87c4-ba496006d159\") " pod="calico-system/whisker-fccf6cb4c-4dj67" Sep 12 05:55:15.316503 kubelet[2721]: I0912 05:55:15.316465 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc85a0ea-7f63-43e4-87c4-ba496006d159-whisker-ca-bundle\") pod \"whisker-fccf6cb4c-4dj67\" (UID: \"fc85a0ea-7f63-43e4-87c4-ba496006d159\") " pod="calico-system/whisker-fccf6cb4c-4dj67" Sep 12 05:55:15.316503 kubelet[2721]: I0912 05:55:15.316483 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44tr7\" (UniqueName: \"kubernetes.io/projected/fc85a0ea-7f63-43e4-87c4-ba496006d159-kube-api-access-44tr7\") pod \"whisker-fccf6cb4c-4dj67\" (UID: \"fc85a0ea-7f63-43e4-87c4-ba496006d159\") " pod="calico-system/whisker-fccf6cb4c-4dj67" Sep 12 05:55:15.366749 systemd-networkd[1475]: calia7fbf29ec54: Link UP Sep 12 05:55:15.367086 systemd-networkd[1475]: calia7fbf29ec54: Gained carrier Sep 12 05:55:15.378668 containerd[1564]: 2025-09-12 05:55:15.234 [INFO][4128] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 05:55:15.378668 containerd[1564]: 2025-09-12 05:55:15.253 [INFO][4128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--h8thk-eth0 goldmane-7988f88666- calico-system cf7de45e-2e1e-49e4-b9ac-e261fad55466 818 0 2025-09-12 05:54:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane 
pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-h8thk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia7fbf29ec54 [] [] }} ContainerID="657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" Namespace="calico-system" Pod="goldmane-7988f88666-h8thk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--h8thk-" Sep 12 05:55:15.378668 containerd[1564]: 2025-09-12 05:55:15.253 [INFO][4128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" Namespace="calico-system" Pod="goldmane-7988f88666-h8thk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--h8thk-eth0" Sep 12 05:55:15.378668 containerd[1564]: 2025-09-12 05:55:15.319 [INFO][4152] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" HandleID="k8s-pod-network.657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" Workload="localhost-k8s-goldmane--7988f88666--h8thk-eth0" Sep 12 05:55:15.379185 containerd[1564]: 2025-09-12 05:55:15.320 [INFO][4152] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" HandleID="k8s-pod-network.657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" Workload="localhost-k8s-goldmane--7988f88666--h8thk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032a320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-h8thk", "timestamp":"2025-09-12 05:55:15.319921008 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:55:15.379185 containerd[1564]: 2025-09-12 05:55:15.320 [INFO][4152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:55:15.379185 containerd[1564]: 2025-09-12 05:55:15.321 [INFO][4152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 05:55:15.379185 containerd[1564]: 2025-09-12 05:55:15.321 [INFO][4152] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:55:15.379185 containerd[1564]: 2025-09-12 05:55:15.330 [INFO][4152] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" host="localhost" Sep 12 05:55:15.379185 containerd[1564]: 2025-09-12 05:55:15.336 [INFO][4152] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:55:15.379185 containerd[1564]: 2025-09-12 05:55:15.340 [INFO][4152] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:55:15.379185 containerd[1564]: 2025-09-12 05:55:15.342 [INFO][4152] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:15.379185 containerd[1564]: 2025-09-12 05:55:15.344 [INFO][4152] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:15.379185 containerd[1564]: 2025-09-12 05:55:15.344 [INFO][4152] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" host="localhost" Sep 12 05:55:15.379442 containerd[1564]: 2025-09-12 05:55:15.346 [INFO][4152] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e Sep 12 05:55:15.379442 containerd[1564]: 2025-09-12 05:55:15.349 [INFO][4152] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" host="localhost" Sep 12 05:55:15.379442 containerd[1564]: 2025-09-12 05:55:15.354 [INFO][4152] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" host="localhost" Sep 12 05:55:15.379442 containerd[1564]: 2025-09-12 05:55:15.355 [INFO][4152] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" host="localhost" Sep 12 05:55:15.379442 containerd[1564]: 2025-09-12 05:55:15.355 [INFO][4152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:55:15.379442 containerd[1564]: 2025-09-12 05:55:15.355 [INFO][4152] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" HandleID="k8s-pod-network.657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" Workload="localhost-k8s-goldmane--7988f88666--h8thk-eth0" Sep 12 05:55:15.379606 containerd[1564]: 2025-09-12 05:55:15.358 [INFO][4128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" Namespace="calico-system" Pod="goldmane-7988f88666-h8thk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--h8thk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--h8thk-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"cf7de45e-2e1e-49e4-b9ac-e261fad55466", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 52, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-h8thk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia7fbf29ec54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:15.379606 containerd[1564]: 2025-09-12 05:55:15.358 [INFO][4128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" Namespace="calico-system" Pod="goldmane-7988f88666-h8thk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--h8thk-eth0" Sep 12 05:55:15.379694 containerd[1564]: 2025-09-12 05:55:15.358 [INFO][4128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7fbf29ec54 ContainerID="657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" Namespace="calico-system" Pod="goldmane-7988f88666-h8thk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--h8thk-eth0" Sep 12 05:55:15.379694 containerd[1564]: 2025-09-12 05:55:15.367 [INFO][4128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" Namespace="calico-system" Pod="goldmane-7988f88666-h8thk" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--h8thk-eth0" Sep 12 05:55:15.379739 containerd[1564]: 2025-09-12 05:55:15.368 [INFO][4128] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" Namespace="calico-system" Pod="goldmane-7988f88666-h8thk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--h8thk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--h8thk-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"cf7de45e-2e1e-49e4-b9ac-e261fad55466", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e", Pod:"goldmane-7988f88666-h8thk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia7fbf29ec54", MAC:"a2:27:f2:b1:80:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:15.379788 containerd[1564]: 2025-09-12 05:55:15.375 [INFO][4128] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" Namespace="calico-system" Pod="goldmane-7988f88666-h8thk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--h8thk-eth0" Sep 12 05:55:15.472751 systemd-networkd[1475]: cali9c206c7bb9c: Link UP Sep 12 05:55:15.473592 systemd-networkd[1475]: cali9c206c7bb9c: Gained carrier Sep 12 05:55:15.488632 containerd[1564]: 2025-09-12 05:55:15.229 [INFO][4120] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 05:55:15.488632 containerd[1564]: 2025-09-12 05:55:15.253 [INFO][4120] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0 coredns-7c65d6cfc9- kube-system 526b300e-a756-421a-97c5-0b7adb3bdb5c 814 0 2025-09-12 05:54:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-kqnvn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9c206c7bb9c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kqnvn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kqnvn-" Sep 12 05:55:15.488632 containerd[1564]: 2025-09-12 05:55:15.253 [INFO][4120] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kqnvn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0" Sep 12 05:55:15.488632 containerd[1564]: 2025-09-12 05:55:15.319 [INFO][4151] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" HandleID="k8s-pod-network.334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" Workload="localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0" Sep 12 05:55:15.488888 containerd[1564]: 2025-09-12 05:55:15.320 [INFO][4151] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" HandleID="k8s-pod-network.334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" Workload="localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032c870), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-kqnvn", "timestamp":"2025-09-12 05:55:15.319908704 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:55:15.488888 containerd[1564]: 2025-09-12 05:55:15.321 [INFO][4151] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:55:15.488888 containerd[1564]: 2025-09-12 05:55:15.355 [INFO][4151] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 05:55:15.488888 containerd[1564]: 2025-09-12 05:55:15.355 [INFO][4151] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:55:15.488888 containerd[1564]: 2025-09-12 05:55:15.433 [INFO][4151] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" host="localhost" Sep 12 05:55:15.488888 containerd[1564]: 2025-09-12 05:55:15.440 [INFO][4151] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:55:15.488888 containerd[1564]: 2025-09-12 05:55:15.445 [INFO][4151] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:55:15.488888 containerd[1564]: 2025-09-12 05:55:15.447 [INFO][4151] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:15.488888 containerd[1564]: 2025-09-12 05:55:15.449 [INFO][4151] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:15.488888 containerd[1564]: 2025-09-12 05:55:15.449 [INFO][4151] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" host="localhost" Sep 12 05:55:15.489106 containerd[1564]: 2025-09-12 05:55:15.451 [INFO][4151] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57 Sep 12 05:55:15.489106 containerd[1564]: 2025-09-12 05:55:15.457 [INFO][4151] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" host="localhost" Sep 12 05:55:15.489106 containerd[1564]: 2025-09-12 05:55:15.465 [INFO][4151] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" host="localhost" Sep 12 05:55:15.489106 containerd[1564]: 2025-09-12 05:55:15.465 [INFO][4151] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" host="localhost" Sep 12 05:55:15.489106 containerd[1564]: 2025-09-12 05:55:15.465 [INFO][4151] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:55:15.489106 containerd[1564]: 2025-09-12 05:55:15.466 [INFO][4151] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" HandleID="k8s-pod-network.334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" Workload="localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0" Sep 12 05:55:15.489227 containerd[1564]: 2025-09-12 05:55:15.469 [INFO][4120] cni-plugin/k8s.go 418: Populated endpoint ContainerID="334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kqnvn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"526b300e-a756-421a-97c5-0b7adb3bdb5c", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-kqnvn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c206c7bb9c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:15.489299 containerd[1564]: 2025-09-12 05:55:15.470 [INFO][4120] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kqnvn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0" Sep 12 05:55:15.489299 containerd[1564]: 2025-09-12 05:55:15.470 [INFO][4120] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c206c7bb9c ContainerID="334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kqnvn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0" Sep 12 05:55:15.489299 containerd[1564]: 2025-09-12 05:55:15.474 [INFO][4120] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kqnvn" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0" Sep 12 05:55:15.489371 containerd[1564]: 2025-09-12 05:55:15.474 [INFO][4120] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kqnvn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"526b300e-a756-421a-97c5-0b7adb3bdb5c", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57", Pod:"coredns-7c65d6cfc9-kqnvn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c206c7bb9c", MAC:"7a:06:8c:e0:ad:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:15.489371 containerd[1564]: 2025-09-12 05:55:15.484 [INFO][4120] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kqnvn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kqnvn-eth0" Sep 12 05:55:15.529658 containerd[1564]: time="2025-09-12T05:55:15.529592338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fccf6cb4c-4dj67,Uid:fc85a0ea-7f63-43e4-87c4-ba496006d159,Namespace:calico-system,Attempt:0,}" Sep 12 05:55:15.568018 containerd[1564]: time="2025-09-12T05:55:15.567860114Z" level=info msg="connecting to shim 334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57" address="unix:///run/containerd/s/d472f66489344a1dc7c3ebb71c8ed566cfb12a3803d3e28d00588ef593a6c173" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:55:15.573537 containerd[1564]: time="2025-09-12T05:55:15.573498454Z" level=info msg="connecting to shim 657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e" address="unix:///run/containerd/s/b2546042ddd25c6883d5fc928c39b01f20eeeabd2ed27e92574f30f3069e760a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:55:15.599801 systemd[1]: Started cri-containerd-657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e.scope - libcontainer container 657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e. Sep 12 05:55:15.604786 systemd[1]: Started cri-containerd-334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57.scope - libcontainer container 334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57. 
Sep 12 05:55:15.622808 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:55:15.624933 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:55:15.670713 systemd-networkd[1475]: cali7ea3c87f374: Link UP Sep 12 05:55:15.671410 systemd-networkd[1475]: cali7ea3c87f374: Gained carrier Sep 12 05:55:15.677656 containerd[1564]: time="2025-09-12T05:55:15.677606658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kqnvn,Uid:526b300e-a756-421a-97c5-0b7adb3bdb5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57\"" Sep 12 05:55:15.678538 kubelet[2721]: E0912 05:55:15.678515 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:55:15.680301 containerd[1564]: time="2025-09-12T05:55:15.680263637Z" level=info msg="CreateContainer within sandbox \"334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 05:55:15.681763 containerd[1564]: time="2025-09-12T05:55:15.681662213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h8thk,Uid:cf7de45e-2e1e-49e4-b9ac-e261fad55466,Namespace:calico-system,Attempt:0,} returns sandbox id \"657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e\"" Sep 12 05:55:15.684494 containerd[1564]: time="2025-09-12T05:55:15.684431815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.558 [INFO][4193] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.575 [INFO][4193] cni-plugin/plugin.go 340: Calico CNI 
found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--fccf6cb4c--4dj67-eth0 whisker-fccf6cb4c- calico-system fc85a0ea-7f63-43e4-87c4-ba496006d159 898 0 2025-09-12 05:55:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fccf6cb4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-fccf6cb4c-4dj67 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7ea3c87f374 [] [] }} ContainerID="3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" Namespace="calico-system" Pod="whisker-fccf6cb4c-4dj67" WorkloadEndpoint="localhost-k8s-whisker--fccf6cb4c--4dj67-" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.576 [INFO][4193] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" Namespace="calico-system" Pod="whisker-fccf6cb4c-4dj67" WorkloadEndpoint="localhost-k8s-whisker--fccf6cb4c--4dj67-eth0" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.620 [INFO][4254] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" HandleID="k8s-pod-network.3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" Workload="localhost-k8s-whisker--fccf6cb4c--4dj67-eth0" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.620 [INFO][4254] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" HandleID="k8s-pod-network.3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" Workload="localhost-k8s-whisker--fccf6cb4c--4dj67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000522b50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"whisker-fccf6cb4c-4dj67", "timestamp":"2025-09-12 05:55:15.620559478 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.621 [INFO][4254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.621 [INFO][4254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.621 [INFO][4254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.633 [INFO][4254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" host="localhost" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.637 [INFO][4254] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.642 [INFO][4254] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.645 [INFO][4254] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.647 [INFO][4254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.647 [INFO][4254] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" host="localhost" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.649 [INFO][4254] ipam/ipam.go 1764: 
Creating new handle: k8s-pod-network.3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6 Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.654 [INFO][4254] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" host="localhost" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.662 [INFO][4254] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" host="localhost" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.663 [INFO][4254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" host="localhost" Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.663 [INFO][4254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 05:55:15.687891 containerd[1564]: 2025-09-12 05:55:15.663 [INFO][4254] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" HandleID="k8s-pod-network.3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" Workload="localhost-k8s-whisker--fccf6cb4c--4dj67-eth0" Sep 12 05:55:15.688990 containerd[1564]: 2025-09-12 05:55:15.667 [INFO][4193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" Namespace="calico-system" Pod="whisker-fccf6cb4c-4dj67" WorkloadEndpoint="localhost-k8s-whisker--fccf6cb4c--4dj67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--fccf6cb4c--4dj67-eth0", GenerateName:"whisker-fccf6cb4c-", Namespace:"calico-system", SelfLink:"", UID:"fc85a0ea-7f63-43e4-87c4-ba496006d159", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fccf6cb4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-fccf6cb4c-4dj67", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7ea3c87f374", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:15.688990 containerd[1564]: 2025-09-12 05:55:15.667 [INFO][4193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" Namespace="calico-system" Pod="whisker-fccf6cb4c-4dj67" WorkloadEndpoint="localhost-k8s-whisker--fccf6cb4c--4dj67-eth0" Sep 12 05:55:15.688990 containerd[1564]: 2025-09-12 05:55:15.667 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ea3c87f374 ContainerID="3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" Namespace="calico-system" Pod="whisker-fccf6cb4c-4dj67" WorkloadEndpoint="localhost-k8s-whisker--fccf6cb4c--4dj67-eth0" Sep 12 05:55:15.688990 containerd[1564]: 2025-09-12 05:55:15.671 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" Namespace="calico-system" Pod="whisker-fccf6cb4c-4dj67" WorkloadEndpoint="localhost-k8s-whisker--fccf6cb4c--4dj67-eth0" Sep 12 05:55:15.688990 containerd[1564]: 2025-09-12 05:55:15.671 [INFO][4193] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" Namespace="calico-system" Pod="whisker-fccf6cb4c-4dj67" WorkloadEndpoint="localhost-k8s-whisker--fccf6cb4c--4dj67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--fccf6cb4c--4dj67-eth0", GenerateName:"whisker-fccf6cb4c-", Namespace:"calico-system", SelfLink:"", UID:"fc85a0ea-7f63-43e4-87c4-ba496006d159", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 55, 15, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fccf6cb4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6", Pod:"whisker-fccf6cb4c-4dj67", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7ea3c87f374", MAC:"6e:54:a6:b9:51:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:15.688990 containerd[1564]: 2025-09-12 05:55:15.684 [INFO][4193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" Namespace="calico-system" Pod="whisker-fccf6cb4c-4dj67" WorkloadEndpoint="localhost-k8s-whisker--fccf6cb4c--4dj67-eth0" Sep 12 05:55:15.701556 containerd[1564]: time="2025-09-12T05:55:15.701506386Z" level=info msg="Container f482831ca1c6984f9888dd2197d50bf6267c898866872f2988fb8c30fdfb6c09: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:55:15.708819 containerd[1564]: time="2025-09-12T05:55:15.708770650Z" level=info msg="CreateContainer within sandbox \"334f1540c86f012bf43b39fc61039d267d8ff0bbf1e6fe179697c93635d1cf57\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f482831ca1c6984f9888dd2197d50bf6267c898866872f2988fb8c30fdfb6c09\"" Sep 12 05:55:15.709473 containerd[1564]: time="2025-09-12T05:55:15.709333662Z" level=info 
msg="StartContainer for \"f482831ca1c6984f9888dd2197d50bf6267c898866872f2988fb8c30fdfb6c09\"" Sep 12 05:55:15.710254 containerd[1564]: time="2025-09-12T05:55:15.710222127Z" level=info msg="connecting to shim f482831ca1c6984f9888dd2197d50bf6267c898866872f2988fb8c30fdfb6c09" address="unix:///run/containerd/s/d472f66489344a1dc7c3ebb71c8ed566cfb12a3803d3e28d00588ef593a6c173" protocol=ttrpc version=3 Sep 12 05:55:15.722350 containerd[1564]: time="2025-09-12T05:55:15.722302171Z" level=info msg="connecting to shim 3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6" address="unix:///run/containerd/s/448509883b7eb550626b1115146124221aa369e014e17d43b5c9a87f09aac6a7" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:55:15.734757 systemd[1]: Started cri-containerd-f482831ca1c6984f9888dd2197d50bf6267c898866872f2988fb8c30fdfb6c09.scope - libcontainer container f482831ca1c6984f9888dd2197d50bf6267c898866872f2988fb8c30fdfb6c09. Sep 12 05:55:15.763771 systemd[1]: Started cri-containerd-3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6.scope - libcontainer container 3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6. 
Sep 12 05:55:15.788604 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:55:15.873178 containerd[1564]: time="2025-09-12T05:55:15.873113615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e41fe1433500302fc2b8bd1f9ab9ecfe9ccb716a3d9c847ea64291af4cb6a5b\" id:\"35154fa6e8f2008ecfea28668435367c4e8a9cbf9b1915281bc81657cb72bb52\" pid:4382 exit_status:1 exited_at:{seconds:1757656515 nanos:872735361}" Sep 12 05:55:16.028634 containerd[1564]: time="2025-09-12T05:55:16.028227843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fccf6cb4c-4dj67,Uid:fc85a0ea-7f63-43e4-87c4-ba496006d159,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6\"" Sep 12 05:55:16.029420 containerd[1564]: time="2025-09-12T05:55:16.029366188Z" level=info msg="StartContainer for \"f482831ca1c6984f9888dd2197d50bf6267c898866872f2988fb8c30fdfb6c09\" returns successfully" Sep 12 05:55:16.095681 containerd[1564]: time="2025-09-12T05:55:16.095301610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fc85cc6-9jmd6,Uid:46f6b528-ef1f-4323-b258-9e68fca12d4d,Namespace:calico-apiserver,Attempt:0,}" Sep 12 05:55:16.095681 containerd[1564]: time="2025-09-12T05:55:16.095421806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58454d7cff-qnknh,Uid:8357436c-29f7-41cc-b362-07676225c053,Namespace:calico-system,Attempt:0,}" Sep 12 05:55:16.106265 kubelet[2721]: I0912 05:55:16.106218 2721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a900808d-da78-40eb-83db-2a5d679a55e8" path="/var/lib/kubelet/pods/a900808d-da78-40eb-83db-2a5d679a55e8/volumes" Sep 12 05:55:16.215498 kubelet[2721]: I0912 05:55:16.215438 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 05:55:16.217505 kubelet[2721]: E0912 05:55:16.217473 2721 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:55:16.602959 systemd-networkd[1475]: cali08c35443172: Link UP Sep 12 05:55:16.603616 systemd-networkd[1475]: cali08c35443172: Gained carrier Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.500 [INFO][4434] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.515 [INFO][4434] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0 calico-kube-controllers-58454d7cff- calico-system 8357436c-29f7-41cc-b362-07676225c053 817 0 2025-09-12 05:54:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58454d7cff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58454d7cff-qnknh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali08c35443172 [] [] }} ContainerID="e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" Namespace="calico-system" Pod="calico-kube-controllers-58454d7cff-qnknh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.515 [INFO][4434] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" Namespace="calico-system" Pod="calico-kube-controllers-58454d7cff-qnknh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.546 [INFO][4468] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" HandleID="k8s-pod-network.e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" Workload="localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.547 [INFO][4468] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" HandleID="k8s-pod-network.e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" Workload="localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58454d7cff-qnknh", "timestamp":"2025-09-12 05:55:16.546912715 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.547 [INFO][4468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.547 [INFO][4468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.547 [INFO][4468] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.554 [INFO][4468] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" host="localhost" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.569 [INFO][4468] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.576 [INFO][4468] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.578 [INFO][4468] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.580 [INFO][4468] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.580 [INFO][4468] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" host="localhost" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.581 [INFO][4468] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273 Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.585 [INFO][4468] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" host="localhost" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.594 [INFO][4468] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" host="localhost" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.594 [INFO][4468] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" host="localhost" Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.594 [INFO][4468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:55:16.623659 containerd[1564]: 2025-09-12 05:55:16.594 [INFO][4468] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" HandleID="k8s-pod-network.e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" Workload="localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0" Sep 12 05:55:16.624461 containerd[1564]: 2025-09-12 05:55:16.598 [INFO][4434] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" Namespace="calico-system" Pod="calico-kube-controllers-58454d7cff-qnknh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0", GenerateName:"calico-kube-controllers-58454d7cff-", Namespace:"calico-system", SelfLink:"", UID:"8357436c-29f7-41cc-b362-07676225c053", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58454d7cff", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58454d7cff-qnknh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08c35443172", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:16.624461 containerd[1564]: 2025-09-12 05:55:16.598 [INFO][4434] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" Namespace="calico-system" Pod="calico-kube-controllers-58454d7cff-qnknh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0" Sep 12 05:55:16.624461 containerd[1564]: 2025-09-12 05:55:16.598 [INFO][4434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08c35443172 ContainerID="e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" Namespace="calico-system" Pod="calico-kube-controllers-58454d7cff-qnknh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0" Sep 12 05:55:16.624461 containerd[1564]: 2025-09-12 05:55:16.608 [INFO][4434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" Namespace="calico-system" Pod="calico-kube-controllers-58454d7cff-qnknh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0" Sep 12 05:55:16.624461 containerd[1564]: 
2025-09-12 05:55:16.608 [INFO][4434] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" Namespace="calico-system" Pod="calico-kube-controllers-58454d7cff-qnknh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0", GenerateName:"calico-kube-controllers-58454d7cff-", Namespace:"calico-system", SelfLink:"", UID:"8357436c-29f7-41cc-b362-07676225c053", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58454d7cff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273", Pod:"calico-kube-controllers-58454d7cff-qnknh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08c35443172", MAC:"5a:85:36:3f:1a:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:16.624461 containerd[1564]: 
2025-09-12 05:55:16.619 [INFO][4434] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" Namespace="calico-system" Pod="calico-kube-controllers-58454d7cff-qnknh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58454d7cff--qnknh-eth0" Sep 12 05:55:16.650596 containerd[1564]: time="2025-09-12T05:55:16.650454837Z" level=info msg="connecting to shim e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273" address="unix:///run/containerd/s/651b3083ad06ae216e1aab689f4b999842b42ba876b2bfc88effae1402352c56" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:55:16.684231 systemd[1]: Started cri-containerd-e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273.scope - libcontainer container e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273. Sep 12 05:55:16.700643 systemd-networkd[1475]: calia47961d9bc6: Link UP Sep 12 05:55:16.700937 systemd-networkd[1475]: calia47961d9bc6: Gained carrier Sep 12 05:55:16.712670 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.499 [INFO][4436] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.513 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0 calico-apiserver-59fc85cc6- calico-apiserver 46f6b528-ef1f-4323-b258-9e68fca12d4d 815 0 2025-09-12 05:54:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59fc85cc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59fc85cc6-9jmd6 eth0 calico-apiserver [] 
[] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia47961d9bc6 [] [] }} ContainerID="c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-9jmd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.513 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-9jmd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.547 [INFO][4470] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" HandleID="k8s-pod-network.c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" Workload="localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.547 [INFO][4470] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" HandleID="k8s-pod-network.c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" Workload="localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59fc85cc6-9jmd6", "timestamp":"2025-09-12 05:55:16.547208934 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.547 [INFO][4470] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.594 [INFO][4470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.595 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.655 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" host="localhost" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.670 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.674 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.676 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.678 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.678 [INFO][4470] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" host="localhost" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.679 [INFO][4470] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.687 [INFO][4470] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" host="localhost" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 
05:55:16.693 [INFO][4470] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" host="localhost" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.693 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" host="localhost" Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.693 [INFO][4470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:55:16.718858 containerd[1564]: 2025-09-12 05:55:16.693 [INFO][4470] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" HandleID="k8s-pod-network.c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" Workload="localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0" Sep 12 05:55:16.719737 containerd[1564]: 2025-09-12 05:55:16.697 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-9jmd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0", GenerateName:"calico-apiserver-59fc85cc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"46f6b528-ef1f-4323-b258-9e68fca12d4d", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"59fc85cc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59fc85cc6-9jmd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia47961d9bc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:16.719737 containerd[1564]: 2025-09-12 05:55:16.697 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-9jmd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0" Sep 12 05:55:16.719737 containerd[1564]: 2025-09-12 05:55:16.697 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia47961d9bc6 ContainerID="c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-9jmd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0" Sep 12 05:55:16.719737 containerd[1564]: 2025-09-12 05:55:16.700 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-9jmd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0" Sep 12 05:55:16.719737 
containerd[1564]: 2025-09-12 05:55:16.701 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-9jmd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0", GenerateName:"calico-apiserver-59fc85cc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"46f6b528-ef1f-4323-b258-9e68fca12d4d", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59fc85cc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd", Pod:"calico-apiserver-59fc85cc6-9jmd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia47961d9bc6", MAC:"8e:da:f7:9e:02:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:16.719737 containerd[1564]: 2025-09-12 05:55:16.711 
[INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-9jmd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--9jmd6-eth0" Sep 12 05:55:16.747796 containerd[1564]: time="2025-09-12T05:55:16.747738170Z" level=info msg="connecting to shim c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd" address="unix:///run/containerd/s/ef480c4b535debeeab60b430f9702ccb5784a553f74708021c7998f083911290" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:55:16.748708 containerd[1564]: time="2025-09-12T05:55:16.748680817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58454d7cff-qnknh,Uid:8357436c-29f7-41cc-b362-07676225c053,Namespace:calico-system,Attempt:0,} returns sandbox id \"e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273\"" Sep 12 05:55:16.775721 systemd[1]: Started cri-containerd-c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd.scope - libcontainer container c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd. 
Sep 12 05:55:16.778161 kubelet[2721]: E0912 05:55:16.778139 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:55:16.783123 kubelet[2721]: E0912 05:55:16.783050 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:55:16.793631 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:55:16.794152 kubelet[2721]: I0912 05:55:16.792997 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kqnvn" podStartSLOduration=35.792981536 podStartE2EDuration="35.792981536s" podCreationTimestamp="2025-09-12 05:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:55:16.791370178 +0000 UTC m=+40.797822158" watchObservedRunningTime="2025-09-12 05:55:16.792981536 +0000 UTC m=+40.799433505" Sep 12 05:55:16.841419 containerd[1564]: time="2025-09-12T05:55:16.841362376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fc85cc6-9jmd6,Uid:46f6b528-ef1f-4323-b258-9e68fca12d4d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd\"" Sep 12 05:55:17.044336 systemd-networkd[1475]: vxlan.calico: Link UP Sep 12 05:55:17.044351 systemd-networkd[1475]: vxlan.calico: Gained carrier Sep 12 05:55:17.095066 containerd[1564]: time="2025-09-12T05:55:17.094941692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k69w8,Uid:856f1557-0f2c-4284-9c75-f29b82c7d285,Namespace:calico-system,Attempt:0,}" Sep 12 05:55:17.123950 systemd-networkd[1475]: calia7fbf29ec54: Gained IPv6LL Sep 12 
05:55:17.176439 systemd[1]: Started sshd@7-10.0.0.78:22-10.0.0.1:47698.service - OpenSSH per-connection server daemon (10.0.0.1:47698). Sep 12 05:55:17.223630 systemd-networkd[1475]: cali0c9da45af2d: Link UP Sep 12 05:55:17.223850 systemd-networkd[1475]: cali0c9da45af2d: Gained carrier Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.137 [INFO][4646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--k69w8-eth0 csi-node-driver- calico-system 856f1557-0f2c-4284-9c75-f29b82c7d285 696 0 2025-09-12 05:54:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-k69w8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0c9da45af2d [] [] }} ContainerID="817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" Namespace="calico-system" Pod="csi-node-driver-k69w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--k69w8-" Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.137 [INFO][4646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" Namespace="calico-system" Pod="csi-node-driver-k69w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--k69w8-eth0" Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.176 [INFO][4661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" HandleID="k8s-pod-network.817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" Workload="localhost-k8s-csi--node--driver--k69w8-eth0" Sep 12 05:55:17.256225 containerd[1564]: 
2025-09-12 05:55:17.176 [INFO][4661] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" HandleID="k8s-pod-network.817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" Workload="localhost-k8s-csi--node--driver--k69w8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000119610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-k69w8", "timestamp":"2025-09-12 05:55:17.176143973 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.176 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.176 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.176 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.186 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" host="localhost"
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.192 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.196 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.198 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.200 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.200 [INFO][4661] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" host="localhost"
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.205 [INFO][4661] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.210 [INFO][4661] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" host="localhost"
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.215 [INFO][4661] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" host="localhost"
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.215 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" host="localhost"
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.216 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 05:55:17.256225 containerd[1564]: 2025-09-12 05:55:17.216 [INFO][4661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" HandleID="k8s-pod-network.817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" Workload="localhost-k8s-csi--node--driver--k69w8-eth0"
Sep 12 05:55:17.257040 containerd[1564]: 2025-09-12 05:55:17.220 [INFO][4646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" Namespace="calico-system" Pod="csi-node-driver-k69w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--k69w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k69w8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"856f1557-0f2c-4284-9c75-f29b82c7d285", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-k69w8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0c9da45af2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 05:55:17.257040 containerd[1564]: 2025-09-12 05:55:17.220 [INFO][4646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" Namespace="calico-system" Pod="csi-node-driver-k69w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--k69w8-eth0"
Sep 12 05:55:17.257040 containerd[1564]: 2025-09-12 05:55:17.220 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c9da45af2d ContainerID="817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" Namespace="calico-system" Pod="csi-node-driver-k69w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--k69w8-eth0"
Sep 12 05:55:17.257040 containerd[1564]: 2025-09-12 05:55:17.224 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" Namespace="calico-system" Pod="csi-node-driver-k69w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--k69w8-eth0"
Sep 12 05:55:17.257040 containerd[1564]: 2025-09-12 05:55:17.224 [INFO][4646] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" Namespace="calico-system" Pod="csi-node-driver-k69w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--k69w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k69w8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"856f1557-0f2c-4284-9c75-f29b82c7d285", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0", Pod:"csi-node-driver-k69w8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0c9da45af2d", MAC:"4a:2d:a6:e6:8b:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 05:55:17.257040 containerd[1564]: 2025-09-12 05:55:17.243 [INFO][4646] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" Namespace="calico-system" Pod="csi-node-driver-k69w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--k69w8-eth0"
Sep 12 05:55:17.266014 sshd[4669]: Accepted publickey for core from 10.0.0.1 port 47698 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:55:17.267475 sshd-session[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:55:17.280269 systemd-logind[1543]: New session 8 of user core.
Sep 12 05:55:17.284728 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 05:55:17.305183 containerd[1564]: time="2025-09-12T05:55:17.305032554Z" level=info msg="connecting to shim 817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0" address="unix:///run/containerd/s/05c832e96be3c2ec70d60a11da432c8e76c2892c7ff5367035b3287becbc26c3" namespace=k8s.io protocol=ttrpc version=3
Sep 12 05:55:17.316737 systemd-networkd[1475]: cali9c206c7bb9c: Gained IPv6LL
Sep 12 05:55:17.336724 systemd[1]: Started cri-containerd-817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0.scope - libcontainer container 817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0.
Sep 12 05:55:17.356340 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 05:55:17.378755 systemd-networkd[1475]: cali7ea3c87f374: Gained IPv6LL
Sep 12 05:55:17.387173 containerd[1564]: time="2025-09-12T05:55:17.386841770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k69w8,Uid:856f1557-0f2c-4284-9c75-f29b82c7d285,Namespace:calico-system,Attempt:0,} returns sandbox id \"817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0\""
Sep 12 05:55:17.488592 sshd[4694]: Connection closed by 10.0.0.1 port 47698
Sep 12 05:55:17.489132 sshd-session[4669]: pam_unix(sshd:session): session closed for user core
Sep 12 05:55:17.495827 systemd[1]: sshd@7-10.0.0.78:22-10.0.0.1:47698.service: Deactivated successfully.
Sep 12 05:55:17.498668 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 05:55:17.500529 systemd-logind[1543]: Session 8 logged out. Waiting for processes to exit.
Sep 12 05:55:17.503190 systemd-logind[1543]: Removed session 8.
Sep 12 05:55:17.787582 kubelet[2721]: E0912 05:55:17.787527 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:55:18.094983 containerd[1564]: time="2025-09-12T05:55:18.094917611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fc85cc6-l7rwv,Uid:aa088531-a52d-404c-8722-a5eb6ff66bbc,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 05:55:18.261183 systemd-networkd[1475]: cali112f5b81ef2: Link UP
Sep 12 05:55:18.261670 systemd-networkd[1475]: cali112f5b81ef2: Gained carrier
Sep 12 05:55:18.338769 systemd-networkd[1475]: cali08c35443172: Gained IPv6LL
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.186 [INFO][4831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0 calico-apiserver-59fc85cc6- calico-apiserver aa088531-a52d-404c-8722-a5eb6ff66bbc 819 0 2025-09-12 05:54:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59fc85cc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59fc85cc6-l7rwv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali112f5b81ef2 [] [] }} ContainerID="f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-l7rwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.186 [INFO][4831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-l7rwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.217 [INFO][4845] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" HandleID="k8s-pod-network.f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" Workload="localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.217 [INFO][4845] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" HandleID="k8s-pod-network.f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" Workload="localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59fc85cc6-l7rwv", "timestamp":"2025-09-12 05:55:18.217361879 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.217 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.217 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.217 [INFO][4845] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.225 [INFO][4845] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" host="localhost"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.232 [INFO][4845] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.236 [INFO][4845] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.238 [INFO][4845] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.240 [INFO][4845] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.240 [INFO][4845] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" host="localhost"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.242 [INFO][4845] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.247 [INFO][4845] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" host="localhost"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.253 [INFO][4845] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" host="localhost"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.253 [INFO][4845] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" host="localhost"
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.253 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 05:55:18.419169 containerd[1564]: 2025-09-12 05:55:18.253 [INFO][4845] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" HandleID="k8s-pod-network.f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" Workload="localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0"
Sep 12 05:55:18.419972 containerd[1564]: 2025-09-12 05:55:18.258 [INFO][4831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-l7rwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0", GenerateName:"calico-apiserver-59fc85cc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa088531-a52d-404c-8722-a5eb6ff66bbc", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59fc85cc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59fc85cc6-l7rwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali112f5b81ef2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 05:55:18.419972 containerd[1564]: 2025-09-12 05:55:18.258 [INFO][4831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-l7rwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0"
Sep 12 05:55:18.419972 containerd[1564]: 2025-09-12 05:55:18.258 [INFO][4831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali112f5b81ef2 ContainerID="f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-l7rwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0"
Sep 12 05:55:18.419972 containerd[1564]: 2025-09-12 05:55:18.261 [INFO][4831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-l7rwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0"
Sep 12 05:55:18.419972 containerd[1564]: 2025-09-12 05:55:18.262 [INFO][4831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-l7rwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0", GenerateName:"calico-apiserver-59fc85cc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa088531-a52d-404c-8722-a5eb6ff66bbc", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59fc85cc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd", Pod:"calico-apiserver-59fc85cc6-l7rwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali112f5b81ef2", MAC:"4a:c4:90:39:51:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 05:55:18.419972 containerd[1564]: 2025-09-12 05:55:18.415 [INFO][4831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" Namespace="calico-apiserver" Pod="calico-apiserver-59fc85cc6-l7rwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fc85cc6--l7rwv-eth0"
Sep 12 05:55:18.462449 containerd[1564]: time="2025-09-12T05:55:18.462400779Z" level=info msg="connecting to shim f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd" address="unix:///run/containerd/s/271d43253a65b8d31ced195b37d71127cb8234a5c9fcb710f65cee68e5807a17" namespace=k8s.io protocol=ttrpc version=3
Sep 12 05:55:18.466775 systemd-networkd[1475]: calia47961d9bc6: Gained IPv6LL
Sep 12 05:55:18.501790 systemd[1]: Started cri-containerd-f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd.scope - libcontainer container f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd.
Sep 12 05:55:18.517252 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 05:55:18.670818 containerd[1564]: time="2025-09-12T05:55:18.670688788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fc85cc6-l7rwv,Uid:aa088531-a52d-404c-8722-a5eb6ff66bbc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd\""
Sep 12 05:55:18.786729 systemd-networkd[1475]: vxlan.calico: Gained IPv6LL
Sep 12 05:55:18.791953 kubelet[2721]: E0912 05:55:18.791925 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:55:18.804640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2081441804.mount: Deactivated successfully.
Sep 12 05:55:18.979852 systemd-networkd[1475]: cali0c9da45af2d: Gained IPv6LL
Sep 12 05:55:19.979162 containerd[1564]: time="2025-09-12T05:55:19.979091686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:19.983463 containerd[1564]: time="2025-09-12T05:55:19.983422277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 12 05:55:19.984821 containerd[1564]: time="2025-09-12T05:55:19.984778853Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:19.986960 containerd[1564]: time="2025-09-12T05:55:19.986928905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:19.987506 containerd[1564]: time="2025-09-12T05:55:19.987477699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.303013463s"
Sep 12 05:55:19.987560 containerd[1564]: time="2025-09-12T05:55:19.987508817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 12 05:55:19.989040 containerd[1564]: time="2025-09-12T05:55:19.988554147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 12 05:55:19.989922 containerd[1564]: time="2025-09-12T05:55:19.989874846Z" level=info msg="CreateContainer within sandbox \"657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 05:55:19.998833 containerd[1564]: time="2025-09-12T05:55:19.998785517Z" level=info msg="Container c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:55:20.003411 systemd-networkd[1475]: cali112f5b81ef2: Gained IPv6LL
Sep 12 05:55:20.115157 containerd[1564]: time="2025-09-12T05:55:20.115096462Z" level=info msg="CreateContainer within sandbox \"657664695955d866a2846cf421e441b6fd161bd2e00f03083f4b4281613e9a2e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac\""
Sep 12 05:55:20.115707 containerd[1564]: time="2025-09-12T05:55:20.115684470Z" level=info msg="StartContainer for \"c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac\""
Sep 12 05:55:20.117151 containerd[1564]: time="2025-09-12T05:55:20.117113122Z" level=info msg="connecting to shim c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac" address="unix:///run/containerd/s/b2546042ddd25c6883d5fc928c39b01f20eeeabd2ed27e92574f30f3069e760a" protocol=ttrpc version=3
Sep 12 05:55:20.143802 systemd[1]: Started cri-containerd-c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac.scope - libcontainer container c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac.
Sep 12 05:55:20.227421 containerd[1564]: time="2025-09-12T05:55:20.227361464Z" level=info msg="StartContainer for \"c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac\" returns successfully"
Sep 12 05:55:20.810125 kubelet[2721]: I0912 05:55:20.809997 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-h8thk" podStartSLOduration=24.504583524 podStartE2EDuration="28.809977303s" podCreationTimestamp="2025-09-12 05:54:52 +0000 UTC" firstStartedPulling="2025-09-12 05:55:15.683049149 +0000 UTC m=+39.689501128" lastFinishedPulling="2025-09-12 05:55:19.988442928 +0000 UTC m=+43.994894907" observedRunningTime="2025-09-12 05:55:20.809122773 +0000 UTC m=+44.815574752" watchObservedRunningTime="2025-09-12 05:55:20.809977303 +0000 UTC m=+44.816429282"
Sep 12 05:55:20.884779 containerd[1564]: time="2025-09-12T05:55:20.884723010Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac\" id:\"a4500ff55e6fd482a98a10e30508d0d8d16039a424cc423c2b127df032ddec43\" pid:4972 exit_status:1 exited_at:{seconds:1757656520 nanos:884256081}"
Sep 12 05:55:21.503602 containerd[1564]: time="2025-09-12T05:55:21.503531637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:21.504519 containerd[1564]: time="2025-09-12T05:55:21.504454335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 12 05:55:21.505633 containerd[1564]: time="2025-09-12T05:55:21.505606606Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:21.507988 containerd[1564]: time="2025-09-12T05:55:21.507957154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:21.508479 containerd[1564]: time="2025-09-12T05:55:21.508452137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.519853857s"
Sep 12 05:55:21.508520 containerd[1564]: time="2025-09-12T05:55:21.508482314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 12 05:55:21.509558 containerd[1564]: time="2025-09-12T05:55:21.509529778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 12 05:55:21.510697 containerd[1564]: time="2025-09-12T05:55:21.510672340Z" level=info msg="CreateContainer within sandbox \"3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 12 05:55:21.519405 containerd[1564]: time="2025-09-12T05:55:21.519361169Z" level=info msg="Container ff613955ef55e96509d2ceb864d9ec676e0a499759a6e88f72acedaf1bc32c0d: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:55:21.531362 containerd[1564]: time="2025-09-12T05:55:21.531272511Z" level=info msg="CreateContainer within sandbox \"3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ff613955ef55e96509d2ceb864d9ec676e0a499759a6e88f72acedaf1bc32c0d\""
Sep 12 05:55:21.532099 containerd[1564]: time="2025-09-12T05:55:21.532015511Z" level=info msg="StartContainer for \"ff613955ef55e96509d2ceb864d9ec676e0a499759a6e88f72acedaf1bc32c0d\""
Sep 12 05:55:21.533793 containerd[1564]: time="2025-09-12T05:55:21.533749277Z" level=info msg="connecting to shim ff613955ef55e96509d2ceb864d9ec676e0a499759a6e88f72acedaf1bc32c0d" address="unix:///run/containerd/s/448509883b7eb550626b1115146124221aa369e014e17d43b5c9a87f09aac6a7" protocol=ttrpc version=3
Sep 12 05:55:21.552722 systemd[1]: Started cri-containerd-ff613955ef55e96509d2ceb864d9ec676e0a499759a6e88f72acedaf1bc32c0d.scope - libcontainer container ff613955ef55e96509d2ceb864d9ec676e0a499759a6e88f72acedaf1bc32c0d.
Sep 12 05:55:21.817163 containerd[1564]: time="2025-09-12T05:55:21.816933877Z" level=info msg="StartContainer for \"ff613955ef55e96509d2ceb864d9ec676e0a499759a6e88f72acedaf1bc32c0d\" returns successfully"
Sep 12 05:55:21.908182 containerd[1564]: time="2025-09-12T05:55:21.908134592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac\" id:\"0e6cb96fd744cde4f76c74651ec59e932e0120419fbf8742687b0d8e7e7d9c5c\" pid:5033 exit_status:1 exited_at:{seconds:1757656521 nanos:907738286}"
Sep 12 05:55:22.502111 systemd[1]: Started sshd@8-10.0.0.78:22-10.0.0.1:41028.service - OpenSSH per-connection server daemon (10.0.0.1:41028).
Sep 12 05:55:22.558941 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 41028 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:55:22.560758 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:55:22.566524 systemd-logind[1543]: New session 9 of user core.
Sep 12 05:55:22.581720 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 05:55:22.718435 sshd[5056]: Connection closed by 10.0.0.1 port 41028
Sep 12 05:55:22.718824 sshd-session[5053]: pam_unix(sshd:session): session closed for user core
Sep 12 05:55:22.722899 systemd[1]: sshd@8-10.0.0.78:22-10.0.0.1:41028.service: Deactivated successfully.
Sep 12 05:55:22.724832 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 05:55:22.725676 systemd-logind[1543]: Session 9 logged out. Waiting for processes to exit.
Sep 12 05:55:22.727003 systemd-logind[1543]: Removed session 9.
Sep 12 05:55:25.228211 containerd[1564]: time="2025-09-12T05:55:25.228137837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:25.228965 containerd[1564]: time="2025-09-12T05:55:25.228917416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 12 05:55:25.230123 containerd[1564]: time="2025-09-12T05:55:25.230081197Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:25.232170 containerd[1564]: time="2025-09-12T05:55:25.232101543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:25.232636 containerd[1564]: time="2025-09-12T05:55:25.232605582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.723048634s"
Sep 12 05:55:25.232691 containerd[1564]: time="2025-09-12T05:55:25.232640067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 12 05:55:25.233745 containerd[1564]: time="2025-09-12T05:55:25.233699402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 05:55:25.242640 containerd[1564]: time="2025-09-12T05:55:25.242592913Z" level=info msg="CreateContainer within sandbox \"e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 05:55:25.251040 containerd[1564]: time="2025-09-12T05:55:25.251000147Z" level=info msg="Container 3bae0107e62bf7a894e3f66da2eb625d4c027136d5a1472dab1fb9e2d3343e82: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:55:25.261800 containerd[1564]: time="2025-09-12T05:55:25.261754843Z" level=info msg="CreateContainer within sandbox \"e638ad5fa97c13b948fd041ce90427cc0f0e5399f169d26b8c5e60c114dae273\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3bae0107e62bf7a894e3f66da2eb625d4c027136d5a1472dab1fb9e2d3343e82\""
Sep 12 05:55:25.262316 containerd[1564]: time="2025-09-12T05:55:25.262253803Z" level=info msg="StartContainer for \"3bae0107e62bf7a894e3f66da2eb625d4c027136d5a1472dab1fb9e2d3343e82\""
Sep 12 05:55:25.263453 containerd[1564]: time="2025-09-12T05:55:25.263415721Z" level=info msg="connecting to shim 3bae0107e62bf7a894e3f66da2eb625d4c027136d5a1472dab1fb9e2d3343e82" address="unix:///run/containerd/s/651b3083ad06ae216e1aab689f4b999842b42ba876b2bfc88effae1402352c56" protocol=ttrpc version=3
Sep 12 05:55:25.316725 systemd[1]: Started cri-containerd-3bae0107e62bf7a894e3f66da2eb625d4c027136d5a1472dab1fb9e2d3343e82.scope - libcontainer container 3bae0107e62bf7a894e3f66da2eb625d4c027136d5a1472dab1fb9e2d3343e82.
Sep 12 05:55:25.367327 containerd[1564]: time="2025-09-12T05:55:25.367281505Z" level=info msg="StartContainer for \"3bae0107e62bf7a894e3f66da2eb625d4c027136d5a1472dab1fb9e2d3343e82\" returns successfully"
Sep 12 05:55:26.036708 kubelet[2721]: I0912 05:55:26.036622 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58454d7cff-qnknh" podStartSLOduration=25.553254523 podStartE2EDuration="34.036602189s" podCreationTimestamp="2025-09-12 05:54:52 +0000 UTC" firstStartedPulling="2025-09-12 05:55:16.750099261 +0000 UTC m=+40.756551240" lastFinishedPulling="2025-09-12 05:55:25.233446927 +0000 UTC m=+49.239898906" observedRunningTime="2025-09-12 05:55:26.036132134 +0000 UTC m=+50.042584113" watchObservedRunningTime="2025-09-12 05:55:26.036602189 +0000 UTC m=+50.043054168"
Sep 12 05:55:26.830085 kubelet[2721]: I0912 05:55:26.830042 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 05:55:27.097335 containerd[1564]: time="2025-09-12T05:55:27.097288770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bae0107e62bf7a894e3f66da2eb625d4c027136d5a1472dab1fb9e2d3343e82\" id:\"ccb5984f073a313f4d86995d4de7868f8bef7ee92d520203983f292f77c50978\" pid:5137 exited_at:{seconds:1757656527 nanos:97001710}"
Sep 12 05:55:27.148109 containerd[1564]: time="2025-09-12T05:55:27.147978830Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bae0107e62bf7a894e3f66da2eb625d4c027136d5a1472dab1fb9e2d3343e82\" id:\"2fc37bc8c9eb20c14c30cecafe0ba57b8e6da116074ca7e341fbdac9fddfba1b\" pid:5160 exited_at:{seconds:1757656527 nanos:147559670}"
Sep 12 05:55:27.739702 systemd[1]: Started sshd@9-10.0.0.78:22-10.0.0.1:41034.service - OpenSSH per-connection server daemon (10.0.0.1:41034).
Sep 12 05:55:27.799538 sshd[5177]: Accepted publickey for core from 10.0.0.1 port 41034 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:55:27.801502 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:55:27.807700 systemd-logind[1543]: New session 10 of user core.
Sep 12 05:55:27.814739 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 05:55:28.094410 kubelet[2721]: E0912 05:55:28.094346 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:55:28.094959 containerd[1564]: time="2025-09-12T05:55:28.094852023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-82dfx,Uid:50e41295-2104-45bb-9ef8-36b5a981ae68,Namespace:kube-system,Attempt:0,}"
Sep 12 05:55:28.253971 sshd[5180]: Connection closed by 10.0.0.1 port 41034
Sep 12 05:55:28.254841 sshd-session[5177]: pam_unix(sshd:session): session closed for user core
Sep 12 05:55:28.259866 systemd[1]: sshd@9-10.0.0.78:22-10.0.0.1:41034.service: Deactivated successfully.
Sep 12 05:55:28.262545 systemd-logind[1543]: Session 10 logged out. Waiting for processes to exit.
Sep 12 05:55:28.264539 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 05:55:28.269466 systemd-logind[1543]: Removed session 10.
Sep 12 05:55:28.315269 systemd-networkd[1475]: cali3ebbff50a46: Link UP Sep 12 05:55:28.317864 systemd-networkd[1475]: cali3ebbff50a46: Gained carrier Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.237 [INFO][5200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0 coredns-7c65d6cfc9- kube-system 50e41295-2104-45bb-9ef8-36b5a981ae68 810 0 2025-09-12 05:54:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-82dfx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3ebbff50a46 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" Namespace="kube-system" Pod="coredns-7c65d6cfc9-82dfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--82dfx-" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.238 [INFO][5200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" Namespace="kube-system" Pod="coredns-7c65d6cfc9-82dfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.277 [INFO][5209] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" HandleID="k8s-pod-network.fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" Workload="localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.277 [INFO][5209] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" 
HandleID="k8s-pod-network.fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" Workload="localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7050), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-82dfx", "timestamp":"2025-09-12 05:55:28.277613084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.277 [INFO][5209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.277 [INFO][5209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.278 [INFO][5209] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.283 [INFO][5209] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" host="localhost" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.288 [INFO][5209] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.291 [INFO][5209] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.293 [INFO][5209] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.295 [INFO][5209] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.295 
[INFO][5209] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" host="localhost" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.297 [INFO][5209] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82 Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.302 [INFO][5209] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" host="localhost" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.307 [INFO][5209] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" host="localhost" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.308 [INFO][5209] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" host="localhost" Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.308 [INFO][5209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 05:55:28.337514 containerd[1564]: 2025-09-12 05:55:28.308 [INFO][5209] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" HandleID="k8s-pod-network.fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" Workload="localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0" Sep 12 05:55:28.338357 containerd[1564]: 2025-09-12 05:55:28.312 [INFO][5200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" Namespace="kube-system" Pod="coredns-7c65d6cfc9-82dfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"50e41295-2104-45bb-9ef8-36b5a981ae68", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-82dfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3ebbff50a46", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:28.338357 containerd[1564]: 2025-09-12 05:55:28.312 [INFO][5200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" Namespace="kube-system" Pod="coredns-7c65d6cfc9-82dfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0" Sep 12 05:55:28.338357 containerd[1564]: 2025-09-12 05:55:28.312 [INFO][5200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ebbff50a46 ContainerID="fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" Namespace="kube-system" Pod="coredns-7c65d6cfc9-82dfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0" Sep 12 05:55:28.338357 containerd[1564]: 2025-09-12 05:55:28.318 [INFO][5200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" Namespace="kube-system" Pod="coredns-7c65d6cfc9-82dfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0" Sep 12 05:55:28.338357 containerd[1564]: 2025-09-12 05:55:28.321 [INFO][5200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" Namespace="kube-system" Pod="coredns-7c65d6cfc9-82dfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"50e41295-2104-45bb-9ef8-36b5a981ae68", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82", Pod:"coredns-7c65d6cfc9-82dfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3ebbff50a46", MAC:"36:94:2a:2d:2b:4b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:55:28.338357 containerd[1564]: 2025-09-12 05:55:28.332 [INFO][5200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" Namespace="kube-system" Pod="coredns-7c65d6cfc9-82dfx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--82dfx-eth0" Sep 12 05:55:28.372699 containerd[1564]: time="2025-09-12T05:55:28.371596772Z" level=info msg="connecting to shim fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82" address="unix:///run/containerd/s/a064b9f2d04faae6cb4dff893419617f6e12a9b4b337907ba0ba5a4ff545d7b8" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:55:28.409754 systemd[1]: Started cri-containerd-fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82.scope - libcontainer container fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82. Sep 12 05:55:28.429030 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:55:28.541525 containerd[1564]: time="2025-09-12T05:55:28.541453054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-82dfx,Uid:50e41295-2104-45bb-9ef8-36b5a981ae68,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82\"" Sep 12 05:55:28.542744 kubelet[2721]: E0912 05:55:28.542706 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:55:28.545994 containerd[1564]: time="2025-09-12T05:55:28.545894877Z" level=info msg="CreateContainer within sandbox \"fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 05:55:29.208323 containerd[1564]: time="2025-09-12T05:55:29.208248915Z" level=info msg="Container 414ef68e73542cf7e51960b0e3c62aa8f78bbd489c60b0a411fd294d1d0a6e55: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:55:29.241441 containerd[1564]: time="2025-09-12T05:55:29.241370701Z" 
level=info msg="CreateContainer within sandbox \"fb855864efb5c4748bac029958200ab21d91d9e785341fdfa26cea8ecd373e82\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"414ef68e73542cf7e51960b0e3c62aa8f78bbd489c60b0a411fd294d1d0a6e55\"" Sep 12 05:55:29.242228 containerd[1564]: time="2025-09-12T05:55:29.242023771Z" level=info msg="StartContainer for \"414ef68e73542cf7e51960b0e3c62aa8f78bbd489c60b0a411fd294d1d0a6e55\"" Sep 12 05:55:29.242970 containerd[1564]: time="2025-09-12T05:55:29.242941971Z" level=info msg="connecting to shim 414ef68e73542cf7e51960b0e3c62aa8f78bbd489c60b0a411fd294d1d0a6e55" address="unix:///run/containerd/s/a064b9f2d04faae6cb4dff893419617f6e12a9b4b337907ba0ba5a4ff545d7b8" protocol=ttrpc version=3 Sep 12 05:55:29.257735 containerd[1564]: time="2025-09-12T05:55:29.257678790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:55:29.258580 containerd[1564]: time="2025-09-12T05:55:29.258460581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 05:55:29.260061 containerd[1564]: time="2025-09-12T05:55:29.260025308Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:55:29.262120 containerd[1564]: time="2025-09-12T05:55:29.262086259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:55:29.263127 containerd[1564]: time="2025-09-12T05:55:29.263103123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.029370077s" Sep 12 05:55:29.263242 containerd[1564]: time="2025-09-12T05:55:29.263207901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 05:55:29.266365 containerd[1564]: time="2025-09-12T05:55:29.266346681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 05:55:29.270951 containerd[1564]: time="2025-09-12T05:55:29.269848414Z" level=info msg="CreateContainer within sandbox \"c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 05:55:29.271811 systemd[1]: Started cri-containerd-414ef68e73542cf7e51960b0e3c62aa8f78bbd489c60b0a411fd294d1d0a6e55.scope - libcontainer container 414ef68e73542cf7e51960b0e3c62aa8f78bbd489c60b0a411fd294d1d0a6e55. 
Sep 12 05:55:29.288868 containerd[1564]: time="2025-09-12T05:55:29.288810800Z" level=info msg="Container 488ffc862087b1629b0a5c0f871dd7161ef9f17f4b699c1cc46cabe73909dcb3: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:55:29.443265 containerd[1564]: time="2025-09-12T05:55:29.443205730Z" level=info msg="StartContainer for \"414ef68e73542cf7e51960b0e3c62aa8f78bbd489c60b0a411fd294d1d0a6e55\" returns successfully"
Sep 12 05:55:29.452820 containerd[1564]: time="2025-09-12T05:55:29.452777704Z" level=info msg="CreateContainer within sandbox \"c0227b0765eb5d4eb297646c55ec63244f8df84705edbe5c914e97fbe26f52cd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"488ffc862087b1629b0a5c0f871dd7161ef9f17f4b699c1cc46cabe73909dcb3\""
Sep 12 05:55:29.453353 containerd[1564]: time="2025-09-12T05:55:29.453322650Z" level=info msg="StartContainer for \"488ffc862087b1629b0a5c0f871dd7161ef9f17f4b699c1cc46cabe73909dcb3\""
Sep 12 05:55:29.454640 containerd[1564]: time="2025-09-12T05:55:29.454604002Z" level=info msg="connecting to shim 488ffc862087b1629b0a5c0f871dd7161ef9f17f4b699c1cc46cabe73909dcb3" address="unix:///run/containerd/s/ef480c4b535debeeab60b430f9702ccb5784a553f74708021c7998f083911290" protocol=ttrpc version=3
Sep 12 05:55:29.478738 systemd[1]: Started cri-containerd-488ffc862087b1629b0a5c0f871dd7161ef9f17f4b699c1cc46cabe73909dcb3.scope - libcontainer container 488ffc862087b1629b0a5c0f871dd7161ef9f17f4b699c1cc46cabe73909dcb3.
Sep 12 05:55:29.537717 containerd[1564]: time="2025-09-12T05:55:29.537638072Z" level=info msg="StartContainer for \"488ffc862087b1629b0a5c0f871dd7161ef9f17f4b699c1cc46cabe73909dcb3\" returns successfully" Sep 12 05:55:29.845207 kubelet[2721]: E0912 05:55:29.845083 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:55:29.879655 kubelet[2721]: I0912 05:55:29.879552 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59fc85cc6-9jmd6" podStartSLOduration=28.456783436 podStartE2EDuration="40.879534341s" podCreationTimestamp="2025-09-12 05:54:49 +0000 UTC" firstStartedPulling="2025-09-12 05:55:16.84351874 +0000 UTC m=+40.849970719" lastFinishedPulling="2025-09-12 05:55:29.266269645 +0000 UTC m=+53.272721624" observedRunningTime="2025-09-12 05:55:29.87492404 +0000 UTC m=+53.881376019" watchObservedRunningTime="2025-09-12 05:55:29.879534341 +0000 UTC m=+53.885986320" Sep 12 05:55:30.370751 systemd-networkd[1475]: cali3ebbff50a46: Gained IPv6LL Sep 12 05:55:30.816914 kubelet[2721]: I0912 05:55:30.816733 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-82dfx" podStartSLOduration=49.816706682 podStartE2EDuration="49.816706682s" podCreationTimestamp="2025-09-12 05:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:55:29.897612482 +0000 UTC m=+53.904064461" watchObservedRunningTime="2025-09-12 05:55:30.816706682 +0000 UTC m=+54.823158661" Sep 12 05:55:31.024940 kubelet[2721]: E0912 05:55:31.024829 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:55:32.042219 containerd[1564]: 
time="2025-09-12T05:55:32.042166750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:55:32.043097 containerd[1564]: time="2025-09-12T05:55:32.043043400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 05:55:32.044989 containerd[1564]: time="2025-09-12T05:55:32.044963264Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:55:32.047686 containerd[1564]: time="2025-09-12T05:55:32.047626919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:55:32.048212 containerd[1564]: time="2025-09-12T05:55:32.048176204Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.781695018s" Sep 12 05:55:32.048212 containerd[1564]: time="2025-09-12T05:55:32.048203996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 05:55:32.049232 containerd[1564]: time="2025-09-12T05:55:32.049193419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 05:55:32.050333 containerd[1564]: time="2025-09-12T05:55:32.050299681Z" level=info msg="CreateContainer within sandbox \"817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 
05:55:32.061461 containerd[1564]: time="2025-09-12T05:55:32.061415158Z" level=info msg="Container 27d499ec0c2df808b4190a894e0c4f3313f0241efe5b9c03b5dac7407573b4fb: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:55:32.079182 containerd[1564]: time="2025-09-12T05:55:32.079144607Z" level=info msg="CreateContainer within sandbox \"817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"27d499ec0c2df808b4190a894e0c4f3313f0241efe5b9c03b5dac7407573b4fb\"" Sep 12 05:55:32.079810 containerd[1564]: time="2025-09-12T05:55:32.079780073Z" level=info msg="StartContainer for \"27d499ec0c2df808b4190a894e0c4f3313f0241efe5b9c03b5dac7407573b4fb\"" Sep 12 05:55:32.081246 containerd[1564]: time="2025-09-12T05:55:32.081223040Z" level=info msg="connecting to shim 27d499ec0c2df808b4190a894e0c4f3313f0241efe5b9c03b5dac7407573b4fb" address="unix:///run/containerd/s/05c832e96be3c2ec70d60a11da432c8e76c2892c7ff5367035b3287becbc26c3" protocol=ttrpc version=3 Sep 12 05:55:32.111703 systemd[1]: Started cri-containerd-27d499ec0c2df808b4190a894e0c4f3313f0241efe5b9c03b5dac7407573b4fb.scope - libcontainer container 27d499ec0c2df808b4190a894e0c4f3313f0241efe5b9c03b5dac7407573b4fb. 
Sep 12 05:55:32.178178 containerd[1564]: time="2025-09-12T05:55:32.178133502Z" level=info msg="StartContainer for \"27d499ec0c2df808b4190a894e0c4f3313f0241efe5b9c03b5dac7407573b4fb\" returns successfully" Sep 12 05:55:32.408896 containerd[1564]: time="2025-09-12T05:55:32.408847023Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:55:32.409646 containerd[1564]: time="2025-09-12T05:55:32.409621170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 05:55:32.411403 containerd[1564]: time="2025-09-12T05:55:32.411377196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 362.144003ms" Sep 12 05:55:32.411464 containerd[1564]: time="2025-09-12T05:55:32.411406611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 05:55:32.412948 containerd[1564]: time="2025-09-12T05:55:32.412747366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 05:55:32.414425 containerd[1564]: time="2025-09-12T05:55:32.414379719Z" level=info msg="CreateContainer within sandbox \"f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 05:55:32.423374 containerd[1564]: time="2025-09-12T05:55:32.423331934Z" level=info msg="Container 21d1eb96a672289ae48487f383a6b285eea0d005d4ef9c32d16517c115898ca1: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:55:32.430581 
containerd[1564]: time="2025-09-12T05:55:32.430529344Z" level=info msg="CreateContainer within sandbox \"f3002350c1a53d049f091497f756b90ed6b4ace7edd50827765d50363c8713dd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"21d1eb96a672289ae48487f383a6b285eea0d005d4ef9c32d16517c115898ca1\"" Sep 12 05:55:32.430983 containerd[1564]: time="2025-09-12T05:55:32.430963171Z" level=info msg="StartContainer for \"21d1eb96a672289ae48487f383a6b285eea0d005d4ef9c32d16517c115898ca1\"" Sep 12 05:55:32.432012 containerd[1564]: time="2025-09-12T05:55:32.431988912Z" level=info msg="connecting to shim 21d1eb96a672289ae48487f383a6b285eea0d005d4ef9c32d16517c115898ca1" address="unix:///run/containerd/s/271d43253a65b8d31ced195b37d71127cb8234a5c9fcb710f65cee68e5807a17" protocol=ttrpc version=3 Sep 12 05:55:32.459730 systemd[1]: Started cri-containerd-21d1eb96a672289ae48487f383a6b285eea0d005d4ef9c32d16517c115898ca1.scope - libcontainer container 21d1eb96a672289ae48487f383a6b285eea0d005d4ef9c32d16517c115898ca1. 
Sep 12 05:55:32.512209 containerd[1564]: time="2025-09-12T05:55:32.512096940Z" level=info msg="StartContainer for \"21d1eb96a672289ae48487f383a6b285eea0d005d4ef9c32d16517c115898ca1\" returns successfully"
Sep 12 05:55:32.908876 kubelet[2721]: I0912 05:55:32.908818 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59fc85cc6-l7rwv" podStartSLOduration=30.168528557 podStartE2EDuration="43.908796882s" podCreationTimestamp="2025-09-12 05:54:49 +0000 UTC" firstStartedPulling="2025-09-12 05:55:18.672049212 +0000 UTC m=+42.678501191" lastFinishedPulling="2025-09-12 05:55:32.412317537 +0000 UTC m=+56.418769516" observedRunningTime="2025-09-12 05:55:32.908520331 +0000 UTC m=+56.914972310" watchObservedRunningTime="2025-09-12 05:55:32.908796882 +0000 UTC m=+56.915248861"
Sep 12 05:55:33.268983 systemd[1]: Started sshd@10-10.0.0.78:22-10.0.0.1:34818.service - OpenSSH per-connection server daemon (10.0.0.1:34818).
Sep 12 05:55:33.339899 sshd[5439]: Accepted publickey for core from 10.0.0.1 port 34818 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:55:33.342136 sshd-session[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:55:33.347759 systemd-logind[1543]: New session 11 of user core.
Sep 12 05:55:33.357813 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 05:55:33.502679 sshd[5443]: Connection closed by 10.0.0.1 port 34818
Sep 12 05:55:33.503075 sshd-session[5439]: pam_unix(sshd:session): session closed for user core
Sep 12 05:55:33.512377 systemd[1]: sshd@10-10.0.0.78:22-10.0.0.1:34818.service: Deactivated successfully.
Sep 12 05:55:33.514338 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 05:55:33.515074 systemd-logind[1543]: Session 11 logged out. Waiting for processes to exit.
Sep 12 05:55:33.518703 systemd[1]: Started sshd@11-10.0.0.78:22-10.0.0.1:34820.service - OpenSSH per-connection server daemon (10.0.0.1:34820).
Sep 12 05:55:33.519691 systemd-logind[1543]: Removed session 11.
Sep 12 05:55:33.569661 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 34820 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:55:33.571392 sshd-session[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:55:33.576518 systemd-logind[1543]: New session 12 of user core.
Sep 12 05:55:33.582720 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 05:55:33.737831 sshd[5462]: Connection closed by 10.0.0.1 port 34820
Sep 12 05:55:33.739143 sshd-session[5459]: pam_unix(sshd:session): session closed for user core
Sep 12 05:55:33.757537 systemd[1]: sshd@11-10.0.0.78:22-10.0.0.1:34820.service: Deactivated successfully.
Sep 12 05:55:33.761227 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 05:55:33.764835 systemd-logind[1543]: Session 12 logged out. Waiting for processes to exit.
Sep 12 05:55:33.768225 systemd[1]: Started sshd@12-10.0.0.78:22-10.0.0.1:34834.service - OpenSSH per-connection server daemon (10.0.0.1:34834).
Sep 12 05:55:33.770198 systemd-logind[1543]: Removed session 12.
Sep 12 05:55:33.825787 sshd[5474]: Accepted publickey for core from 10.0.0.1 port 34834 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:55:33.827605 sshd-session[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:55:33.832896 systemd-logind[1543]: New session 13 of user core.
Sep 12 05:55:33.847704 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 05:55:33.905119 kubelet[2721]: I0912 05:55:33.905069 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 05:55:34.045822 sshd[5477]: Connection closed by 10.0.0.1 port 34834
Sep 12 05:55:34.046201 sshd-session[5474]: pam_unix(sshd:session): session closed for user core
Sep 12 05:55:34.052028 systemd[1]: sshd@12-10.0.0.78:22-10.0.0.1:34834.service: Deactivated successfully.
Sep 12 05:55:34.054865 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 05:55:34.055832 systemd-logind[1543]: Session 13 logged out. Waiting for processes to exit.
Sep 12 05:55:34.057651 systemd-logind[1543]: Removed session 13.
Sep 12 05:55:35.289496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656564091.mount: Deactivated successfully.
Sep 12 05:55:35.483597 containerd[1564]: time="2025-09-12T05:55:35.483455505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:35.484804 containerd[1564]: time="2025-09-12T05:55:35.484763798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 12 05:55:35.489470 containerd[1564]: time="2025-09-12T05:55:35.489401999Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:35.491860 containerd[1564]: time="2025-09-12T05:55:35.491829879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:35.492435 containerd[1564]: time="2025-09-12T05:55:35.492407246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.079620997s"
Sep 12 05:55:35.492491 containerd[1564]: time="2025-09-12T05:55:35.492438395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 12 05:55:35.494732 containerd[1564]: time="2025-09-12T05:55:35.494650369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 05:55:35.497634 containerd[1564]: time="2025-09-12T05:55:35.497598599Z" level=info msg="CreateContainer within sandbox \"3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 05:55:35.508802 containerd[1564]: time="2025-09-12T05:55:35.508759018Z" level=info msg="Container 16c28babfcdf259e78887e06a9bf88857c4d955ab960f1f58ffe2183211ff2bd: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:55:35.526523 containerd[1564]: time="2025-09-12T05:55:35.526406068Z" level=info msg="CreateContainer within sandbox \"3c1c6c94cbc9eae4ecde8658c86b0392db41a1571de4641da8dc6ef0283bc7c6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"16c28babfcdf259e78887e06a9bf88857c4d955ab960f1f58ffe2183211ff2bd\""
Sep 12 05:55:35.527687 containerd[1564]: time="2025-09-12T05:55:35.527639439Z" level=info msg="StartContainer for \"16c28babfcdf259e78887e06a9bf88857c4d955ab960f1f58ffe2183211ff2bd\""
Sep 12 05:55:35.538595 containerd[1564]: time="2025-09-12T05:55:35.538528408Z" level=info msg="connecting to shim 16c28babfcdf259e78887e06a9bf88857c4d955ab960f1f58ffe2183211ff2bd" address="unix:///run/containerd/s/448509883b7eb550626b1115146124221aa369e014e17d43b5c9a87f09aac6a7" protocol=ttrpc version=3
Sep 12 05:55:35.568793 systemd[1]: Started cri-containerd-16c28babfcdf259e78887e06a9bf88857c4d955ab960f1f58ffe2183211ff2bd.scope - libcontainer container 16c28babfcdf259e78887e06a9bf88857c4d955ab960f1f58ffe2183211ff2bd.
Sep 12 05:55:35.618986 containerd[1564]: time="2025-09-12T05:55:35.618773035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac\" id:\"8559cc948ee45f798a6805f2c15223600e5cce39f6c1030248d0471f9d5263b7\" pid:5510 exited_at:{seconds:1757656535 nanos:617745350}"
Sep 12 05:55:35.706437 containerd[1564]: time="2025-09-12T05:55:35.706392614Z" level=info msg="StartContainer for \"16c28babfcdf259e78887e06a9bf88857c4d955ab960f1f58ffe2183211ff2bd\" returns successfully"
Sep 12 05:55:37.076966 containerd[1564]: time="2025-09-12T05:55:37.076915357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bae0107e62bf7a894e3f66da2eb625d4c027136d5a1472dab1fb9e2d3343e82\" id:\"e388521e3c915ebb394397f9a99d6dc6f9b3249493339594b977eba9275ed11e\" pid:5573 exited_at:{seconds:1757656537 nanos:76600253}"
Sep 12 05:55:37.685787 containerd[1564]: time="2025-09-12T05:55:37.685712332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:37.687858 containerd[1564]: time="2025-09-12T05:55:37.687794401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 05:55:37.689818 containerd[1564]: time="2025-09-12T05:55:37.689743770Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:37.693311 containerd[1564]: time="2025-09-12T05:55:37.693233880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:55:37.693951 containerd[1564]: time="2025-09-12T05:55:37.693916034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.199230229s"
Sep 12 05:55:37.694046 containerd[1564]: time="2025-09-12T05:55:37.693952342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 05:55:37.696164 containerd[1564]: time="2025-09-12T05:55:37.696097581Z" level=info msg="CreateContainer within sandbox \"817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 05:55:37.705647 containerd[1564]: time="2025-09-12T05:55:37.705595930Z" level=info msg="Container bd1a13761d2ee9c7b8e06324619c7e8317765747b2a8dd39ef468ea17b4877f8: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:55:37.721828 containerd[1564]: time="2025-09-12T05:55:37.721763743Z" level=info msg="CreateContainer within sandbox \"817bb1f7c8830147be81194b2f80fa5ee88450708de09d5766ecd2a6077876c0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bd1a13761d2ee9c7b8e06324619c7e8317765747b2a8dd39ef468ea17b4877f8\""
Sep 12 05:55:37.722417 containerd[1564]: time="2025-09-12T05:55:37.722383299Z" level=info msg="StartContainer for \"bd1a13761d2ee9c7b8e06324619c7e8317765747b2a8dd39ef468ea17b4877f8\""
Sep 12 05:55:37.724378 containerd[1564]: time="2025-09-12T05:55:37.724333340Z" level=info msg="connecting to shim bd1a13761d2ee9c7b8e06324619c7e8317765747b2a8dd39ef468ea17b4877f8" address="unix:///run/containerd/s/05c832e96be3c2ec70d60a11da432c8e76c2892c7ff5367035b3287becbc26c3" protocol=ttrpc version=3
Sep 12 05:55:37.748738 systemd[1]: Started cri-containerd-bd1a13761d2ee9c7b8e06324619c7e8317765747b2a8dd39ef468ea17b4877f8.scope - libcontainer container bd1a13761d2ee9c7b8e06324619c7e8317765747b2a8dd39ef468ea17b4877f8.
Sep 12 05:55:37.794174 containerd[1564]: time="2025-09-12T05:55:37.794097689Z" level=info msg="StartContainer for \"bd1a13761d2ee9c7b8e06324619c7e8317765747b2a8dd39ef468ea17b4877f8\" returns successfully"
Sep 12 05:55:37.947856 kubelet[2721]: I0912 05:55:37.947451 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-fccf6cb4c-4dj67" podStartSLOduration=3.484039666 podStartE2EDuration="22.947429679s" podCreationTimestamp="2025-09-12 05:55:15 +0000 UTC" firstStartedPulling="2025-09-12 05:55:16.029994382 +0000 UTC m=+40.036446362" lastFinishedPulling="2025-09-12 05:55:35.493384396 +0000 UTC m=+59.499836375" observedRunningTime="2025-09-12 05:55:35.939358422 +0000 UTC m=+59.945810411" watchObservedRunningTime="2025-09-12 05:55:37.947429679 +0000 UTC m=+61.953881658"
Sep 12 05:55:38.174871 kubelet[2721]: I0912 05:55:38.174829 2721 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 05:55:38.174871 kubelet[2721]: I0912 05:55:38.174878 2721 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 05:55:39.061963 systemd[1]: Started sshd@13-10.0.0.78:22-10.0.0.1:34850.service - OpenSSH per-connection server daemon (10.0.0.1:34850).
Sep 12 05:55:39.129192 sshd[5633]: Accepted publickey for core from 10.0.0.1 port 34850 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:55:39.131122 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:55:39.137033 systemd-logind[1543]: New session 14 of user core.
Sep 12 05:55:39.149140 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 05:55:39.322449 sshd[5636]: Connection closed by 10.0.0.1 port 34850
Sep 12 05:55:39.322718 sshd-session[5633]: pam_unix(sshd:session): session closed for user core
Sep 12 05:55:39.327695 systemd[1]: sshd@13-10.0.0.78:22-10.0.0.1:34850.service: Deactivated successfully.
Sep 12 05:55:39.329854 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 05:55:39.330904 systemd-logind[1543]: Session 14 logged out. Waiting for processes to exit.
Sep 12 05:55:39.332300 systemd-logind[1543]: Removed session 14.
Sep 12 05:55:44.335969 systemd[1]: Started sshd@14-10.0.0.78:22-10.0.0.1:33456.service - OpenSSH per-connection server daemon (10.0.0.1:33456).
Sep 12 05:55:44.404245 sshd[5652]: Accepted publickey for core from 10.0.0.1 port 33456 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:55:44.406446 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:55:44.413892 systemd-logind[1543]: New session 15 of user core.
Sep 12 05:55:44.424974 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 05:55:44.543936 sshd[5655]: Connection closed by 10.0.0.1 port 33456
Sep 12 05:55:44.544313 sshd-session[5652]: pam_unix(sshd:session): session closed for user core
Sep 12 05:55:44.549227 systemd[1]: sshd@14-10.0.0.78:22-10.0.0.1:33456.service: Deactivated successfully.
Sep 12 05:55:44.551771 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 05:55:44.552797 systemd-logind[1543]: Session 15 logged out. Waiting for processes to exit.
Sep 12 05:55:44.554798 systemd-logind[1543]: Removed session 15.
Sep 12 05:55:44.609003 containerd[1564]: time="2025-09-12T05:55:44.608935688Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e41fe1433500302fc2b8bd1f9ab9ecfe9ccb716a3d9c847ea64291af4cb6a5b\" id:\"36a8e465df643a13a8cd2e9c65fcde8457c204a90f8bead81aac7394e0670421\" pid:5677 exited_at:{seconds:1757656544 nanos:608480178}"
Sep 12 05:55:44.629540 kubelet[2721]: I0912 05:55:44.629450 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k69w8" podStartSLOduration=32.324085257 podStartE2EDuration="52.629426322s" podCreationTimestamp="2025-09-12 05:54:52 +0000 UTC" firstStartedPulling="2025-09-12 05:55:17.389475013 +0000 UTC m=+41.395926992" lastFinishedPulling="2025-09-12 05:55:37.694816088 +0000 UTC m=+61.701268057" observedRunningTime="2025-09-12 05:55:37.947010209 +0000 UTC m=+61.953462198" watchObservedRunningTime="2025-09-12 05:55:44.629426322 +0000 UTC m=+68.635878301"
Sep 12 05:55:45.094259 kubelet[2721]: E0912 05:55:45.094209 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:55:45.746701 containerd[1564]: time="2025-09-12T05:55:45.746643653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac\" id:\"88b26207aeb73f9795d488cb4489bc10dd56c51015d71146885472e5fec3702b\" pid:5702 exited_at:{seconds:1757656545 nanos:746170119}"
Sep 12 05:55:49.566209 systemd[1]: Started sshd@15-10.0.0.78:22-10.0.0.1:33462.service - OpenSSH per-connection server daemon (10.0.0.1:33462).
Sep 12 05:55:49.647329 sshd[5721]: Accepted publickey for core from 10.0.0.1 port 33462 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:55:49.649116 sshd-session[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:55:49.662621 systemd-logind[1543]: New session 16 of user core.
Sep 12 05:55:49.667771 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 05:55:49.938009 sshd[5724]: Connection closed by 10.0.0.1 port 33462
Sep 12 05:55:49.938367 sshd-session[5721]: pam_unix(sshd:session): session closed for user core
Sep 12 05:55:49.943990 systemd[1]: sshd@15-10.0.0.78:22-10.0.0.1:33462.service: Deactivated successfully.
Sep 12 05:55:49.946306 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 05:55:49.947623 systemd-logind[1543]: Session 16 logged out. Waiting for processes to exit.
Sep 12 05:55:49.949208 systemd-logind[1543]: Removed session 16.
Sep 12 05:55:54.954823 systemd[1]: Started sshd@16-10.0.0.78:22-10.0.0.1:51556.service - OpenSSH per-connection server daemon (10.0.0.1:51556).
Sep 12 05:55:55.009190 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 51556 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:55:55.010873 sshd-session[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:55:55.016050 systemd-logind[1543]: New session 17 of user core.
Sep 12 05:55:55.022737 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 05:55:55.199259 sshd[5742]: Connection closed by 10.0.0.1 port 51556
Sep 12 05:55:55.199891 sshd-session[5739]: pam_unix(sshd:session): session closed for user core
Sep 12 05:55:55.204905 systemd[1]: sshd@16-10.0.0.78:22-10.0.0.1:51556.service: Deactivated successfully.
Sep 12 05:55:55.207521 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 05:55:55.209488 systemd-logind[1543]: Session 17 logged out. Waiting for processes to exit.
Sep 12 05:55:55.211956 systemd-logind[1543]: Removed session 17.
Sep 12 05:55:57.095368 containerd[1564]: time="2025-09-12T05:55:57.095292348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bae0107e62bf7a894e3f66da2eb625d4c027136d5a1472dab1fb9e2d3343e82\" id:\"50d3513c0b23eb7a3e7dd34da4325ea4393ec0217170653af509713c00f896ef\" pid:5766 exited_at:{seconds:1757656557 nanos:94975402}"
Sep 12 05:56:00.220260 systemd[1]: Started sshd@17-10.0.0.78:22-10.0.0.1:34406.service - OpenSSH per-connection server daemon (10.0.0.1:34406).
Sep 12 05:56:00.272751 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 34406 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:56:00.274360 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:56:00.279543 systemd-logind[1543]: New session 18 of user core.
Sep 12 05:56:00.295855 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 05:56:00.433236 sshd[5786]: Connection closed by 10.0.0.1 port 34406
Sep 12 05:56:00.433705 sshd-session[5783]: pam_unix(sshd:session): session closed for user core
Sep 12 05:56:00.442444 systemd[1]: sshd@17-10.0.0.78:22-10.0.0.1:34406.service: Deactivated successfully.
Sep 12 05:56:00.444552 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 05:56:00.445551 systemd-logind[1543]: Session 18 logged out. Waiting for processes to exit.
Sep 12 05:56:00.449982 systemd[1]: Started sshd@18-10.0.0.78:22-10.0.0.1:34416.service - OpenSSH per-connection server daemon (10.0.0.1:34416).
Sep 12 05:56:00.451454 systemd-logind[1543]: Removed session 18.
Sep 12 05:56:00.503694 sshd[5800]: Accepted publickey for core from 10.0.0.1 port 34416 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:56:00.505498 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:56:00.510216 systemd-logind[1543]: New session 19 of user core.
Sep 12 05:56:00.519861 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 05:56:01.435076 sshd[5803]: Connection closed by 10.0.0.1 port 34416
Sep 12 05:56:01.435637 sshd-session[5800]: pam_unix(sshd:session): session closed for user core
Sep 12 05:56:01.449505 systemd[1]: sshd@18-10.0.0.78:22-10.0.0.1:34416.service: Deactivated successfully.
Sep 12 05:56:01.451682 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 05:56:01.452545 systemd-logind[1543]: Session 19 logged out. Waiting for processes to exit.
Sep 12 05:56:01.456292 systemd[1]: Started sshd@19-10.0.0.78:22-10.0.0.1:34424.service - OpenSSH per-connection server daemon (10.0.0.1:34424).
Sep 12 05:56:01.457981 systemd-logind[1543]: Removed session 19.
Sep 12 05:56:01.512358 sshd[5814]: Accepted publickey for core from 10.0.0.1 port 34424 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:56:01.514249 sshd-session[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:56:01.519051 systemd-logind[1543]: New session 20 of user core.
Sep 12 05:56:01.532741 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 05:56:02.038596 kubelet[2721]: I0912 05:56:02.038191 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 05:56:02.094584 kubelet[2721]: E0912 05:56:02.094532 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:56:03.174395 sshd[5817]: Connection closed by 10.0.0.1 port 34424
Sep 12 05:56:03.175122 sshd-session[5814]: pam_unix(sshd:session): session closed for user core
Sep 12 05:56:03.190205 systemd[1]: Started sshd@20-10.0.0.78:22-10.0.0.1:34436.service - OpenSSH per-connection server daemon (10.0.0.1:34436).
Sep 12 05:56:03.190775 systemd[1]: sshd@19-10.0.0.78:22-10.0.0.1:34424.service: Deactivated successfully.
Sep 12 05:56:03.199274 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 05:56:03.200463 systemd[1]: session-20.scope: Consumed 630ms CPU time, 73.1M memory peak.
Sep 12 05:56:03.203267 systemd-logind[1543]: Session 20 logged out. Waiting for processes to exit.
Sep 12 05:56:03.206854 systemd-logind[1543]: Removed session 20.
Sep 12 05:56:03.267225 sshd[5840]: Accepted publickey for core from 10.0.0.1 port 34436 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:56:03.268670 sshd-session[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:56:03.274361 systemd-logind[1543]: New session 21 of user core.
Sep 12 05:56:03.282791 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 05:56:03.682227 sshd[5847]: Connection closed by 10.0.0.1 port 34436
Sep 12 05:56:03.682716 sshd-session[5840]: pam_unix(sshd:session): session closed for user core
Sep 12 05:56:03.693558 systemd[1]: sshd@20-10.0.0.78:22-10.0.0.1:34436.service: Deactivated successfully.
Sep 12 05:56:03.695938 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 05:56:03.697275 systemd-logind[1543]: Session 21 logged out. Waiting for processes to exit.
Sep 12 05:56:03.701020 systemd[1]: Started sshd@21-10.0.0.78:22-10.0.0.1:34440.service - OpenSSH per-connection server daemon (10.0.0.1:34440).
Sep 12 05:56:03.702606 systemd-logind[1543]: Removed session 21.
Sep 12 05:56:03.758056 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 34440 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:56:03.759944 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:56:03.764816 systemd-logind[1543]: New session 22 of user core.
Sep 12 05:56:03.774726 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 05:56:03.883058 sshd[5861]: Connection closed by 10.0.0.1 port 34440
Sep 12 05:56:03.883433 sshd-session[5858]: pam_unix(sshd:session): session closed for user core
Sep 12 05:56:03.888667 systemd[1]: sshd@21-10.0.0.78:22-10.0.0.1:34440.service: Deactivated successfully.
Sep 12 05:56:03.891189 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 05:56:03.892128 systemd-logind[1543]: Session 22 logged out. Waiting for processes to exit.
Sep 12 05:56:03.893521 systemd-logind[1543]: Removed session 22.
Sep 12 05:56:05.623824 containerd[1564]: time="2025-09-12T05:56:05.623743886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c21a456e7c9137bbdff2032b4a02999fbbe944360dc34a6bd70ce76182ae08ac\" id:\"c82238fc26d4a39f3c75a2948b6c766120d94951675fdc0246e30931531e50cf\" pid:5885 exited_at:{seconds:1757656565 nanos:623260643}"
Sep 12 05:56:08.094544 kubelet[2721]: E0912 05:56:08.094492 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:56:08.896542 systemd[1]: Started sshd@22-10.0.0.78:22-10.0.0.1:34444.service - OpenSSH per-connection server daemon (10.0.0.1:34444).
Sep 12 05:56:08.938010 sshd[5902]: Accepted publickey for core from 10.0.0.1 port 34444 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:56:08.939436 sshd-session[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:56:08.943622 systemd-logind[1543]: New session 23 of user core.
Sep 12 05:56:08.953769 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 05:56:09.066505 sshd[5905]: Connection closed by 10.0.0.1 port 34444
Sep 12 05:56:09.066874 sshd-session[5902]: pam_unix(sshd:session): session closed for user core
Sep 12 05:56:09.071145 systemd[1]: sshd@22-10.0.0.78:22-10.0.0.1:34444.service: Deactivated successfully.
Sep 12 05:56:09.073674 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 05:56:09.074674 systemd-logind[1543]: Session 23 logged out. Waiting for processes to exit.
Sep 12 05:56:09.076045 systemd-logind[1543]: Removed session 23.
Sep 12 05:56:10.094884 kubelet[2721]: E0912 05:56:10.094839 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:56:14.079634 systemd[1]: Started sshd@23-10.0.0.78:22-10.0.0.1:55208.service - OpenSSH per-connection server daemon (10.0.0.1:55208).
Sep 12 05:56:14.157697 sshd[5920]: Accepted publickey for core from 10.0.0.1 port 55208 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:56:14.160450 sshd-session[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:56:14.174886 systemd-logind[1543]: New session 24 of user core.
Sep 12 05:56:14.181836 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 05:56:14.311817 sshd[5923]: Connection closed by 10.0.0.1 port 55208
Sep 12 05:56:14.312168 sshd-session[5920]: pam_unix(sshd:session): session closed for user core
Sep 12 05:56:14.317191 systemd-logind[1543]: Session 24 logged out. Waiting for processes to exit.
Sep 12 05:56:14.317898 systemd[1]: sshd@23-10.0.0.78:22-10.0.0.1:55208.service: Deactivated successfully.
Sep 12 05:56:14.320157 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 05:56:14.322650 systemd-logind[1543]: Removed session 24.
Sep 12 05:56:14.623225 containerd[1564]: time="2025-09-12T05:56:14.623168616Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e41fe1433500302fc2b8bd1f9ab9ecfe9ccb716a3d9c847ea64291af4cb6a5b\" id:\"cdc14bf0fc452d27fb2b79e17713c50d6bd61c4fc318c06dc73f97f6cbe39065\" pid:5947 exited_at:{seconds:1757656574 nanos:622870910}"
Sep 12 05:56:17.094606 kubelet[2721]: E0912 05:56:17.094526 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:56:19.329149 systemd[1]: Started sshd@24-10.0.0.78:22-10.0.0.1:55212.service - OpenSSH per-connection server daemon (10.0.0.1:55212).
Sep 12 05:56:19.394188 sshd[5960]: Accepted publickey for core from 10.0.0.1 port 55212 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:56:19.396006 sshd-session[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:56:19.403583 systemd-logind[1543]: New session 25 of user core.
Sep 12 05:56:19.406737 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 05:56:19.749588 sshd[5963]: Connection closed by 10.0.0.1 port 55212
Sep 12 05:56:19.750098 sshd-session[5960]: pam_unix(sshd:session): session closed for user core
Sep 12 05:56:19.755823 systemd[1]: sshd@24-10.0.0.78:22-10.0.0.1:55212.service: Deactivated successfully.
Sep 12 05:56:19.758506 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 05:56:19.759388 systemd-logind[1543]: Session 25 logged out. Waiting for processes to exit.
Sep 12 05:56:19.760716 systemd-logind[1543]: Removed session 25.
Sep 12 05:56:21.093917 kubelet[2721]: E0912 05:56:21.093871 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:56:24.552617 systemd[1]: Started sshd@25-10.0.0.78:22-10.0.0.1:52882.service - OpenSSH per-connection server daemon (10.0.0.1:52882).
Sep 12 05:56:24.600812 sshd[5976]: Accepted publickey for core from 10.0.0.1 port 52882 ssh2: RSA SHA256:1ltQeNwsGZzWPatjm39NOSOhM7BVT7DhGn6/LONO9qE
Sep 12 05:56:24.602881 sshd-session[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:56:24.608961 systemd-logind[1543]: New session 26 of user core.
Sep 12 05:56:24.618780 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 05:56:24.738076 sshd[5979]: Connection closed by 10.0.0.1 port 52882
Sep 12 05:56:24.738442 sshd-session[5976]: pam_unix(sshd:session): session closed for user core
Sep 12 05:56:24.742056 systemd[1]: sshd@25-10.0.0.78:22-10.0.0.1:52882.service: Deactivated successfully.
Sep 12 05:56:24.744238 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 05:56:24.745106 systemd-logind[1543]: Session 26 logged out. Waiting for processes to exit.
Sep 12 05:56:24.747191 systemd-logind[1543]: Removed session 26.