Sep 4 00:02:55.792685 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 3 22:05:39 -00 2025
Sep 4 00:02:55.792716 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:02:55.792728 kernel: BIOS-provided physical RAM map:
Sep 4 00:02:55.792737 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 00:02:55.792745 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 00:02:55.792753 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 00:02:55.792763 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 4 00:02:55.792775 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 4 00:02:55.792783 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 4 00:02:55.792817 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 4 00:02:55.792826 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 00:02:55.792835 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 00:02:55.792843 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 00:02:55.792853 kernel: NX (Execute Disable) protection: active
Sep 4 00:02:55.792867 kernel: APIC: Static calls initialized
Sep 4 00:02:55.792877 kernel: SMBIOS 2.8 present.
Sep 4 00:02:55.792887 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 4 00:02:55.792896 kernel: DMI: Memory slots populated: 1/1
Sep 4 00:02:55.792905 kernel: Hypervisor detected: KVM
Sep 4 00:02:55.792915 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 00:02:55.792924 kernel: kvm-clock: using sched offset of 5942517239 cycles
Sep 4 00:02:55.792935 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 00:02:55.792945 kernel: tsc: Detected 2794.750 MHz processor
Sep 4 00:02:55.792958 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 00:02:55.792968 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 00:02:55.792978 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 4 00:02:55.792988 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 00:02:55.792998 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 00:02:55.793007 kernel: Using GB pages for direct mapping
Sep 4 00:02:55.793017 kernel: ACPI: Early table checksum verification disabled
Sep 4 00:02:55.793026 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 4 00:02:55.793036 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:02:55.793049 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:02:55.793059 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:02:55.793068 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 4 00:02:55.793078 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:02:55.793088 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:02:55.793097 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:02:55.793107 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 00:02:55.793117 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 4 00:02:55.793134 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 4 00:02:55.793144 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 4 00:02:55.793154 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 4 00:02:55.793164 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 4 00:02:55.793174 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 4 00:02:55.793184 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 4 00:02:55.793197 kernel: No NUMA configuration found
Sep 4 00:02:55.793218 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 4 00:02:55.793228 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 4 00:02:55.793238 kernel: Zone ranges:
Sep 4 00:02:55.793248 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 00:02:55.793258 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 4 00:02:55.793268 kernel: Normal empty
Sep 4 00:02:55.793277 kernel: Device empty
Sep 4 00:02:55.793287 kernel: Movable zone start for each node
Sep 4 00:02:55.793300 kernel: Early memory node ranges
Sep 4 00:02:55.793310 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 00:02:55.793320 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 4 00:02:55.793330 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 4 00:02:55.793340 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 00:02:55.793350 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 00:02:55.793360 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 4 00:02:55.793369 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 00:02:55.793379 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 00:02:55.793392 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 00:02:55.793402 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 00:02:55.793412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 00:02:55.793422 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 00:02:55.793433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 00:02:55.793444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 00:02:55.793456 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 00:02:55.793466 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 00:02:55.793476 kernel: TSC deadline timer available
Sep 4 00:02:55.793486 kernel: CPU topo: Max. logical packages: 1
Sep 4 00:02:55.793499 kernel: CPU topo: Max. logical dies: 1
Sep 4 00:02:55.793509 kernel: CPU topo: Max. dies per package: 1
Sep 4 00:02:55.793518 kernel: CPU topo: Max. threads per core: 1
Sep 4 00:02:55.793528 kernel: CPU topo: Num. cores per package: 4
Sep 4 00:02:55.793538 kernel: CPU topo: Num. threads per package: 4
Sep 4 00:02:55.793548 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 4 00:02:55.793558 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 00:02:55.793567 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 00:02:55.793577 kernel: kvm-guest: setup PV sched yield
Sep 4 00:02:55.793590 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 4 00:02:55.793600 kernel: Booting paravirtualized kernel on KVM
Sep 4 00:02:55.793610 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 00:02:55.793620 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 00:02:55.793630 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 4 00:02:55.793640 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 4 00:02:55.793649 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 00:02:55.793659 kernel: kvm-guest: PV spinlocks enabled
Sep 4 00:02:55.793668 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 00:02:55.793681 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:02:55.793691 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 00:02:55.793701 kernel: random: crng init done
Sep 4 00:02:55.793711 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 00:02:55.793721 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 00:02:55.793731 kernel: Fallback order for Node 0: 0
Sep 4 00:02:55.793741 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 4 00:02:55.793751 kernel: Policy zone: DMA32
Sep 4 00:02:55.793763 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 00:02:55.793773 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 00:02:55.793783 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 4 00:02:55.793815 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 00:02:55.793825 kernel: Dynamic Preempt: voluntary
Sep 4 00:02:55.793835 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 00:02:55.793846 kernel: rcu: RCU event tracing is enabled.
Sep 4 00:02:55.793856 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 00:02:55.793866 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 00:02:55.793880 kernel: Rude variant of Tasks RCU enabled.
Sep 4 00:02:55.793889 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 00:02:55.793899 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 00:02:55.793909 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 00:02:55.793919 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 00:02:55.793930 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 00:02:55.793940 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 00:02:55.793950 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 00:02:55.793960 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 00:02:55.793980 kernel: Console: colour VGA+ 80x25
Sep 4 00:02:55.793991 kernel: printk: legacy console [ttyS0] enabled
Sep 4 00:02:55.794001 kernel: ACPI: Core revision 20240827
Sep 4 00:02:55.794014 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 00:02:55.794024 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 00:02:55.794034 kernel: x2apic enabled
Sep 4 00:02:55.794045 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 00:02:55.794055 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 00:02:55.794066 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 00:02:55.794078 kernel: kvm-guest: setup PV IPIs
Sep 4 00:02:55.794088 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 00:02:55.794099 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 00:02:55.794110 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 4 00:02:55.794120 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 00:02:55.794131 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 00:02:55.794141 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 00:02:55.794151 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 00:02:55.794165 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 00:02:55.794175 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 00:02:55.794185 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 00:02:55.794195 kernel: active return thunk: retbleed_return_thunk
Sep 4 00:02:55.794216 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 00:02:55.794227 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 00:02:55.794237 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 00:02:55.794248 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 00:02:55.794261 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 00:02:55.794271 kernel: active return thunk: srso_return_thunk
Sep 4 00:02:55.794282 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 00:02:55.794292 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 00:02:55.794303 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 00:02:55.794313 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 00:02:55.794322 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 00:02:55.794329 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 00:02:55.794338 kernel: Freeing SMP alternatives memory: 32K
Sep 4 00:02:55.794347 kernel: pid_max: default: 32768 minimum: 301
Sep 4 00:02:55.794355 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 00:02:55.794363 kernel: landlock: Up and running.
Sep 4 00:02:55.794370 kernel: SELinux: Initializing.
Sep 4 00:02:55.794378 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 00:02:55.794386 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 00:02:55.794394 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 00:02:55.794401 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 00:02:55.794409 kernel: ... version: 0
Sep 4 00:02:55.794418 kernel: ... bit width: 48
Sep 4 00:02:55.794426 kernel: ... generic registers: 6
Sep 4 00:02:55.794434 kernel: ... value mask: 0000ffffffffffff
Sep 4 00:02:55.794441 kernel: ... max period: 00007fffffffffff
Sep 4 00:02:55.794449 kernel: ... fixed-purpose events: 0
Sep 4 00:02:55.794457 kernel: ... event mask: 000000000000003f
Sep 4 00:02:55.794464 kernel: signal: max sigframe size: 1776
Sep 4 00:02:55.794472 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 00:02:55.794480 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 00:02:55.794489 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 00:02:55.794497 kernel: smp: Bringing up secondary CPUs ...
Sep 4 00:02:55.794506 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 00:02:55.794515 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 00:02:55.794523 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 00:02:55.794532 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 4 00:02:55.794541 kernel: Memory: 2430964K/2571752K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 134856K reserved, 0K cma-reserved)
Sep 4 00:02:55.794550 kernel: devtmpfs: initialized
Sep 4 00:02:55.794559 kernel: x86/mm: Memory block size: 128MB
Sep 4 00:02:55.794569 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 00:02:55.794577 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 00:02:55.794585 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 00:02:55.794593 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 00:02:55.794600 kernel: audit: initializing netlink subsys (disabled)
Sep 4 00:02:55.794608 kernel: audit: type=2000 audit(1756944172.526:1): state=initialized audit_enabled=0 res=1
Sep 4 00:02:55.794616 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 00:02:55.794624 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 00:02:55.794631 kernel: cpuidle: using governor menu
Sep 4 00:02:55.794641 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 00:02:55.794648 kernel: dca service started, version 1.12.1
Sep 4 00:02:55.794656 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 4 00:02:55.794664 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 4 00:02:55.794672 kernel: PCI: Using configuration type 1 for base access
Sep 4 00:02:55.794680 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 00:02:55.794688 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 00:02:55.794696 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 00:02:55.794703 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 00:02:55.794713 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 00:02:55.794721 kernel: ACPI: Added _OSI(Module Device)
Sep 4 00:02:55.794728 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 00:02:55.794736 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 00:02:55.794744 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 00:02:55.794751 kernel: ACPI: Interpreter enabled
Sep 4 00:02:55.794759 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 00:02:55.794767 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 00:02:55.794775 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 00:02:55.794870 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 00:02:55.794886 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 00:02:55.794898 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 00:02:55.795150 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 00:02:55.795349 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 00:02:55.795504 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 00:02:55.795520 kernel: PCI host bridge to bus 0000:00
Sep 4 00:02:55.795689 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 00:02:55.795963 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 00:02:55.796127 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 00:02:55.796285 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 4 00:02:55.797069 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 4 00:02:55.797222 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 4 00:02:55.797360 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 00:02:55.797614 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 4 00:02:55.797815 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 4 00:02:55.797988 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 4 00:02:55.798144 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 4 00:02:55.798309 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 4 00:02:55.798468 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 00:02:55.798641 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 4 00:02:55.798923 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 4 00:02:55.799103 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 4 00:02:55.799271 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 4 00:02:55.799434 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 4 00:02:55.799582 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 4 00:02:55.799729 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 4 00:02:55.799900 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 4 00:02:55.800062 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 4 00:02:55.800227 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 4 00:02:55.800382 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 4 00:02:55.800536 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 4 00:02:55.800689 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 4 00:02:55.800879 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 4 00:02:55.801045 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 00:02:55.801225 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 4 00:02:55.801424 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 4 00:02:55.801578 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 4 00:02:55.801740 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 4 00:02:55.801944 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 4 00:02:55.801966 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 00:02:55.801977 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 00:02:55.801988 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 00:02:55.801999 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 00:02:55.802010 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 00:02:55.802020 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 00:02:55.802031 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 00:02:55.802041 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 00:02:55.802052 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 00:02:55.802065 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 00:02:55.802075 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 00:02:55.802086 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 00:02:55.802097 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 00:02:55.802107 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 00:02:55.802118 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 00:02:55.802128 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 00:02:55.802139 kernel: iommu: Default domain type: Translated
Sep 4 00:02:55.802149 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 00:02:55.802162 kernel: PCI: Using ACPI for IRQ routing
Sep 4 00:02:55.802173 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 00:02:55.802183 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 00:02:55.802194 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 4 00:02:55.802385 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 00:02:55.802532 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 00:02:55.802670 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 00:02:55.802685 kernel: vgaarb: loaded
Sep 4 00:02:55.802695 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 00:02:55.802710 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 00:02:55.802721 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 00:02:55.802732 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 00:02:55.802743 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 00:02:55.802754 kernel: pnp: PnP ACPI init
Sep 4 00:02:55.802951 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 4 00:02:55.802969 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 00:02:55.802980 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 00:02:55.802995 kernel: NET: Registered PF_INET protocol family
Sep 4 00:02:55.803005 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 00:02:55.803016 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 00:02:55.803027 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 00:02:55.803038 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 00:02:55.803049 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 00:02:55.803060 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 00:02:55.803071 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 00:02:55.803081 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 00:02:55.803095 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 00:02:55.803106 kernel: NET: Registered PF_XDP protocol family
Sep 4 00:02:55.803247 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 00:02:55.803377 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 00:02:55.803511 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 00:02:55.803643 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 4 00:02:55.803774 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 4 00:02:55.803963 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 4 00:02:55.803983 kernel: PCI: CLS 0 bytes, default 64
Sep 4 00:02:55.803994 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 00:02:55.804005 kernel: Initialise system trusted keyrings
Sep 4 00:02:55.804016 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 00:02:55.804027 kernel: Key type asymmetric registered
Sep 4 00:02:55.804037 kernel: Asymmetric key parser 'x509' registered
Sep 4 00:02:55.804048 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 00:02:55.804058 kernel: io scheduler mq-deadline registered
Sep 4 00:02:55.804069 kernel: io scheduler kyber registered
Sep 4 00:02:55.804082 kernel: io scheduler bfq registered
Sep 4 00:02:55.804093 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 00:02:55.804105 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 00:02:55.804116 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 00:02:55.804126 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 00:02:55.804137 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 00:02:55.804148 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 00:02:55.804158 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 00:02:55.804169 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 00:02:55.804182 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 00:02:55.804349 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 00:02:55.804363 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 00:02:55.804472 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 00:02:55.804589 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T00:02:55 UTC (1756944175)
Sep 4 00:02:55.804698 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 4 00:02:55.804708 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 00:02:55.804716 kernel: NET: Registered PF_INET6 protocol family
Sep 4 00:02:55.804728 kernel: Segment Routing with IPv6
Sep 4 00:02:55.804736 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 00:02:55.804743 kernel: NET: Registered PF_PACKET protocol family
Sep 4 00:02:55.804751 kernel: Key type dns_resolver registered
Sep 4 00:02:55.804759 kernel: IPI shorthand broadcast: enabled
Sep 4 00:02:55.804767 kernel: sched_clock: Marking stable (3013002238, 144783625)->(3263923906, -106138043)
Sep 4 00:02:55.804775 kernel: registered taskstats version 1
Sep 4 00:02:55.804800 kernel: Loading compiled-in X.509 certificates
Sep 4 00:02:55.804813 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 247a8159a15e16f8eb89737aa66cd9cf9bbb3c10'
Sep 4 00:02:55.804827 kernel: Demotion targets for Node 0: null
Sep 4 00:02:55.804837 kernel: Key type .fscrypt registered
Sep 4 00:02:55.804848 kernel: Key type fscrypt-provisioning registered
Sep 4 00:02:55.804858 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 00:02:55.804869 kernel: ima: Allocated hash algorithm: sha1
Sep 4 00:02:55.804879 kernel: ima: No architecture policies found
Sep 4 00:02:55.804886 kernel: clk: Disabling unused clocks
Sep 4 00:02:55.804895 kernel: Warning: unable to open an initial console.
Sep 4 00:02:55.804903 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 4 00:02:55.804913 kernel: Write protecting the kernel read-only data: 24576k
Sep 4 00:02:55.804921 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 4 00:02:55.804929 kernel: Run /init as init process
Sep 4 00:02:55.804937 kernel: with arguments:
Sep 4 00:02:55.804945 kernel: /init
Sep 4 00:02:55.804953 kernel: with environment:
Sep 4 00:02:55.804960 kernel: HOME=/
Sep 4 00:02:55.804968 kernel: TERM=linux
Sep 4 00:02:55.804975 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 00:02:55.804987 systemd[1]: Successfully made /usr/ read-only.
Sep 4 00:02:55.805008 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:02:55.805019 systemd[1]: Detected virtualization kvm.
Sep 4 00:02:55.805028 systemd[1]: Detected architecture x86-64.
Sep 4 00:02:55.805036 systemd[1]: Running in initrd.
Sep 4 00:02:55.805046 systemd[1]: No hostname configured, using default hostname.
Sep 4 00:02:55.805056 systemd[1]: Hostname set to .
Sep 4 00:02:55.805064 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 00:02:55.805072 systemd[1]: Queued start job for default target initrd.target.
Sep 4 00:02:55.805081 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:02:55.805090 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:02:55.805099 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 00:02:55.805108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:02:55.805119 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 00:02:55.805128 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 00:02:55.805138 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 00:02:55.805147 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 00:02:55.805155 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:02:55.805164 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:02:55.805172 systemd[1]: Reached target paths.target - Path Units.
Sep 4 00:02:55.805183 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:02:55.805191 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:02:55.805200 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 00:02:55.805217 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:02:55.805226 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:02:55.805234 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 00:02:55.805243 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 00:02:55.805253 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:02:55.805264 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:02:55.805272 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:02:55.805281 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 00:02:55.805290 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 00:02:55.805299 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:02:55.805309 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 00:02:55.805320 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 4 00:02:55.805329 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 00:02:55.805338 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:02:55.805346 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:02:55.805355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:02:55.805364 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 00:02:55.805375 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:02:55.805384 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 00:02:55.805393 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 00:02:55.805421 systemd-journald[220]: Collecting audit messages is disabled.
Sep 4 00:02:55.805445 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 00:02:55.805454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 00:02:55.805464 systemd-journald[220]: Journal started
Sep 4 00:02:55.805483 systemd-journald[220]: Runtime Journal (/run/log/journal/374c7ddbce9543168a6ed76f02050cae) is 6M, max 48.6M, 42.5M free.
Sep 4 00:02:55.800084 systemd-modules-load[223]: Inserted module 'overlay'
Sep 4 00:02:55.846973 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 00:02:55.846994 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 00:02:55.847010 kernel: Bridge firewalling registered
Sep 4 00:02:55.827363 systemd-modules-load[223]: Inserted module 'br_netfilter'
Sep 4 00:02:55.846559 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:02:55.848218 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:02:55.850316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:02:55.853902 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 00:02:55.856914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:02:55.860630 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 00:02:55.873720 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 4 00:02:55.877956 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:02:55.881106 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:02:55.882987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 00:02:55.886628 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 00:02:55.899480 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 00:02:55.920336 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e Sep 4 00:02:55.941772 systemd-resolved[260]: Positive Trust Anchors: Sep 4 00:02:55.941908 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 00:02:55.941952 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 00:02:55.946443 systemd-resolved[260]: Defaulting to hostname 'linux'. Sep 4 00:02:55.948776 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 00:02:55.954402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:02:56.182242 kernel: SCSI subsystem initialized Sep 4 00:02:56.198081 kernel: Loading iSCSI transport class v2.0-870. Sep 4 00:02:56.242172 kernel: iscsi: registered transport (tcp) Sep 4 00:02:56.292240 kernel: iscsi: registered transport (qla4xxx) Sep 4 00:02:56.292319 kernel: QLogic iSCSI HBA Driver Sep 4 00:02:56.336596 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 4 00:02:56.365135 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:02:56.367461 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 00:02:56.537178 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 00:02:56.547069 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 00:02:56.672853 kernel: raid6: avx2x4 gen() 10151 MB/s Sep 4 00:02:56.690227 kernel: raid6: avx2x2 gen() 17228 MB/s Sep 4 00:02:56.706840 kernel: raid6: avx2x1 gen() 14522 MB/s Sep 4 00:02:56.706922 kernel: raid6: using algorithm avx2x2 gen() 17228 MB/s Sep 4 00:02:56.727838 kernel: raid6: .... xor() 9479 MB/s, rmw enabled Sep 4 00:02:56.727915 kernel: raid6: using avx2x2 recovery algorithm Sep 4 00:02:56.779978 kernel: xor: automatically using best checksumming function avx Sep 4 00:02:57.168841 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 00:02:57.192186 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 00:02:57.201364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:02:57.269656 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 4 00:02:57.284956 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:02:57.307639 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 00:02:57.372278 dracut-pre-trigger[488]: rd.md=0: removing MD RAID activation Sep 4 00:02:57.445235 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 00:02:57.460624 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 00:02:57.620058 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 00:02:57.629735 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 4 00:02:57.690424 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 00:02:57.690697 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 00:02:57.699411 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 00:02:57.699479 kernel: GPT:9289727 != 19775487 Sep 4 00:02:57.699495 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 00:02:57.699510 kernel: GPT:9289727 != 19775487 Sep 4 00:02:57.699537 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 00:02:57.699553 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 00:02:57.707818 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 00:02:57.737822 kernel: libata version 3.00 loaded. Sep 4 00:02:57.757627 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:02:57.757729 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:02:57.778305 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:02:57.788968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:02:57.792060 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 4 00:02:57.876818 kernel: AES CTR mode by8 optimization enabled Sep 4 00:02:57.885830 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 4 00:02:57.885900 kernel: ahci 0000:00:1f.2: version 3.0 Sep 4 00:02:57.886222 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 4 00:02:57.891234 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 4 00:02:57.891515 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 4 00:02:57.891709 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 4 00:02:57.897235 kernel: scsi host0: ahci Sep 4 00:02:57.898287 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 00:02:57.952851 kernel: scsi host1: ahci Sep 4 00:02:57.955843 kernel: scsi host2: ahci Sep 4 00:02:57.956073 kernel: scsi host3: ahci Sep 4 00:02:57.956280 kernel: scsi host4: ahci Sep 4 00:02:57.956471 kernel: scsi host5: ahci Sep 4 00:02:57.956671 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 4 00:02:57.956689 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 4 00:02:57.956703 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 4 00:02:57.956722 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 4 00:02:57.956738 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 4 00:02:57.956752 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 4 00:02:57.965706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:02:57.983919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 00:02:58.007232 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Sep 4 00:02:58.024825 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 00:02:58.033878 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 00:02:58.045346 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 00:02:58.085845 disk-uuid[639]: Primary Header is updated. Sep 4 00:02:58.085845 disk-uuid[639]: Secondary Entries is updated. Sep 4 00:02:58.085845 disk-uuid[639]: Secondary Header is updated. Sep 4 00:02:58.097401 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 00:02:58.235830 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 4 00:02:58.235918 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 00:02:58.235938 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 00:02:58.235954 kernel: ata3.00: applying bridge limits Sep 4 00:02:58.236833 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 4 00:02:58.237831 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 4 00:02:58.239838 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 4 00:02:58.240820 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 4 00:02:58.243054 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 4 00:02:58.243081 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 00:02:58.243096 kernel: ata3.00: configured for UDMA/100 Sep 4 00:02:58.247815 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 00:02:58.296048 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 00:02:58.296489 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 00:02:58.317833 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 4 00:02:58.682776 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Sep 4 00:02:58.684902 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 00:02:58.685639 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:02:58.686243 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 00:02:58.687785 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 00:02:58.715920 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 00:02:59.112190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 00:02:59.113013 disk-uuid[640]: The operation has completed successfully. Sep 4 00:02:59.149712 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 00:02:59.149857 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 00:02:59.183282 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 00:02:59.213249 sh[670]: Success Sep 4 00:02:59.230847 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 00:02:59.230947 kernel: device-mapper: uevent: version 1.0.3 Sep 4 00:02:59.230965 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 4 00:02:59.240822 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 4 00:02:59.274952 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 00:02:59.278863 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 00:02:59.299567 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 4 00:02:59.305233 kernel: BTRFS: device fsid 8a9c2e34-3d3c-49a9-acce-59bf90003071 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (682) Sep 4 00:02:59.305267 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9c2e34-3d3c-49a9-acce-59bf90003071 Sep 4 00:02:59.305282 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:02:59.310822 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 00:02:59.310848 kernel: BTRFS info (device dm-0): enabling free space tree Sep 4 00:02:59.311723 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 00:02:59.313803 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 4 00:02:59.315971 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 00:02:59.318523 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 00:02:59.321234 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 00:02:59.342828 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (713) Sep 4 00:02:59.344821 kernel: BTRFS info (device vda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:02:59.344863 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:02:59.347824 kernel: BTRFS info (device vda6): turning on async discard Sep 4 00:02:59.347852 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 00:02:59.352837 kernel: BTRFS info (device vda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:02:59.353546 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 00:02:59.356897 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 4 00:02:59.482813 ignition[760]: Ignition 2.21.0 Sep 4 00:02:59.483206 ignition[760]: Stage: fetch-offline Sep 4 00:02:59.483242 ignition[760]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:02:59.483251 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 00:02:59.943922 ignition[760]: parsed url from cmdline: "" Sep 4 00:02:59.943956 ignition[760]: no config URL provided Sep 4 00:02:59.943975 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 00:02:59.944015 ignition[760]: no config at "/usr/lib/ignition/user.ign" Sep 4 00:02:59.956284 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 00:02:59.944081 ignition[760]: op(1): [started] loading QEMU firmware config module Sep 4 00:02:59.944087 ignition[760]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 00:02:59.973212 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 00:03:00.004671 ignition[760]: op(1): [finished] loading QEMU firmware config module Sep 4 00:03:00.120671 ignition[760]: parsing config with SHA512: c077f3bc1a9c6951ada151f50b2218c9faae7fcbe59d585d17d62daa48194c81bdbf553d606928fb1f004f15e0781ab927f9edb7243e270e86bd3362ddeb619f Sep 4 00:03:00.125927 unknown[760]: fetched base config from "system" Sep 4 00:03:00.126385 ignition[760]: fetch-offline: fetch-offline passed Sep 4 00:03:00.125946 unknown[760]: fetched user config from "qemu" Sep 4 00:03:00.126464 ignition[760]: Ignition finished successfully Sep 4 00:03:00.130155 systemd-networkd[860]: lo: Link UP Sep 4 00:03:00.130167 systemd-networkd[860]: lo: Gained carrier Sep 4 00:03:00.133066 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 00:03:00.136591 systemd-networkd[860]: Enumeration completed Sep 4 00:03:00.136742 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 4 00:03:00.138764 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:03:00.138772 systemd-networkd[860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 00:03:00.139397 systemd-networkd[860]: eth0: Link UP Sep 4 00:03:00.145566 systemd[1]: Reached target network.target - Network. Sep 4 00:03:00.146090 systemd-networkd[860]: eth0: Gained carrier Sep 4 00:03:00.146122 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:03:00.153841 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 00:03:00.177399 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 00:03:00.211906 systemd-networkd[860]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 00:03:00.337278 ignition[864]: Ignition 2.21.0 Sep 4 00:03:00.337828 ignition[864]: Stage: kargs Sep 4 00:03:00.341812 ignition[864]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:03:00.341826 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 00:03:00.344479 ignition[864]: kargs: kargs passed Sep 4 00:03:00.344545 ignition[864]: Ignition finished successfully Sep 4 00:03:00.383924 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 00:03:00.391643 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 00:03:00.696125 ignition[873]: Ignition 2.21.0 Sep 4 00:03:00.696150 ignition[873]: Stage: disks Sep 4 00:03:00.696935 ignition[873]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:03:00.696949 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 00:03:00.704957 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Sep 4 00:03:00.699481 ignition[873]: disks: disks passed Sep 4 00:03:00.713889 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 00:03:00.699562 ignition[873]: Ignition finished successfully Sep 4 00:03:00.723276 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 00:03:00.725930 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 00:03:00.726865 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 00:03:00.764500 systemd[1]: Reached target basic.target - Basic System. Sep 4 00:03:00.768719 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 00:03:00.848540 systemd-fsck[883]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 4 00:03:00.863876 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 00:03:00.876427 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 00:03:01.223847 kernel: EXT4-fs (vda9): mounted filesystem c3518c93-f823-4477-a620-ff9666a59be5 r/w with ordered data mode. Quota mode: none. Sep 4 00:03:01.225751 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 00:03:01.229520 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 00:03:01.240905 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 00:03:01.272863 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 00:03:01.281634 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 00:03:01.281705 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 00:03:01.281742 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 00:03:01.292062 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Sep 4 00:03:01.324171 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891) Sep 4 00:03:01.327064 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 00:03:01.335339 kernel: BTRFS info (device vda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:03:01.335386 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:03:01.349128 kernel: BTRFS info (device vda6): turning on async discard Sep 4 00:03:01.349221 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 00:03:01.353229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 00:03:01.401551 systemd-networkd[860]: eth0: Gained IPv6LL Sep 4 00:03:01.484124 initrd-setup-root[915]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 00:03:01.494520 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory Sep 4 00:03:01.512809 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 00:03:01.525523 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 00:03:01.755489 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 00:03:01.759044 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 00:03:01.765462 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 00:03:01.794456 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 00:03:01.803548 kernel: BTRFS info (device vda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:03:02.009456 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 00:03:02.075632 ignition[1003]: INFO : Ignition 2.21.0 Sep 4 00:03:02.075632 ignition[1003]: INFO : Stage: mount Sep 4 00:03:02.085154 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:03:02.085154 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 00:03:02.085154 ignition[1003]: INFO : mount: mount passed Sep 4 00:03:02.085154 ignition[1003]: INFO : Ignition finished successfully Sep 4 00:03:02.090555 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 00:03:02.120215 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 00:03:02.234936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 00:03:02.277107 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1017) Sep 4 00:03:02.279714 kernel: BTRFS info (device vda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:03:02.279744 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:03:02.286586 kernel: BTRFS info (device vda6): turning on async discard Sep 4 00:03:02.286694 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 00:03:02.291413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 00:03:02.477273 ignition[1034]: INFO : Ignition 2.21.0 Sep 4 00:03:02.477273 ignition[1034]: INFO : Stage: files Sep 4 00:03:02.483706 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:03:02.483706 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 00:03:02.492439 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping Sep 4 00:03:02.497721 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 00:03:02.497721 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 00:03:02.506754 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 00:03:02.509922 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 00:03:02.509922 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 00:03:02.508199 unknown[1034]: wrote ssh authorized keys file for user: core Sep 4 00:03:02.518292 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 00:03:02.523238 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 00:03:02.645679 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 00:03:02.900484 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 00:03:02.900484 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 00:03:02.925216 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 
00:03:02.925216 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 00:03:02.925216 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 00:03:02.925216 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 00:03:02.925216 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 00:03:02.925216 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 00:03:02.925216 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 00:03:02.962102 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 00:03:02.962102 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 00:03:02.962102 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:03:02.962102 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:03:02.962102 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:03:02.962102 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 4 00:03:03.384589 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 00:03:04.524315 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:03:04.524315 ignition[1034]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 00:03:04.528391 ignition[1034]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 00:03:04.533303 ignition[1034]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 00:03:04.533303 ignition[1034]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 00:03:04.533303 ignition[1034]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 4 00:03:04.538295 ignition[1034]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 00:03:04.538295 ignition[1034]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 00:03:04.538295 ignition[1034]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 4 00:03:04.538295 ignition[1034]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 00:03:04.559620 ignition[1034]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 00:03:04.566146 ignition[1034]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 00:03:04.567898 ignition[1034]: INFO : files: op(f): [finished] setting preset to disabled 
for "coreos-metadata.service" Sep 4 00:03:04.567898 ignition[1034]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 4 00:03:04.567898 ignition[1034]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 00:03:04.567898 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 00:03:04.567898 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 00:03:04.567898 ignition[1034]: INFO : files: files passed Sep 4 00:03:04.567898 ignition[1034]: INFO : Ignition finished successfully Sep 4 00:03:04.570217 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 00:03:04.572735 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 00:03:04.576500 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 00:03:04.593816 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 00:03:04.594933 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 00:03:04.595066 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 00:03:04.601865 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:03:04.603650 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:03:04.603650 initrd-setup-root-after-ignition[1065]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:03:04.607896 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 00:03:04.610595 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Sep 4 00:03:04.612892 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 00:03:04.679192 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 00:03:04.679379 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 00:03:04.682650 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 00:03:04.684042 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 00:03:04.684532 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 00:03:04.690720 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 00:03:04.726390 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:03:04.727900 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 00:03:04.750888 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:03:04.752457 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:03:04.755216 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 00:03:04.757738 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 00:03:04.757890 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:03:04.759022 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 00:03:04.759394 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 00:03:04.763383 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 00:03:04.763696 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 00:03:04.768297 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 00:03:04.770516 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 00:03:04.771762 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 00:03:04.774725 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 00:03:04.776119 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 00:03:04.776468 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 00:03:04.780502 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 00:03:04.780838 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 00:03:04.780969 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 00:03:04.786255 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:03:04.787327 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:03:04.787626 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 00:03:04.791499 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:03:04.791773 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 00:03:04.791916 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 00:03:04.797228 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 00:03:04.797364 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 00:03:04.798395 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 00:03:04.800562 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 00:03:04.805849 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:03:04.807336 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 00:03:04.808935 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 00:03:04.809260 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 00:03:04.809378 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:03:04.812600 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 00:03:04.812702 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:03:04.814445 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 00:03:04.814553 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 00:03:04.816503 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 00:03:04.816639 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 00:03:04.820768 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 00:03:04.821883 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 00:03:04.822012 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:03:04.826360 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 00:03:04.829078 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 00:03:04.832814 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:03:04.834230 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 00:03:04.834329 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 00:03:04.845626 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 00:03:04.846880 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 00:03:04.857477 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 00:03:04.869491 ignition[1090]: INFO : Ignition 2.21.0
Sep 4 00:03:04.869491 ignition[1090]: INFO : Stage: umount
Sep 4 00:03:04.871437 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:03:04.871437 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 00:03:04.875050 ignition[1090]: INFO : umount: umount passed
Sep 4 00:03:04.875856 ignition[1090]: INFO : Ignition finished successfully
Sep 4 00:03:04.879524 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 00:03:04.879666 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 00:03:04.880481 systemd[1]: Stopped target network.target - Network.
Sep 4 00:03:04.883183 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 00:03:04.883237 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 00:03:04.885012 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 00:03:04.885061 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 00:03:04.886876 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 00:03:04.886930 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 00:03:04.888030 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 00:03:04.888076 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 00:03:04.888424 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 00:03:04.888755 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 00:03:04.898936 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 00:03:04.899106 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 00:03:04.902952 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 00:03:04.903294 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 00:03:04.903345 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:03:04.908993 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:03:04.911230 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 00:03:04.911357 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 00:03:04.915421 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 00:03:04.915614 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 00:03:04.916871 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 00:03:04.916909 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:03:04.921918 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 00:03:04.921997 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 00:03:04.922045 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 00:03:04.922360 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 00:03:04.922401 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:03:04.927748 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 00:03:04.927820 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:03:04.930575 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:03:04.932389 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 00:03:04.979515 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 00:03:04.983960 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:03:04.985826 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 00:03:04.985873 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:03:04.987887 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 00:03:04.987984 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:03:04.990176 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 00:03:04.990244 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 00:03:04.992503 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 00:03:04.992553 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 00:03:04.994642 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 00:03:04.994703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 00:03:04.997771 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 00:03:04.998917 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 00:03:04.998967 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:03:05.002241 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 00:03:05.002309 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:03:05.005612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:03:05.005661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:03:05.017486 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 00:03:05.026165 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 00:03:05.028590 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 00:03:05.028748 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 00:03:05.032130 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 00:03:05.032216 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 00:03:05.036618 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 00:03:05.036766 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 00:03:05.038144 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 00:03:05.041408 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 00:03:05.065920 systemd[1]: Switching root.
Sep 4 00:03:05.097503 systemd-journald[220]: Journal stopped
Sep 4 00:03:07.334485 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 4 00:03:07.334568 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 00:03:07.334590 kernel: SELinux: policy capability open_perms=1
Sep 4 00:03:07.334605 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 00:03:07.334620 kernel: SELinux: policy capability always_check_network=0
Sep 4 00:03:07.334636 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 00:03:07.334652 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 00:03:07.334667 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 00:03:07.334682 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 00:03:07.334697 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 00:03:07.334715 kernel: audit: type=1403 audit(1756944185.721:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 00:03:07.334731 systemd[1]: Successfully loaded SELinux policy in 53.235ms.
Sep 4 00:03:07.334755 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.929ms.
Sep 4 00:03:07.334772 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:03:07.334804 systemd[1]: Detected virtualization kvm.
Sep 4 00:03:07.334821 systemd[1]: Detected architecture x86-64.
Sep 4 00:03:07.334837 systemd[1]: Detected first boot.
Sep 4 00:03:07.334854 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 00:03:07.334870 zram_generator::config[1136]: No configuration found.
Sep 4 00:03:07.336382 kernel: Guest personality initialized and is inactive
Sep 4 00:03:07.336404 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 00:03:07.336420 kernel: Initialized host personality
Sep 4 00:03:07.336435 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 00:03:07.336453 systemd[1]: Populated /etc with preset unit settings.
Sep 4 00:03:07.336477 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 00:03:07.336495 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 00:03:07.336512 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 00:03:07.336532 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 00:03:07.336548 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 00:03:07.336570 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 00:03:07.336586 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 00:03:07.336603 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 00:03:07.336620 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 00:03:07.336637 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 00:03:07.336653 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 00:03:07.336669 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 00:03:07.336689 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:03:07.336706 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:03:07.336723 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 00:03:07.336740 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 00:03:07.336757 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 00:03:07.336774 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:03:07.336806 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 00:03:07.336827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:03:07.336845 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:03:07.336861 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 00:03:07.336877 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 00:03:07.336894 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 00:03:07.336910 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 00:03:07.336940 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:03:07.336963 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 00:03:07.336980 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:03:07.336999 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:03:07.337015 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 00:03:07.337034 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 00:03:07.337051 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 00:03:07.337068 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:03:07.337084 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:03:07.337100 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:03:07.337116 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 00:03:07.337132 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 00:03:07.337149 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 00:03:07.337169 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 00:03:07.337186 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:03:07.337203 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 00:03:07.337219 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 00:03:07.337235 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 00:03:07.337252 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 00:03:07.337268 systemd[1]: Reached target machines.target - Containers.
Sep 4 00:03:07.337284 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 00:03:07.337303 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:03:07.337320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:03:07.337337 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 00:03:07.337353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:03:07.337370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 00:03:07.337386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:03:07.337403 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 00:03:07.337419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:03:07.337438 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 00:03:07.337455 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 00:03:07.337472 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 00:03:07.337489 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 00:03:07.337506 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 00:03:07.337523 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:03:07.337539 kernel: fuse: init (API version 7.41)
Sep 4 00:03:07.337555 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:03:07.337571 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:03:07.337590 kernel: loop: module loaded
Sep 4 00:03:07.337607 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 00:03:07.337624 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 00:03:07.337641 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 00:03:07.337657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 00:03:07.337677 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 00:03:07.337693 systemd[1]: Stopped verity-setup.service.
Sep 4 00:03:07.337710 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:03:07.337727 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 00:03:07.337743 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 00:03:07.337760 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 00:03:07.337776 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 00:03:07.337812 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 00:03:07.337834 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 00:03:07.337853 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:03:07.337870 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 00:03:07.337887 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 00:03:07.337903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:03:07.337931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:03:07.337950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:03:07.337967 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:03:07.337984 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 00:03:07.338000 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 00:03:07.338018 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:03:07.338035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:03:07.338051 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:03:07.338067 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:03:07.338084 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 00:03:07.338104 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 00:03:07.338120 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 00:03:07.338137 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 00:03:07.338153 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 00:03:07.338173 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 00:03:07.338191 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 00:03:07.338210 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 00:03:07.338235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:03:07.338252 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 00:03:07.338270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 00:03:07.338287 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 00:03:07.338304 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 00:03:07.338321 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:03:07.338340 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 00:03:07.338357 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 00:03:07.338374 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 00:03:07.338391 kernel: ACPI: bus type drm_connector registered
Sep 4 00:03:07.338407 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 00:03:07.338423 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 00:03:07.338440 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 00:03:07.338457 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 00:03:07.338476 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 00:03:07.338492 kernel: loop0: detected capacity change from 0 to 146240
Sep 4 00:03:07.338509 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 00:03:07.338525 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 00:03:07.338578 systemd-journald[1211]: Collecting audit messages is disabled.
Sep 4 00:03:07.338611 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 00:03:07.338628 systemd-journald[1211]: Journal started
Sep 4 00:03:07.338661 systemd-journald[1211]: Runtime Journal (/run/log/journal/374c7ddbce9543168a6ed76f02050cae) is 6M, max 48.6M, 42.5M free.
Sep 4 00:03:07.339867 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 00:03:06.838773 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 00:03:06.852198 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 00:03:06.852740 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 00:03:07.348148 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:03:07.379513 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 00:03:07.383997 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 00:03:07.388200 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 00:03:07.397202 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 00:03:07.401783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 00:03:07.406273 systemd-journald[1211]: Time spent on flushing to /var/log/journal/374c7ddbce9543168a6ed76f02050cae is 14.839ms for 990 entries.
Sep 4 00:03:07.406273 systemd-journald[1211]: System Journal (/var/log/journal/374c7ddbce9543168a6ed76f02050cae) is 8M, max 195.6M, 187.6M free.
Sep 4 00:03:07.427712 systemd-journald[1211]: Received client request to flush runtime journal.
Sep 4 00:03:07.427929 kernel: loop1: detected capacity change from 0 to 113872
Sep 4 00:03:07.418121 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:03:07.431097 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 00:03:07.447729 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Sep 4 00:03:07.447754 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Sep 4 00:03:07.454819 kernel: loop2: detected capacity change from 0 to 221472
Sep 4 00:03:07.456027 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:03:07.489011 kernel: loop3: detected capacity change from 0 to 146240
Sep 4 00:03:07.502816 kernel: loop4: detected capacity change from 0 to 113872
Sep 4 00:03:07.514837 kernel: loop5: detected capacity change from 0 to 221472
Sep 4 00:03:07.567518 (sd-merge)[1278]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 00:03:07.568363 (sd-merge)[1278]: Merged extensions into '/usr'.
Sep 4 00:03:07.573396 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 00:03:07.573415 systemd[1]: Reloading...
Sep 4 00:03:07.868869 zram_generator::config[1303]: No configuration found.
Sep 4 00:03:08.252775 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:03:08.374583 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 00:03:08.375086 systemd[1]: Reloading finished in 801 ms.
Sep 4 00:03:08.828189 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 00:03:08.840430 systemd[1]: Starting ensure-sysext.service...
Sep 4 00:03:08.842953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 00:03:08.884611 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)...
Sep 4 00:03:08.884632 systemd[1]: Reloading...
Sep 4 00:03:08.922120 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 4 00:03:08.922167 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 4 00:03:08.923146 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 00:03:08.923608 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 00:03:08.930667 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 00:03:08.931580 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Sep 4 00:03:08.931667 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Sep 4 00:03:08.951169 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 00:03:08.951185 systemd-tmpfiles[1341]: Skipping /boot
Sep 4 00:03:09.021012 zram_generator::config[1369]: No configuration found.
Sep 4 00:03:09.027203 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 00:03:09.027219 systemd-tmpfiles[1341]: Skipping /boot
Sep 4 00:03:09.045139 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 00:03:09.135983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:03:09.220556 systemd[1]: Reloading finished in 335 ms.
Sep 4 00:03:09.243225 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 00:03:09.244900 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 00:03:09.264299 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:03:09.274949 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 00:03:09.278382 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 00:03:09.288899 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 00:03:09.294111 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 00:03:09.298161 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:03:09.302217 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 00:03:09.307888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:03:09.308272 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:03:09.313029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:03:09.317688 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:03:09.320149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:03:09.324109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:03:09.324278 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:03:09.330970 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 00:03:09.333021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:03:09.334733 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 00:03:09.335875 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 00:03:09.339325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 00:03:09.339633 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 4 00:03:09.342289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:03:09.342557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:03:09.355751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:03:09.356884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:03:09.364879 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:03:09.372561 systemd-udevd[1419]: Using default interface naming scheme 'v255'. Sep 4 00:03:09.376600 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:03:09.384993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:03:09.386247 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:03:09.386404 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:03:09.386575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:03:09.388451 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 00:03:09.388828 augenrules[1443]: No rules Sep 4 00:03:09.390579 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 00:03:09.390895 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 00:03:09.392828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 00:03:09.393076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 4 00:03:09.396190 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:03:09.403004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:03:09.405392 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 00:03:09.407654 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 00:03:09.407904 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 00:03:09.409556 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:03:09.415541 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 00:03:09.439654 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:03:09.442144 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 00:03:09.443293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:03:09.446919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:03:09.450836 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 00:03:09.454033 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:03:09.456273 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:03:09.457414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:03:09.457451 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:03:09.461890 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 4 00:03:09.472024 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 00:03:09.473144 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 00:03:09.473177 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:03:09.473623 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 00:03:09.475270 systemd[1]: Finished ensure-sysext.service. Sep 4 00:03:09.494052 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 00:03:09.495936 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 00:03:09.496194 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 00:03:09.498065 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 00:03:09.498288 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 00:03:09.502422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:03:09.502689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:03:09.504620 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 00:03:09.504888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 00:03:09.514871 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 00:03:09.515287 augenrules[1484]: /sbin/augenrules: No change Sep 4 00:03:09.523402 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 00:03:09.523504 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 4 00:03:09.523576 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 00:03:09.529747 augenrules[1520]: No rules Sep 4 00:03:09.531550 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 00:03:09.590005 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 00:03:09.649164 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 00:03:09.651844 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 00:03:09.656837 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 4 00:03:09.663815 kernel: ACPI: button: Power Button [PWRF] Sep 4 00:03:09.673352 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 00:03:09.708835 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 4 00:03:09.711098 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 00:03:09.727834 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 4 00:03:09.759285 systemd-networkd[1489]: lo: Link UP Sep 4 00:03:09.759295 systemd-networkd[1489]: lo: Gained carrier Sep 4 00:03:09.764807 systemd-networkd[1489]: Enumeration completed Sep 4 00:03:09.764914 systemd-resolved[1412]: Positive Trust Anchors: Sep 4 00:03:09.764929 systemd-resolved[1412]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 00:03:09.764978 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 00:03:09.765008 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 00:03:09.770448 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 00:03:09.770726 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:03:09.770732 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 00:03:09.770971 systemd-resolved[1412]: Defaulting to hostname 'linux'. Sep 4 00:03:09.776206 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 00:03:09.778264 systemd-networkd[1489]: eth0: Link UP Sep 4 00:03:09.778421 systemd-networkd[1489]: eth0: Gained carrier Sep 4 00:03:09.778450 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:03:09.778944 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 00:03:09.781830 systemd[1]: Reached target network.target - Network. Sep 4 00:03:09.785280 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 4 00:03:09.815888 systemd-networkd[1489]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 00:03:09.817118 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 00:03:09.820846 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 00:03:09.822437 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 00:03:09.824063 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 00:03:09.825711 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 4 00:03:09.827244 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 00:03:09.830900 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 00:03:09.830944 systemd[1]: Reached target paths.target - Path Units. Sep 4 00:03:09.832111 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 00:03:09.833695 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 00:03:09.838042 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 00:03:09.839613 systemd[1]: Reached target timers.target - Timer Units. Sep 4 00:03:09.841616 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection. Sep 4 00:03:09.851928 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 00:03:09.857249 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 00:03:09.866490 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 00:03:09.869631 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Sep 4 00:03:09.873249 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 00:03:10.107092 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 00:03:10.107175 systemd-timesyncd[1497]: Initial clock synchronization to Thu 2025-09-04 00:03:10.327986 UTC. Sep 4 00:03:10.311752 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 00:03:10.314307 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 00:03:10.318401 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 00:03:10.320440 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 00:03:10.345125 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 00:03:10.345828 kernel: kvm_amd: TSC scaling supported Sep 4 00:03:10.345882 kernel: kvm_amd: Nested Virtualization enabled Sep 4 00:03:10.345903 kernel: kvm_amd: Nested Paging enabled Sep 4 00:03:10.345922 kernel: kvm_amd: LBR virtualization supported Sep 4 00:03:10.345940 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 00:03:10.345958 kernel: kvm_amd: Virtual GIF supported Sep 4 00:03:10.349087 systemd[1]: Reached target basic.target - Basic System. Sep 4 00:03:10.350285 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 00:03:10.350369 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 00:03:10.354288 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 00:03:10.358313 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 00:03:10.362301 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 00:03:10.378212 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 4 00:03:10.380953 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 00:03:10.384046 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 00:03:10.386825 kernel: EDAC MC: Ver: 3.0.0 Sep 4 00:03:10.388048 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 4 00:03:10.405149 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 00:03:10.409138 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 00:03:10.409545 jq[1563]: false Sep 4 00:03:10.411539 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 00:03:10.419689 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 00:03:10.427385 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing passwd entry cache Sep 4 00:03:10.427233 oslogin_cache_refresh[1565]: Refreshing passwd entry cache Sep 4 00:03:10.428272 extend-filesystems[1564]: Found /dev/vda6 Sep 4 00:03:10.434497 extend-filesystems[1564]: Found /dev/vda9 Sep 4 00:03:10.435234 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 00:03:10.437689 extend-filesystems[1564]: Checking size of /dev/vda9 Sep 4 00:03:10.438810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:03:10.441030 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 00:03:10.441641 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 4 00:03:10.443264 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting users, quitting Sep 4 00:03:10.443264 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 00:03:10.443264 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing group entry cache Sep 4 00:03:10.443116 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 00:03:10.443093 oslogin_cache_refresh[1565]: Failure getting users, quitting Sep 4 00:03:10.443115 oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 00:03:10.443171 oslogin_cache_refresh[1565]: Refreshing group entry cache Sep 4 00:03:10.453403 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting groups, quitting Sep 4 00:03:10.453403 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 00:03:10.451211 oslogin_cache_refresh[1565]: Failure getting groups, quitting Sep 4 00:03:10.451226 oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 00:03:10.454385 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 00:03:10.460891 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 00:03:10.463717 extend-filesystems[1564]: Resized partition /dev/vda9 Sep 4 00:03:10.462785 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 00:03:10.467015 extend-filesystems[1594]: resize2fs 1.47.2 (1-Jan-2025) Sep 4 00:03:10.463252 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 00:03:10.465652 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Sep 4 00:03:10.469311 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 4 00:03:10.472958 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 00:03:10.472452 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 00:03:10.472880 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 00:03:10.475652 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 00:03:10.479041 jq[1589]: true Sep 4 00:03:10.481291 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 00:03:10.521709 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 00:03:10.554896 update_engine[1583]: I20250904 00:03:10.486634 1583 main.cc:92] Flatcar Update Engine starting Sep 4 00:03:10.555471 extend-filesystems[1594]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 00:03:10.555471 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 00:03:10.555471 extend-filesystems[1594]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 00:03:10.527535 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 00:03:10.556880 extend-filesystems[1564]: Resized filesystem in /dev/vda9 Sep 4 00:03:10.557471 jq[1596]: true Sep 4 00:03:10.566198 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 00:03:10.567884 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 00:03:10.623022 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:03:10.629230 bash[1627]: Updated "/home/core/.ssh/authorized_keys" Sep 4 00:03:10.637271 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Sep 4 00:03:10.644701 systemd-logind[1575]: Watching system buttons on /dev/input/event2 (Power Button) Sep 4 00:03:10.644749 systemd-logind[1575]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 00:03:10.647046 systemd-logind[1575]: New seat seat0. Sep 4 00:03:10.649890 tar[1595]: linux-amd64/helm Sep 4 00:03:10.653469 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 00:03:10.672189 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 00:03:10.674589 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 00:03:10.676228 dbus-daemon[1561]: [system] SELinux support is enabled Sep 4 00:03:10.676528 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 00:03:10.681046 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 00:03:10.681081 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 00:03:10.682571 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 00:03:10.682597 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 00:03:10.688061 dbus-daemon[1561]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 00:03:10.689168 update_engine[1583]: I20250904 00:03:10.689111 1583 update_check_scheduler.cc:74] Next update check in 3m12s Sep 4 00:03:10.689707 systemd[1]: Started update-engine.service - Update Engine. Sep 4 00:03:10.693676 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 4 00:03:10.801172 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 00:03:11.009638 containerd[1597]: time="2025-09-04T00:03:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 4 00:03:11.012912 containerd[1597]: time="2025-09-04T00:03:11.012864242Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 4 00:03:11.024595 containerd[1597]: time="2025-09-04T00:03:11.024532558Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.812µs" Sep 4 00:03:11.024595 containerd[1597]: time="2025-09-04T00:03:11.024585704Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 4 00:03:11.024706 containerd[1597]: time="2025-09-04T00:03:11.024605441Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 4 00:03:11.024863 containerd[1597]: time="2025-09-04T00:03:11.024838698Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 4 00:03:11.024863 containerd[1597]: time="2025-09-04T00:03:11.024860978Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 4 00:03:11.024921 containerd[1597]: time="2025-09-04T00:03:11.024888951Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 00:03:11.024985 containerd[1597]: time="2025-09-04T00:03:11.024960043Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 00:03:11.024985 containerd[1597]: time="2025-09-04T00:03:11.024978327Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 00:03:11.025298 containerd[1597]: time="2025-09-04T00:03:11.025269107Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 00:03:11.025298 containerd[1597]: time="2025-09-04T00:03:11.025289389Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 00:03:11.025298 containerd[1597]: time="2025-09-04T00:03:11.025299737Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 00:03:11.025542 containerd[1597]: time="2025-09-04T00:03:11.025308014Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 4 00:03:11.025542 containerd[1597]: time="2025-09-04T00:03:11.025420607Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 4 00:03:11.025729 containerd[1597]: time="2025-09-04T00:03:11.025699269Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 00:03:11.025773 containerd[1597]: time="2025-09-04T00:03:11.025758520Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 00:03:11.025773 containerd[1597]: time="2025-09-04T00:03:11.025769329Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 4 00:03:11.025857 containerd[1597]: time="2025-09-04T00:03:11.025809184Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 4 00:03:11.026126 containerd[1597]: time="2025-09-04T00:03:11.026088351Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 4 00:03:11.026218 containerd[1597]: time="2025-09-04T00:03:11.026193860Z" level=info msg="metadata content store policy set" policy=shared Sep 4 00:03:11.038913 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 00:03:11.068918 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 00:03:11.086526 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 00:03:11.089126 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:33778.service - OpenSSH per-connection server daemon (10.0.0.1:33778). Sep 4 00:03:11.108560 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 00:03:11.109180 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 00:03:11.114342 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 4 00:03:11.198668 containerd[1597]: time="2025-09-04T00:03:11.198589168Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 4 00:03:11.198905 containerd[1597]: time="2025-09-04T00:03:11.198849586Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 4 00:03:11.199331 containerd[1597]: time="2025-09-04T00:03:11.199281435Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 4 00:03:11.199372 containerd[1597]: time="2025-09-04T00:03:11.199320476Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 4 00:03:11.199402 containerd[1597]: time="2025-09-04T00:03:11.199390981Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 4 00:03:11.199431 containerd[1597]: time="2025-09-04T00:03:11.199410110Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 4 00:03:11.199495 containerd[1597]: time="2025-09-04T00:03:11.199468260Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 4 00:03:11.199847 containerd[1597]: time="2025-09-04T00:03:11.199813410Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 4 00:03:11.199903 containerd[1597]: time="2025-09-04T00:03:11.199846644Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 4 00:03:11.199903 containerd[1597]: time="2025-09-04T00:03:11.199889124Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 4 00:03:11.199903 containerd[1597]: time="2025-09-04T00:03:11.199902467Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 4 
00:03:11.199981 containerd[1597]: time="2025-09-04T00:03:11.199921946Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 4 00:03:11.200476 containerd[1597]: time="2025-09-04T00:03:11.200364771Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 4 00:03:11.200524 containerd[1597]: time="2025-09-04T00:03:11.200487845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 4 00:03:11.200583 containerd[1597]: time="2025-09-04T00:03:11.200558504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 4 00:03:11.201172 containerd[1597]: time="2025-09-04T00:03:11.201124763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 4 00:03:11.201222 containerd[1597]: time="2025-09-04T00:03:11.201163248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 4 00:03:11.201222 containerd[1597]: time="2025-09-04T00:03:11.201199716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 4 00:03:11.201298 containerd[1597]: time="2025-09-04T00:03:11.201228718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 4 00:03:11.201298 containerd[1597]: time="2025-09-04T00:03:11.201251935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 4 00:03:11.201363 containerd[1597]: time="2025-09-04T00:03:11.201315603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 4 00:03:11.201363 containerd[1597]: time="2025-09-04T00:03:11.201332385Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 4 00:03:11.201416 containerd[1597]: time="2025-09-04T00:03:11.201370386Z" level=info 
msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 00:03:11.201576 containerd[1597]: time="2025-09-04T00:03:11.201499668Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 00:03:11.202038 containerd[1597]: time="2025-09-04T00:03:11.201991161Z" level=info msg="Start snapshots syncer"
Sep 4 00:03:11.202102 containerd[1597]: time="2025-09-04T00:03:11.202058597Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 00:03:11.202500 containerd[1597]: time="2025-09-04T00:03:11.202422331Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 00:03:11.202689 containerd[1597]: time="2025-09-04T00:03:11.202514713Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 00:03:11.202689 containerd[1597]: time="2025-09-04T00:03:11.202620460Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 4 00:03:11.202787 containerd[1597]: time="2025-09-04T00:03:11.202753283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 4 00:03:11.205272 containerd[1597]: time="2025-09-04T00:03:11.205196054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 4 00:03:11.205357 containerd[1597]: time="2025-09-04T00:03:11.205232902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205397467Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205423875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205438227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205461526Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205502626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205518636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205540926Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205624743Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205650801Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205663546Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205678424Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205689646Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205702299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 4 00:03:11.206084 containerd[1597]: time="2025-09-04T00:03:11.205715570Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 4 00:03:11.206460 containerd[1597]: time="2025-09-04T00:03:11.205741227Z" level=info msg="runtime interface created"
Sep 4 00:03:11.206460 containerd[1597]: time="2025-09-04T00:03:11.205748651Z" level=info msg="created NRI interface"
Sep 4 00:03:11.206460 containerd[1597]: time="2025-09-04T00:03:11.205775707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 4 00:03:11.206460 containerd[1597]: time="2025-09-04T00:03:11.205811711Z" level=info msg="Connect containerd service"
Sep 4 00:03:11.206460 containerd[1597]: time="2025-09-04T00:03:11.205862026Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 00:03:11.207630 containerd[1597]: time="2025-09-04T00:03:11.207432983Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 00:03:11.214692 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 00:03:11.218923 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 00:03:11.223248 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 00:03:11.224865 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 00:03:11.338395 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 33778 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:03:11.342466 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:03:11.365714 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 00:03:11.479160 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 00:03:11.503904 systemd-logind[1575]: New session 1 of user core.
Sep 4 00:03:11.587799 tar[1595]: linux-amd64/LICENSE
Sep 4 00:03:11.587799 tar[1595]: linux-amd64/README.md
Sep 4 00:03:11.593124 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 00:03:11.814726 systemd-networkd[1489]: eth0: Gained IPv6LL
Sep 4 00:03:11.818154 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 00:03:11.821206 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 00:03:11.826442 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 00:03:11.833391 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 00:03:11.838106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:03:11.843105 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 00:03:11.853183 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 00:03:11.869225 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 00:03:11.881088 systemd-logind[1575]: New session c1 of user core.
Sep 4 00:03:11.891043 containerd[1597]: time="2025-09-04T00:03:11.890961288Z" level=info msg="Start subscribing containerd event"
Sep 4 00:03:11.892550 containerd[1597]: time="2025-09-04T00:03:11.892343167Z" level=info msg="Start recovering state"
Sep 4 00:03:11.892945 containerd[1597]: time="2025-09-04T00:03:11.892777899Z" level=info msg="Start event monitor"
Sep 4 00:03:11.893031 containerd[1597]: time="2025-09-04T00:03:11.892954057Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 00:03:11.893126 containerd[1597]: time="2025-09-04T00:03:11.893089939Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 00:03:11.893166 containerd[1597]: time="2025-09-04T00:03:11.892962057Z" level=info msg="Start cni network conf syncer for default"
Sep 4 00:03:11.893166 containerd[1597]: time="2025-09-04T00:03:11.893155264Z" level=info msg="Start streaming server"
Sep 4 00:03:11.893236 containerd[1597]: time="2025-09-04T00:03:11.893173909Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 4 00:03:11.893236 containerd[1597]: time="2025-09-04T00:03:11.893187943Z" level=info msg="runtime interface starting up..."
Sep 4 00:03:11.893236 containerd[1597]: time="2025-09-04T00:03:11.893199123Z" level=info msg="starting plugins..."
Sep 4 00:03:11.893236 containerd[1597]: time="2025-09-04T00:03:11.893232904Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 4 00:03:11.893541 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 00:03:11.893702 containerd[1597]: time="2025-09-04T00:03:11.893670087Z" level=info msg="containerd successfully booted in 0.884728s"
Sep 4 00:03:11.898988 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 00:03:11.903150 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 00:03:11.903489 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 00:03:11.908362 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 00:03:12.123565 systemd[1677]: Queued start job for default target default.target.
Sep 4 00:03:12.226802 systemd[1677]: Created slice app.slice - User Application Slice.
Sep 4 00:03:12.226864 systemd[1677]: Reached target paths.target - Paths.
Sep 4 00:03:12.228231 systemd[1677]: Reached target timers.target - Timers.
Sep 4 00:03:12.232956 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 00:03:12.260361 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 00:03:12.260563 systemd[1677]: Reached target sockets.target - Sockets.
Sep 4 00:03:12.260639 systemd[1677]: Reached target basic.target - Basic System.
Sep 4 00:03:12.260694 systemd[1677]: Reached target default.target - Main User Target.
Sep 4 00:03:12.260738 systemd[1677]: Startup finished in 357ms.
Sep 4 00:03:12.261719 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 00:03:12.281115 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 00:03:12.383065 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:33786.service - OpenSSH per-connection server daemon (10.0.0.1:33786).
Sep 4 00:03:12.487337 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 33786 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:03:12.490229 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:03:12.516701 systemd-logind[1575]: New session 2 of user core.
Sep 4 00:03:12.528160 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 00:03:12.672175 sshd[1714]: Connection closed by 10.0.0.1 port 33786
Sep 4 00:03:12.670150 sshd-session[1712]: pam_unix(sshd:session): session closed for user core
Sep 4 00:03:12.701551 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:33786.service: Deactivated successfully.
Sep 4 00:03:12.704896 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 00:03:12.713514 systemd-logind[1575]: Session 2 logged out. Waiting for processes to exit.
Sep 4 00:03:12.722200 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:33796.service - OpenSSH per-connection server daemon (10.0.0.1:33796).
Sep 4 00:03:12.732755 systemd-logind[1575]: Removed session 2.
Sep 4 00:03:12.816807 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 33796 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:03:12.818186 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:03:12.835193 systemd-logind[1575]: New session 3 of user core.
Sep 4 00:03:12.851824 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 00:03:13.036533 sshd[1722]: Connection closed by 10.0.0.1 port 33796
Sep 4 00:03:13.041359 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
Sep 4 00:03:13.052757 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:33796.service: Deactivated successfully.
Sep 4 00:03:13.056214 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 00:03:13.061376 systemd-logind[1575]: Session 3 logged out. Waiting for processes to exit.
Sep 4 00:03:13.067112 systemd-logind[1575]: Removed session 3.
Sep 4 00:03:15.106377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:15.108248 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 00:03:15.109760 systemd[1]: Startup finished in 3.069s (kernel) + 10.102s (initrd) + 9.440s (userspace) = 22.612s.
Sep 4 00:03:15.146377 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:03:15.889972 kubelet[1732]: E0904 00:03:15.889891 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:03:15.894576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:03:15.894771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:03:15.895188 systemd[1]: kubelet.service: Consumed 3.077s CPU time, 266.1M memory peak.
Sep 4 00:03:23.170137 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:50310.service - OpenSSH per-connection server daemon (10.0.0.1:50310).
Sep 4 00:03:23.224089 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 50310 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:03:23.225924 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:03:23.231121 systemd-logind[1575]: New session 4 of user core.
Sep 4 00:03:23.238985 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 00:03:23.297835 sshd[1747]: Connection closed by 10.0.0.1 port 50310
Sep 4 00:03:23.298306 sshd-session[1745]: pam_unix(sshd:session): session closed for user core
Sep 4 00:03:23.314135 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:50310.service: Deactivated successfully.
Sep 4 00:03:23.316992 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 00:03:23.318194 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit.
Sep 4 00:03:23.322593 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:50320.service - OpenSSH per-connection server daemon (10.0.0.1:50320).
Sep 4 00:03:23.323382 systemd-logind[1575]: Removed session 4.
Sep 4 00:03:23.387359 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 50320 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:03:23.388654 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:03:23.392896 systemd-logind[1575]: New session 5 of user core.
Sep 4 00:03:23.403922 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 00:03:23.453777 sshd[1755]: Connection closed by 10.0.0.1 port 50320
Sep 4 00:03:23.454076 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Sep 4 00:03:23.474035 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:50320.service: Deactivated successfully.
Sep 4 00:03:23.475978 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 00:03:23.476869 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit.
Sep 4 00:03:23.479703 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:50336.service - OpenSSH per-connection server daemon (10.0.0.1:50336).
Sep 4 00:03:23.480632 systemd-logind[1575]: Removed session 5.
Sep 4 00:03:23.541077 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 50336 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:03:23.542520 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:03:23.548065 systemd-logind[1575]: New session 6 of user core.
Sep 4 00:03:23.561997 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 00:03:23.617406 sshd[1763]: Connection closed by 10.0.0.1 port 50336
Sep 4 00:03:23.617894 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Sep 4 00:03:23.631858 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:50336.service: Deactivated successfully.
Sep 4 00:03:23.634018 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 00:03:23.634782 systemd-logind[1575]: Session 6 logged out. Waiting for processes to exit.
Sep 4 00:03:23.638253 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:50346.service - OpenSSH per-connection server daemon (10.0.0.1:50346).
Sep 4 00:03:23.638968 systemd-logind[1575]: Removed session 6.
Sep 4 00:03:23.685251 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 50346 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:03:23.686920 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:03:23.691950 systemd-logind[1575]: New session 7 of user core.
Sep 4 00:03:23.701917 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 00:03:23.760489 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 00:03:23.760848 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:03:23.791724 sudo[1772]: pam_unix(sudo:session): session closed for user root
Sep 4 00:03:23.793518 sshd[1771]: Connection closed by 10.0.0.1 port 50346
Sep 4 00:03:23.793911 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Sep 4 00:03:23.806601 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:50346.service: Deactivated successfully.
Sep 4 00:03:23.808396 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 00:03:23.809226 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit.
Sep 4 00:03:23.811977 systemd[1]: Started sshd@7-10.0.0.100:22-10.0.0.1:50350.service - OpenSSH per-connection server daemon (10.0.0.1:50350).
Sep 4 00:03:23.812671 systemd-logind[1575]: Removed session 7.
Sep 4 00:03:23.862396 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 50350 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:03:23.864101 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:03:23.868454 systemd-logind[1575]: New session 8 of user core.
Sep 4 00:03:23.878975 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 00:03:23.937193 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 00:03:23.937634 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:03:23.946865 sudo[1782]: pam_unix(sudo:session): session closed for user root
Sep 4 00:03:23.955329 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 00:03:23.955763 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:03:23.970894 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 00:03:24.038361 augenrules[1804]: No rules
Sep 4 00:03:24.040432 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 00:03:24.040721 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 00:03:24.042054 sudo[1781]: pam_unix(sudo:session): session closed for user root
Sep 4 00:03:24.043693 sshd[1780]: Connection closed by 10.0.0.1 port 50350
Sep 4 00:03:24.044139 sshd-session[1778]: pam_unix(sshd:session): session closed for user core
Sep 4 00:03:24.054619 systemd[1]: sshd@7-10.0.0.100:22-10.0.0.1:50350.service: Deactivated successfully.
Sep 4 00:03:24.057322 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 00:03:24.058621 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit.
Sep 4 00:03:24.063407 systemd[1]: Started sshd@8-10.0.0.100:22-10.0.0.1:50366.service - OpenSSH per-connection server daemon (10.0.0.1:50366).
Sep 4 00:03:24.064438 systemd-logind[1575]: Removed session 8.
Sep 4 00:03:24.119318 sshd[1813]: Accepted publickey for core from 10.0.0.1 port 50366 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:03:24.121744 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:03:24.129204 systemd-logind[1575]: New session 9 of user core.
Sep 4 00:03:24.139171 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 00:03:24.195051 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 00:03:24.195367 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 00:03:24.645937 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 00:03:24.659299 (dockerd)[1836]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 00:03:24.892069 dockerd[1836]: time="2025-09-04T00:03:24.891983736Z" level=info msg="Starting up"
Sep 4 00:03:24.893465 dockerd[1836]: time="2025-09-04T00:03:24.893437554Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 4 00:03:25.376390 dockerd[1836]: time="2025-09-04T00:03:25.376308918Z" level=info msg="Loading containers: start."
Sep 4 00:03:25.391815 kernel: Initializing XFRM netlink socket
Sep 4 00:03:25.835451 systemd-networkd[1489]: docker0: Link UP
Sep 4 00:03:25.850931 dockerd[1836]: time="2025-09-04T00:03:25.850836647Z" level=info msg="Loading containers: done."
Sep 4 00:03:25.888386 dockerd[1836]: time="2025-09-04T00:03:25.888271847Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 00:03:25.888641 dockerd[1836]: time="2025-09-04T00:03:25.888412789Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 4 00:03:25.888641 dockerd[1836]: time="2025-09-04T00:03:25.888584408Z" level=info msg="Initializing buildkit"
Sep 4 00:03:25.983906 dockerd[1836]: time="2025-09-04T00:03:25.983589715Z" level=info msg="Completed buildkit initialization"
Sep 4 00:03:26.025525 dockerd[1836]: time="2025-09-04T00:03:26.025411092Z" level=info msg="Daemon has completed initialization"
Sep 4 00:03:26.029231 dockerd[1836]: time="2025-09-04T00:03:26.025739456Z" level=info msg="API listen on /run/docker.sock"
Sep 4 00:03:26.026966 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 00:03:26.031938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 00:03:26.038691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:03:26.378588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:26.395183 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:03:26.436843 kubelet[2054]: E0904 00:03:26.436733 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:03:26.443671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:03:26.443932 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:03:26.444310 systemd[1]: kubelet.service: Consumed 294ms CPU time, 110.9M memory peak.
Sep 4 00:03:27.011564 containerd[1597]: time="2025-09-04T00:03:27.011479508Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 4 00:03:28.151017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966221959.mount: Deactivated successfully.
Sep 4 00:03:31.613066 containerd[1597]: time="2025-09-04T00:03:31.612818917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:31.613967 containerd[1597]: time="2025-09-04T00:03:31.613749987Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631"
Sep 4 00:03:31.615409 containerd[1597]: time="2025-09-04T00:03:31.615336002Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:31.620219 containerd[1597]: time="2025-09-04T00:03:31.619646637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:31.621062 containerd[1597]: time="2025-09-04T00:03:31.621006026Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 4.609456655s"
Sep 4 00:03:31.621126 containerd[1597]: time="2025-09-04T00:03:31.621088989Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\""
Sep 4 00:03:31.623038 containerd[1597]: time="2025-09-04T00:03:31.622979104Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 4 00:03:34.639645 containerd[1597]: time="2025-09-04T00:03:34.639561612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:34.640449 containerd[1597]: time="2025-09-04T00:03:34.640399704Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681"
Sep 4 00:03:34.642129 containerd[1597]: time="2025-09-04T00:03:34.642064512Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:34.645247 containerd[1597]: time="2025-09-04T00:03:34.645186122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:34.646332 containerd[1597]: time="2025-09-04T00:03:34.646293262Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 3.023277062s"
Sep 4 00:03:34.646332 containerd[1597]: time="2025-09-04T00:03:34.646330228Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 4 00:03:34.647160 containerd[1597]: time="2025-09-04T00:03:34.647120288Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 4 00:03:36.528290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 00:03:36.531239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:03:36.989313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:37.011213 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:03:37.315332 kubelet[2133]: E0904 00:03:37.313962 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:03:37.332367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:03:37.332627 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:03:37.333387 systemd[1]: kubelet.service: Consumed 399ms CPU time, 110.5M memory peak.
Sep 4 00:03:38.442842 containerd[1597]: time="2025-09-04T00:03:38.442748764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:38.443958 containerd[1597]: time="2025-09-04T00:03:38.443893718Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427"
Sep 4 00:03:38.445161 containerd[1597]: time="2025-09-04T00:03:38.445107825Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:38.448114 containerd[1597]: time="2025-09-04T00:03:38.448071842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:38.448985 containerd[1597]: time="2025-09-04T00:03:38.448941333Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 3.8017925s"
Sep 4 00:03:38.448985 containerd[1597]: time="2025-09-04T00:03:38.448976576Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 4 00:03:38.449639 containerd[1597]: time="2025-09-04T00:03:38.449588752Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 4 00:03:40.018827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620140698.mount: Deactivated successfully.
Sep 4 00:03:41.052882 containerd[1597]: time="2025-09-04T00:03:41.052812509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:41.053572 containerd[1597]: time="2025-09-04T00:03:41.053507099Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255"
Sep 4 00:03:41.054780 containerd[1597]: time="2025-09-04T00:03:41.054752800Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:41.056651 containerd[1597]: time="2025-09-04T00:03:41.056607489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:41.057175 containerd[1597]: time="2025-09-04T00:03:41.057125530Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.607484414s"
Sep 4 00:03:41.057175 containerd[1597]: time="2025-09-04T00:03:41.057172011Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 4 00:03:41.057679 containerd[1597]: time="2025-09-04T00:03:41.057657554Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 00:03:41.660633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752179835.mount: Deactivated successfully.
Sep 4 00:03:42.928476 containerd[1597]: time="2025-09-04T00:03:42.928405816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:42.929226 containerd[1597]: time="2025-09-04T00:03:42.929199527Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 4 00:03:42.930522 containerd[1597]: time="2025-09-04T00:03:42.930467629Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:42.933049 containerd[1597]: time="2025-09-04T00:03:42.933011366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:03:42.933794 containerd[1597]: time="2025-09-04T00:03:42.933763802Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.87608018s"
Sep 4 00:03:42.933831 containerd[1597]: time="2025-09-04T00:03:42.933811979Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 4 00:03:42.934372 containerd[1597]: time="2025-09-04T00:03:42.934295988Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 00:03:43.428896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1276463752.mount: Deactivated successfully.
Sep 4 00:03:43.435088 containerd[1597]: time="2025-09-04T00:03:43.435040907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:03:43.435804 containerd[1597]: time="2025-09-04T00:03:43.435773121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 4 00:03:43.437029 containerd[1597]: time="2025-09-04T00:03:43.436980415Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:03:43.439106 containerd[1597]: time="2025-09-04T00:03:43.439067456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 00:03:43.439898 containerd[1597]: time="2025-09-04T00:03:43.439860299Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 505.53732ms" Sep 4 00:03:43.439938 containerd[1597]: time="2025-09-04T00:03:43.439896257Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 00:03:43.440470 containerd[1597]: time="2025-09-04T00:03:43.440440727Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 4 00:03:44.743107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564638482.mount: Deactivated successfully. Sep 4 00:03:47.235164 containerd[1597]: time="2025-09-04T00:03:47.234927598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:47.235777 containerd[1597]: time="2025-09-04T00:03:47.235720723Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 4 00:03:47.237273 containerd[1597]: time="2025-09-04T00:03:47.237236136Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:47.239969 containerd[1597]: time="2025-09-04T00:03:47.239918911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:03:47.241195 containerd[1597]: time="2025-09-04T00:03:47.241157547Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 3.800687445s" Sep 4 00:03:47.241264 containerd[1597]: time="2025-09-04T00:03:47.241196384Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 4 00:03:47.465860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 00:03:47.467685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:03:47.693005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:03:47.698162 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:03:47.759400 kubelet[2281]: E0904 00:03:47.759327 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:03:47.764402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:03:47.764619 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:03:47.765227 systemd[1]: kubelet.service: Consumed 248ms CPU time, 109.7M memory peak. Sep 4 00:03:50.150026 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:03:50.150294 systemd[1]: kubelet.service: Consumed 248ms CPU time, 109.7M memory peak. Sep 4 00:03:50.161964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:03:50.231249 systemd[1]: Reload requested from client PID 2311 ('systemctl') (unit session-9.scope)... Sep 4 00:03:50.231676 systemd[1]: Reloading... Sep 4 00:03:50.486842 zram_generator::config[2358]: No configuration found. 
Sep 4 00:03:51.397143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:03:51.638196 systemd[1]: Reloading finished in 1405 ms.
Sep 4 00:03:51.772817 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 00:03:51.772956 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 00:03:51.773373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:51.773437 systemd[1]: kubelet.service: Consumed 229ms CPU time, 98.3M memory peak.
Sep 4 00:03:51.787020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:03:52.226546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:03:52.237288 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 00:03:52.825199 kubelet[2401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:03:52.825199 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 00:03:52.825199 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 00:03:52.828903 kubelet[2401]: I0904 00:03:52.826001 2401 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 00:03:53.626818 kubelet[2401]: I0904 00:03:53.626163 2401 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 4 00:03:53.626818 kubelet[2401]: I0904 00:03:53.626222 2401 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 00:03:53.626818 kubelet[2401]: I0904 00:03:53.626726 2401 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 4 00:03:53.666017 kubelet[2401]: E0904 00:03:53.665933 2401 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:03:53.667979 kubelet[2401]: I0904 00:03:53.667947 2401 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 00:03:53.682467 kubelet[2401]: I0904 00:03:53.682066 2401 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 00:03:53.692213 kubelet[2401]: I0904 00:03:53.692148 2401 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 00:03:53.693066 kubelet[2401]: I0904 00:03:53.693024 2401 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 4 00:03:53.693295 kubelet[2401]: I0904 00:03:53.693239 2401 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 00:03:53.693980 kubelet[2401]: I0904 00:03:53.693282 2401 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 00:03:53.693980 kubelet[2401]: I0904 00:03:53.693565 2401 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 00:03:53.693980 kubelet[2401]: I0904 00:03:53.693580 2401 container_manager_linux.go:300] "Creating device plugin manager"
Sep 4 00:03:53.693980 kubelet[2401]: I0904 00:03:53.693767 2401 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:03:53.699850 kubelet[2401]: I0904 00:03:53.699750 2401 kubelet.go:408] "Attempting to sync node with API server"
Sep 4 00:03:53.699850 kubelet[2401]: I0904 00:03:53.699840 2401 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 00:03:53.700117 kubelet[2401]: I0904 00:03:53.699909 2401 kubelet.go:314] "Adding apiserver pod source"
Sep 4 00:03:53.700117 kubelet[2401]: I0904 00:03:53.699958 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 00:03:53.703693 kubelet[2401]: W0904 00:03:53.703516 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused
Sep 4 00:03:53.703693 kubelet[2401]: E0904 00:03:53.703661 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:03:53.704051 kubelet[2401]: W0904 00:03:53.701302 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused
Sep 4 00:03:53.704051 kubelet[2401]: E0904 00:03:53.703932 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:03:53.708507 kubelet[2401]: I0904 00:03:53.708436 2401 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 4 00:03:53.709268 kubelet[2401]: I0904 00:03:53.709237 2401 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 00:03:53.710466 kubelet[2401]: W0904 00:03:53.710345 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 00:03:53.714445 kubelet[2401]: I0904 00:03:53.714385 2401 server.go:1274] "Started kubelet"
Sep 4 00:03:53.714952 kubelet[2401]: I0904 00:03:53.714820 2401 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 00:03:53.715723 kubelet[2401]: I0904 00:03:53.715295 2401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 00:03:53.715805 kubelet[2401]: I0904 00:03:53.715743 2401 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 00:03:53.716970 kubelet[2401]: I0904 00:03:53.716945 2401 server.go:449] "Adding debug handlers to kubelet server"
Sep 4 00:03:53.718630 kubelet[2401]: I0904 00:03:53.718282 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 00:03:53.720028 kubelet[2401]: I0904 00:03:53.719681 2401 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 4 00:03:53.720028 kubelet[2401]: I0904 00:03:53.719776 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 00:03:53.722342 kubelet[2401]: W0904 00:03:53.721721 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused
Sep 4 00:03:53.724357 kubelet[2401]: E0904 00:03:53.723865 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:03:53.724357 kubelet[2401]: I0904 00:03:53.723293 2401 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 00:03:53.724357 kubelet[2401]: I0904 00:03:53.723211 2401 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 4 00:03:53.724357 kubelet[2401]: E0904 00:03:53.724010 2401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 00:03:53.724357 kubelet[2401]: E0904 00:03:53.724295 2401 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 00:03:53.725288 kubelet[2401]: E0904 00:03:53.724631 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="200ms"
Sep 4 00:03:53.725288 kubelet[2401]: E0904 00:03:53.722630 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861eb78ccb67159 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 00:03:53.714315609 +0000 UTC m=+1.465972691,LastTimestamp:2025-09-04 00:03:53.714315609 +0000 UTC m=+1.465972691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 00:03:53.725613 kubelet[2401]: I0904 00:03:53.725377 2401 factory.go:221] Registration of the systemd container factory successfully
Sep 4 00:03:53.725613 kubelet[2401]: I0904 00:03:53.725470 2401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 00:03:53.727589 kubelet[2401]: I0904 00:03:53.727545 2401 factory.go:221] Registration of the containerd container factory successfully
Sep 4 00:03:53.748589 kubelet[2401]: I0904 00:03:53.748536 2401 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 00:03:53.748751 kubelet[2401]: I0904 00:03:53.748621 2401 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 00:03:53.748751 kubelet[2401]: I0904 00:03:53.748655 2401 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 00:03:53.750147 kubelet[2401]: I0904 00:03:53.750082 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 00:03:53.752460 kubelet[2401]: I0904 00:03:53.752404 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 00:03:53.752567 kubelet[2401]: I0904 00:03:53.752474 2401 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 00:03:53.752567 kubelet[2401]: I0904 00:03:53.752514 2401 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 4 00:03:53.752657 kubelet[2401]: E0904 00:03:53.752581 2401 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 00:03:53.753406 kubelet[2401]: W0904 00:03:53.753362 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused
Sep 4 00:03:53.753472 kubelet[2401]: E0904 00:03:53.753414 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 4 00:03:53.757279 kubelet[2401]: I0904 00:03:53.757228 2401 policy_none.go:49] "None policy: Start"
Sep 4 00:03:53.759608 kubelet[2401]: I0904 00:03:53.759551 2401 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 00:03:53.759608 kubelet[2401]: I0904 00:03:53.759589 2401 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 00:03:53.780148 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 00:03:53.805049 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 00:03:53.810137 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 00:03:53.820093 kubelet[2401]: I0904 00:03:53.820051 2401 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 00:03:53.820355 kubelet[2401]: I0904 00:03:53.820342 2401 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 00:03:53.820402 kubelet[2401]: I0904 00:03:53.820363 2401 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 00:03:53.820892 kubelet[2401]: I0904 00:03:53.820603 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 00:03:53.821570 kubelet[2401]: E0904 00:03:53.821545 2401 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 4 00:03:53.861746 systemd[1]: Created slice kubepods-burstable-pod3e9f26049cb2ccb192b1525c6c241433.slice - libcontainer container kubepods-burstable-pod3e9f26049cb2ccb192b1525c6c241433.slice.
Sep 4 00:03:53.894893 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice.
Sep 4 00:03:53.921316 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice.
Sep 4 00:03:53.921971 kubelet[2401]: I0904 00:03:53.921894 2401 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 4 00:03:53.922344 kubelet[2401]: E0904 00:03:53.922308 2401 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost"
Sep 4 00:03:53.925078 kubelet[2401]: E0904 00:03:53.925031 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="400ms"
Sep 4 00:03:54.024838 kubelet[2401]: I0904 00:03:54.024659 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 00:03:54.024838 kubelet[2401]: I0904 00:03:54.024728 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e9f26049cb2ccb192b1525c6c241433-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e9f26049cb2ccb192b1525c6c241433\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 00:03:54.024838 kubelet[2401]: I0904 00:03:54.024817 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:03:54.024838 kubelet[2401]: I0904 00:03:54.024838 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:03:54.024838 kubelet[2401]: I0904 00:03:54.024854 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:03:54.025277 kubelet[2401]: I0904 00:03:54.024871 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e9f26049cb2ccb192b1525c6c241433-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e9f26049cb2ccb192b1525c6c241433\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 00:03:54.025277 kubelet[2401]: I0904 00:03:54.024886 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e9f26049cb2ccb192b1525c6c241433-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e9f26049cb2ccb192b1525c6c241433\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 00:03:54.025277 kubelet[2401]: I0904 00:03:54.024920 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:03:54.025277 kubelet[2401]: I0904 00:03:54.024939 2401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 00:03:54.124694 kubelet[2401]: I0904 00:03:54.124653 2401 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 4 00:03:54.125095 kubelet[2401]: E0904 00:03:54.125052 2401 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost"
Sep 4 00:03:54.192823 kubelet[2401]: E0904 00:03:54.192656 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:03:54.193566 containerd[1597]: time="2025-09-04T00:03:54.193525114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e9f26049cb2ccb192b1525c6c241433,Namespace:kube-system,Attempt:0,}"
Sep 4 00:03:54.219169 kubelet[2401]: E0904 00:03:54.219124 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:03:54.219742 containerd[1597]: time="2025-09-04T00:03:54.219700774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}"
Sep 4 00:03:54.224176 kubelet[2401]: E0904 00:03:54.224149 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:03:54.224523 containerd[1597]: time="2025-09-04T00:03:54.224493115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}"
Sep 4 00:03:54.323078 containerd[1597]: time="2025-09-04T00:03:54.323012378Z" level=info msg="connecting to shim 745b39f8b47aefa4f04558f288e0a016344910a388aec2c46b519f0201f14cdf" address="unix:///run/containerd/s/171b27328353c496cff9a24c57fbc83c2c0b6ed62ba3da95761f179324fc3227" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:03:54.323717 containerd[1597]: time="2025-09-04T00:03:54.323616473Z" level=info msg="connecting to shim 4cdaddc5c9e10e8ab11926593ef99a02123012a7ec9fe3519c3378f27d4d6822" address="unix:///run/containerd/s/0e435aefd0fa051cf18412eff8b5dadcb7315afadd25775f8e748910cd50d56c" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:03:54.326253 kubelet[2401]: E0904 00:03:54.326202 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="800ms"
Sep 4 00:03:54.395018 containerd[1597]: time="2025-09-04T00:03:54.394937096Z" level=info msg="connecting to shim dcffe2a30a7d5eb7712810812e21f19dfd87d2c15d213c1fa486c5fbb33fa873" address="unix:///run/containerd/s/b1dfd517ca0e1fc7f31f79b45cfe338dc8bc878c60d2a4a54d70352860072073" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:03:54.466062 systemd[1]: Started cri-containerd-4cdaddc5c9e10e8ab11926593ef99a02123012a7ec9fe3519c3378f27d4d6822.scope - libcontainer container 4cdaddc5c9e10e8ab11926593ef99a02123012a7ec9fe3519c3378f27d4d6822.
Sep 4 00:03:54.472721 systemd[1]: Started cri-containerd-745b39f8b47aefa4f04558f288e0a016344910a388aec2c46b519f0201f14cdf.scope - libcontainer container 745b39f8b47aefa4f04558f288e0a016344910a388aec2c46b519f0201f14cdf.
Sep 4 00:03:54.476123 systemd[1]: Started cri-containerd-dcffe2a30a7d5eb7712810812e21f19dfd87d2c15d213c1fa486c5fbb33fa873.scope - libcontainer container dcffe2a30a7d5eb7712810812e21f19dfd87d2c15d213c1fa486c5fbb33fa873.
Sep 4 00:03:54.553825 kubelet[2401]: I0904 00:03:54.553733 2401 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 4 00:03:54.554266 kubelet[2401]: E0904 00:03:54.554228 2401 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost"
Sep 4 00:03:54.598499 containerd[1597]: time="2025-09-04T00:03:54.598454580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"745b39f8b47aefa4f04558f288e0a016344910a388aec2c46b519f0201f14cdf\""
Sep 4 00:03:54.600562 kubelet[2401]: E0904 00:03:54.600527 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:03:54.602995 containerd[1597]: time="2025-09-04T00:03:54.602966750Z" level=info msg="CreateContainer within sandbox \"745b39f8b47aefa4f04558f288e0a016344910a388aec2c46b519f0201f14cdf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 00:03:54.612834 containerd[1597]: time="2025-09-04T00:03:54.612800518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e9f26049cb2ccb192b1525c6c241433,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cdaddc5c9e10e8ab11926593ef99a02123012a7ec9fe3519c3378f27d4d6822\""
Sep 4 00:03:54.613613 kubelet[2401]: E0904 00:03:54.613582 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1
8.8.8.8" Sep 4 00:03:54.616020 containerd[1597]: time="2025-09-04T00:03:54.615988509Z" level=info msg="CreateContainer within sandbox \"4cdaddc5c9e10e8ab11926593ef99a02123012a7ec9fe3519c3378f27d4d6822\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 00:03:54.619016 containerd[1597]: time="2025-09-04T00:03:54.618983439Z" level=info msg="Container 001279e45335bb19a0266de19d4209474e5f5a9914106b21019494b3e84bb79c: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:03:54.622851 containerd[1597]: time="2025-09-04T00:03:54.622817513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcffe2a30a7d5eb7712810812e21f19dfd87d2c15d213c1fa486c5fbb33fa873\"" Sep 4 00:03:54.623495 kubelet[2401]: E0904 00:03:54.623471 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:54.625096 containerd[1597]: time="2025-09-04T00:03:54.625058729Z" level=info msg="CreateContainer within sandbox \"dcffe2a30a7d5eb7712810812e21f19dfd87d2c15d213c1fa486c5fbb33fa873\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 00:03:54.627768 containerd[1597]: time="2025-09-04T00:03:54.627705909Z" level=info msg="Container d22726345579674069b5e329b2edbef094754e1ba22cace3c80d833700a3114a: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:03:54.628048 kubelet[2401]: W0904 00:03:54.627676 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 4 00:03:54.628048 kubelet[2401]: E0904 00:03:54.627744 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:03:54.632378 containerd[1597]: time="2025-09-04T00:03:54.632052986Z" level=info msg="CreateContainer within sandbox \"745b39f8b47aefa4f04558f288e0a016344910a388aec2c46b519f0201f14cdf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"001279e45335bb19a0266de19d4209474e5f5a9914106b21019494b3e84bb79c\"" Sep 4 00:03:54.632999 containerd[1597]: time="2025-09-04T00:03:54.632969475Z" level=info msg="StartContainer for \"001279e45335bb19a0266de19d4209474e5f5a9914106b21019494b3e84bb79c\"" Sep 4 00:03:54.634850 containerd[1597]: time="2025-09-04T00:03:54.634826098Z" level=info msg="connecting to shim 001279e45335bb19a0266de19d4209474e5f5a9914106b21019494b3e84bb79c" address="unix:///run/containerd/s/171b27328353c496cff9a24c57fbc83c2c0b6ed62ba3da95761f179324fc3227" protocol=ttrpc version=3 Sep 4 00:03:54.641581 containerd[1597]: time="2025-09-04T00:03:54.640932642Z" level=info msg="Container 4f19881474ffa890b920ac4139a199273b149b2e7fbb466b119a7943c2ade220: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:03:54.644541 containerd[1597]: time="2025-09-04T00:03:54.644494543Z" level=info msg="CreateContainer within sandbox \"4cdaddc5c9e10e8ab11926593ef99a02123012a7ec9fe3519c3378f27d4d6822\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d22726345579674069b5e329b2edbef094754e1ba22cace3c80d833700a3114a\"" Sep 4 00:03:54.645361 containerd[1597]: time="2025-09-04T00:03:54.645322375Z" level=info msg="StartContainer for \"d22726345579674069b5e329b2edbef094754e1ba22cace3c80d833700a3114a\"" Sep 4 00:03:54.646646 containerd[1597]: time="2025-09-04T00:03:54.646587212Z" level=info msg="connecting to shim d22726345579674069b5e329b2edbef094754e1ba22cace3c80d833700a3114a" 
address="unix:///run/containerd/s/0e435aefd0fa051cf18412eff8b5dadcb7315afadd25775f8e748910cd50d56c" protocol=ttrpc version=3 Sep 4 00:03:54.648689 containerd[1597]: time="2025-09-04T00:03:54.648520481Z" level=info msg="CreateContainer within sandbox \"dcffe2a30a7d5eb7712810812e21f19dfd87d2c15d213c1fa486c5fbb33fa873\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4f19881474ffa890b920ac4139a199273b149b2e7fbb466b119a7943c2ade220\"" Sep 4 00:03:54.649059 containerd[1597]: time="2025-09-04T00:03:54.649032175Z" level=info msg="StartContainer for \"4f19881474ffa890b920ac4139a199273b149b2e7fbb466b119a7943c2ade220\"" Sep 4 00:03:54.651077 containerd[1597]: time="2025-09-04T00:03:54.651052333Z" level=info msg="connecting to shim 4f19881474ffa890b920ac4139a199273b149b2e7fbb466b119a7943c2ade220" address="unix:///run/containerd/s/b1dfd517ca0e1fc7f31f79b45cfe338dc8bc878c60d2a4a54d70352860072073" protocol=ttrpc version=3 Sep 4 00:03:54.656967 systemd[1]: Started cri-containerd-001279e45335bb19a0266de19d4209474e5f5a9914106b21019494b3e84bb79c.scope - libcontainer container 001279e45335bb19a0266de19d4209474e5f5a9914106b21019494b3e84bb79c. 
Sep 4 00:03:54.676832 kubelet[2401]: W0904 00:03:54.673376 2401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 4 00:03:54.676832 kubelet[2401]: E0904 00:03:54.673460 2401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:03:54.728919 systemd[1]: Started cri-containerd-4f19881474ffa890b920ac4139a199273b149b2e7fbb466b119a7943c2ade220.scope - libcontainer container 4f19881474ffa890b920ac4139a199273b149b2e7fbb466b119a7943c2ade220. Sep 4 00:03:54.730925 systemd[1]: Started cri-containerd-d22726345579674069b5e329b2edbef094754e1ba22cace3c80d833700a3114a.scope - libcontainer container d22726345579674069b5e329b2edbef094754e1ba22cace3c80d833700a3114a. 
Sep 4 00:03:54.764055 containerd[1597]: time="2025-09-04T00:03:54.763977664Z" level=info msg="StartContainer for \"001279e45335bb19a0266de19d4209474e5f5a9914106b21019494b3e84bb79c\" returns successfully" Sep 4 00:03:54.777865 kubelet[2401]: E0904 00:03:54.777157 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:55.020816 containerd[1597]: time="2025-09-04T00:03:55.019521986Z" level=info msg="StartContainer for \"d22726345579674069b5e329b2edbef094754e1ba22cace3c80d833700a3114a\" returns successfully" Sep 4 00:03:55.021130 containerd[1597]: time="2025-09-04T00:03:55.021091702Z" level=info msg="StartContainer for \"4f19881474ffa890b920ac4139a199273b149b2e7fbb466b119a7943c2ade220\" returns successfully" Sep 4 00:03:55.356845 kubelet[2401]: I0904 00:03:55.356659 2401 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 00:03:55.497708 update_engine[1583]: I20250904 00:03:55.497586 1583 update_attempter.cc:509] Updating boot flags... 
Sep 4 00:03:55.805838 kubelet[2401]: E0904 00:03:55.794590 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:55.805838 kubelet[2401]: E0904 00:03:55.795312 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:56.221148 kubelet[2401]: E0904 00:03:56.221072 2401 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 00:03:56.235665 kubelet[2401]: I0904 00:03:56.235616 2401 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 4 00:03:56.235665 kubelet[2401]: E0904 00:03:56.235655 2401 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 4 00:03:56.248966 kubelet[2401]: E0904 00:03:56.248936 2401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:03:56.349870 kubelet[2401]: E0904 00:03:56.349818 2401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:03:56.450531 kubelet[2401]: E0904 00:03:56.450476 2401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 00:03:56.702041 kubelet[2401]: I0904 00:03:56.701909 2401 apiserver.go:52] "Watching apiserver" Sep 4 00:03:56.724390 kubelet[2401]: I0904 00:03:56.724342 2401 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 00:03:56.796361 kubelet[2401]: E0904 00:03:56.796322 2401 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 00:03:56.796521 kubelet[2401]: E0904 00:03:56.796404 2401 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 00:03:56.796521 kubelet[2401]: E0904 00:03:56.796477 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:56.796585 kubelet[2401]: E0904 00:03:56.796571 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:57.802749 kubelet[2401]: E0904 00:03:57.802630 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:57.806377 kubelet[2401]: E0904 00:03:57.806347 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:58.259674 systemd[1]: Reload requested from client PID 2697 ('systemctl') (unit session-9.scope)... Sep 4 00:03:58.262089 systemd[1]: Reloading... Sep 4 00:03:58.452386 zram_generator::config[2743]: No configuration found. Sep 4 00:03:58.605917 kubelet[2401]: E0904 00:03:58.605757 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:58.794691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 4 00:03:58.796857 kubelet[2401]: E0904 00:03:58.796598 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:58.796857 kubelet[2401]: E0904 00:03:58.796734 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:58.797130 kubelet[2401]: E0904 00:03:58.797106 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:03:58.983121 systemd[1]: Reloading finished in 719 ms. Sep 4 00:03:59.022175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:03:59.030675 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 00:03:59.031160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:03:59.031231 systemd[1]: kubelet.service: Consumed 2.137s CPU time, 131.5M memory peak. Sep 4 00:03:59.034173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:03:59.303695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:03:59.324697 (kubelet)[2785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:03:59.411509 kubelet[2785]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:03:59.411509 kubelet[2785]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 4 00:03:59.411509 kubelet[2785]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:03:59.412086 kubelet[2785]: I0904 00:03:59.411582 2785 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 00:03:59.423578 kubelet[2785]: I0904 00:03:59.423512 2785 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 00:03:59.423578 kubelet[2785]: I0904 00:03:59.423550 2785 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 00:03:59.424557 kubelet[2785]: I0904 00:03:59.423878 2785 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 00:03:59.425468 kubelet[2785]: I0904 00:03:59.425441 2785 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 00:03:59.428488 kubelet[2785]: I0904 00:03:59.428434 2785 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:03:59.436815 kubelet[2785]: I0904 00:03:59.436349 2785 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 00:03:59.443943 kubelet[2785]: I0904 00:03:59.443897 2785 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 00:03:59.444522 kubelet[2785]: I0904 00:03:59.444172 2785 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 00:03:59.444522 kubelet[2785]: I0904 00:03:59.444396 2785 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 00:03:59.444870 kubelet[2785]: I0904 00:03:59.444433 2785 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 4 00:03:59.444870 kubelet[2785]: I0904 00:03:59.444681 2785 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 00:03:59.444870 kubelet[2785]: I0904 00:03:59.444696 2785 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 00:03:59.444870 kubelet[2785]: I0904 00:03:59.444733 2785 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:03:59.445154 kubelet[2785]: I0904 00:03:59.444926 2785 kubelet.go:408] "Attempting to sync node with API server" Sep 4 00:03:59.445154 kubelet[2785]: I0904 00:03:59.444944 2785 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 00:03:59.445154 kubelet[2785]: I0904 00:03:59.444984 2785 kubelet.go:314] "Adding apiserver pod source" Sep 4 00:03:59.445154 kubelet[2785]: I0904 00:03:59.444997 2785 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 00:03:59.446573 kubelet[2785]: I0904 00:03:59.446543 2785 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 00:03:59.447255 kubelet[2785]: I0904 00:03:59.447227 2785 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 00:03:59.448596 kubelet[2785]: I0904 00:03:59.448153 2785 server.go:1274] "Started kubelet" Sep 4 00:03:59.449101 kubelet[2785]: I0904 00:03:59.448861 2785 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 00:03:59.450052 kubelet[2785]: I0904 00:03:59.449852 2785 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 00:03:59.451207 kubelet[2785]: I0904 00:03:59.451177 2785 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 00:03:59.452538 kubelet[2785]: I0904 00:03:59.452494 2785 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 00:03:59.456058 kubelet[2785]: 
I0904 00:03:59.455169 2785 server.go:449] "Adding debug handlers to kubelet server" Sep 4 00:03:59.461826 kubelet[2785]: I0904 00:03:59.460644 2785 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 00:03:59.464356 kubelet[2785]: I0904 00:03:59.464331 2785 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 00:03:59.466269 kubelet[2785]: I0904 00:03:59.464685 2785 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 00:03:59.466467 kubelet[2785]: I0904 00:03:59.466423 2785 reconciler.go:26] "Reconciler: start to sync state" Sep 4 00:03:59.473840 kubelet[2785]: I0904 00:03:59.473500 2785 factory.go:221] Registration of the systemd container factory successfully Sep 4 00:03:59.473840 kubelet[2785]: I0904 00:03:59.473644 2785 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 00:03:59.477005 kubelet[2785]: E0904 00:03:59.476958 2785 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 00:03:59.478407 kubelet[2785]: I0904 00:03:59.478379 2785 factory.go:221] Registration of the containerd container factory successfully Sep 4 00:03:59.481844 kubelet[2785]: I0904 00:03:59.481463 2785 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 00:03:59.489029 kubelet[2785]: I0904 00:03:59.488951 2785 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 00:03:59.489029 kubelet[2785]: I0904 00:03:59.489009 2785 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 00:03:59.489029 kubelet[2785]: I0904 00:03:59.489036 2785 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 00:03:59.489382 kubelet[2785]: E0904 00:03:59.489095 2785 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:03:59.573288 kubelet[2785]: I0904 00:03:59.572951 2785 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 00:03:59.573288 kubelet[2785]: I0904 00:03:59.572973 2785 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 00:03:59.573288 kubelet[2785]: I0904 00:03:59.573000 2785 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:03:59.573288 kubelet[2785]: I0904 00:03:59.573215 2785 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 00:03:59.573288 kubelet[2785]: I0904 00:03:59.573230 2785 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 00:03:59.573288 kubelet[2785]: I0904 00:03:59.573257 2785 policy_none.go:49] "None policy: Start" Sep 4 00:03:59.576728 kubelet[2785]: I0904 00:03:59.576672 2785 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 00:03:59.577056 kubelet[2785]: I0904 00:03:59.577033 2785 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:03:59.577465 kubelet[2785]: I0904 00:03:59.577228 2785 state_mem.go:75] "Updated machine memory state" Sep 4 00:03:59.589525 kubelet[2785]: E0904 00:03:59.589279 2785 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 00:03:59.592333 kubelet[2785]: I0904 00:03:59.592004 2785 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:03:59.592333 kubelet[2785]: I0904 00:03:59.592326 2785 
eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:03:59.592570 kubelet[2785]: I0904 00:03:59.592340 2785 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:03:59.593176 kubelet[2785]: I0904 00:03:59.593141 2785 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:03:59.704517 kubelet[2785]: I0904 00:03:59.704131 2785 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 00:03:59.728964 kubelet[2785]: I0904 00:03:59.728913 2785 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 4 00:03:59.729121 kubelet[2785]: I0904 00:03:59.729022 2785 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 4 00:03:59.802550 kubelet[2785]: E0904 00:03:59.802463 2785 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 00:03:59.803905 kubelet[2785]: E0904 00:03:59.803775 2785 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 00:03:59.804077 kubelet[2785]: E0904 00:03:59.804053 2785 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 00:03:59.868075 kubelet[2785]: I0904 00:03:59.867901 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:03:59.868075 kubelet[2785]: I0904 00:03:59.867956 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:03:59.868075 kubelet[2785]: I0904 00:03:59.867984 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:03:59.868075 kubelet[2785]: I0904 00:03:59.868013 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 4 00:03:59.868075 kubelet[2785]: I0904 00:03:59.868036 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e9f26049cb2ccb192b1525c6c241433-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e9f26049cb2ccb192b1525c6c241433\") " pod="kube-system/kube-apiserver-localhost" Sep 4 00:03:59.868837 kubelet[2785]: I0904 00:03:59.868055 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e9f26049cb2ccb192b1525c6c241433-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e9f26049cb2ccb192b1525c6c241433\") " pod="kube-system/kube-apiserver-localhost" Sep 4 00:03:59.868837 kubelet[2785]: I0904 00:03:59.868074 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:03:59.868837 kubelet[2785]: I0904 00:03:59.868093 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e9f26049cb2ccb192b1525c6c241433-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e9f26049cb2ccb192b1525c6c241433\") " pod="kube-system/kube-apiserver-localhost" Sep 4 00:03:59.868837 kubelet[2785]: I0904 00:03:59.868118 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 00:04:00.103847 kubelet[2785]: E0904 00:04:00.103722 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:00.105048 kubelet[2785]: E0904 00:04:00.104908 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:00.105437 kubelet[2785]: E0904 00:04:00.105317 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:00.447250 kubelet[2785]: I0904 00:04:00.447194 2785 apiserver.go:52] "Watching apiserver" Sep 4 00:04:00.467042 kubelet[2785]: I0904 00:04:00.466974 2785 desired_state_of_world_populator.go:155] "Finished populating initial desired state of 
world" Sep 4 00:04:00.514886 kubelet[2785]: E0904 00:04:00.514837 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:00.515234 kubelet[2785]: E0904 00:04:00.515077 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:00.525884 kubelet[2785]: E0904 00:04:00.525834 2785 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 00:04:00.526144 kubelet[2785]: E0904 00:04:00.525970 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:00.640599 kubelet[2785]: I0904 00:04:00.640324 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.6402932249999997 podStartE2EDuration="2.640293225s" podCreationTimestamp="2025-09-04 00:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:00.635573575 +0000 UTC m=+1.301948787" watchObservedRunningTime="2025-09-04 00:04:00.640293225 +0000 UTC m=+1.306668447" Sep 4 00:04:00.707098 kubelet[2785]: I0904 00:04:00.706037 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.706007108 podStartE2EDuration="3.706007108s" podCreationTimestamp="2025-09-04 00:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:00.673328242 +0000 UTC m=+1.339703453" 
watchObservedRunningTime="2025-09-04 00:04:00.706007108 +0000 UTC m=+1.372382320" Sep 4 00:04:00.707098 kubelet[2785]: I0904 00:04:00.706257 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.706248989 podStartE2EDuration="3.706248989s" podCreationTimestamp="2025-09-04 00:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:00.70119865 +0000 UTC m=+1.367573862" watchObservedRunningTime="2025-09-04 00:04:00.706248989 +0000 UTC m=+1.372624201" Sep 4 00:04:01.521049 kubelet[2785]: E0904 00:04:01.520415 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:01.521049 kubelet[2785]: E0904 00:04:01.520777 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:03.983576 kubelet[2785]: I0904 00:04:03.983540 2785 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 00:04:03.984138 containerd[1597]: time="2025-09-04T00:04:03.983903857Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 00:04:03.984373 kubelet[2785]: I0904 00:04:03.984225 2785 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 00:04:04.942500 systemd[1]: Created slice kubepods-besteffort-poda4fcce6e_df2c_4bb2_8634_823981eb1538.slice - libcontainer container kubepods-besteffort-poda4fcce6e_df2c_4bb2_8634_823981eb1538.slice. 
Sep 4 00:04:05.038474 kubelet[2785]: I0904 00:04:05.038396 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4fcce6e-df2c-4bb2-8634-823981eb1538-kube-proxy\") pod \"kube-proxy-kpc9q\" (UID: \"a4fcce6e-df2c-4bb2-8634-823981eb1538\") " pod="kube-system/kube-proxy-kpc9q" Sep 4 00:04:05.038474 kubelet[2785]: I0904 00:04:05.038471 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86ww2\" (UniqueName: \"kubernetes.io/projected/a4fcce6e-df2c-4bb2-8634-823981eb1538-kube-api-access-86ww2\") pod \"kube-proxy-kpc9q\" (UID: \"a4fcce6e-df2c-4bb2-8634-823981eb1538\") " pod="kube-system/kube-proxy-kpc9q" Sep 4 00:04:05.038474 kubelet[2785]: I0904 00:04:05.038498 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4fcce6e-df2c-4bb2-8634-823981eb1538-lib-modules\") pod \"kube-proxy-kpc9q\" (UID: \"a4fcce6e-df2c-4bb2-8634-823981eb1538\") " pod="kube-system/kube-proxy-kpc9q" Sep 4 00:04:05.039360 kubelet[2785]: I0904 00:04:05.038524 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4fcce6e-df2c-4bb2-8634-823981eb1538-xtables-lock\") pod \"kube-proxy-kpc9q\" (UID: \"a4fcce6e-df2c-4bb2-8634-823981eb1538\") " pod="kube-system/kube-proxy-kpc9q" Sep 4 00:04:05.083538 systemd[1]: Created slice kubepods-besteffort-podc5235960_fab5_41b0_b9fc_9a52c981a3cb.slice - libcontainer container kubepods-besteffort-podc5235960_fab5_41b0_b9fc_9a52c981a3cb.slice. 
Sep 4 00:04:05.139860 kubelet[2785]: I0904 00:04:05.139710 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c5235960-fab5-41b0-b9fc-9a52c981a3cb-var-lib-calico\") pod \"tigera-operator-58fc44c59b-686lp\" (UID: \"c5235960-fab5-41b0-b9fc-9a52c981a3cb\") " pod="tigera-operator/tigera-operator-58fc44c59b-686lp" Sep 4 00:04:05.140470 kubelet[2785]: I0904 00:04:05.139978 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvpff\" (UniqueName: \"kubernetes.io/projected/c5235960-fab5-41b0-b9fc-9a52c981a3cb-kube-api-access-lvpff\") pod \"tigera-operator-58fc44c59b-686lp\" (UID: \"c5235960-fab5-41b0-b9fc-9a52c981a3cb\") " pod="tigera-operator/tigera-operator-58fc44c59b-686lp" Sep 4 00:04:05.254018 kubelet[2785]: E0904 00:04:05.253970 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:05.254675 containerd[1597]: time="2025-09-04T00:04:05.254628691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpc9q,Uid:a4fcce6e-df2c-4bb2-8634-823981eb1538,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:05.274608 containerd[1597]: time="2025-09-04T00:04:05.274562745Z" level=info msg="connecting to shim 994d6b350dea400be478c9b2bf1710b45a7fb5cbd978131f2032d8693d765e26" address="unix:///run/containerd/s/80a3c63ca7c8c4ba8d37e21a726d4d1cd32b02dc3424fec2d9af3234b925a744" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:05.306025 systemd[1]: Started cri-containerd-994d6b350dea400be478c9b2bf1710b45a7fb5cbd978131f2032d8693d765e26.scope - libcontainer container 994d6b350dea400be478c9b2bf1710b45a7fb5cbd978131f2032d8693d765e26. 
Sep 4 00:04:05.335065 containerd[1597]: time="2025-09-04T00:04:05.335020289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpc9q,Uid:a4fcce6e-df2c-4bb2-8634-823981eb1538,Namespace:kube-system,Attempt:0,} returns sandbox id \"994d6b350dea400be478c9b2bf1710b45a7fb5cbd978131f2032d8693d765e26\"" Sep 4 00:04:05.335601 kubelet[2785]: E0904 00:04:05.335574 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:05.337717 containerd[1597]: time="2025-09-04T00:04:05.337686789Z" level=info msg="CreateContainer within sandbox \"994d6b350dea400be478c9b2bf1710b45a7fb5cbd978131f2032d8693d765e26\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 00:04:05.349511 containerd[1597]: time="2025-09-04T00:04:05.349449923Z" level=info msg="Container 616acddfe71f00e5c312dec1feb878a32d885be81d9b63ed5710248b0a262ffd: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:05.358719 containerd[1597]: time="2025-09-04T00:04:05.358667533Z" level=info msg="CreateContainer within sandbox \"994d6b350dea400be478c9b2bf1710b45a7fb5cbd978131f2032d8693d765e26\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"616acddfe71f00e5c312dec1feb878a32d885be81d9b63ed5710248b0a262ffd\"" Sep 4 00:04:05.359409 containerd[1597]: time="2025-09-04T00:04:05.359369166Z" level=info msg="StartContainer for \"616acddfe71f00e5c312dec1feb878a32d885be81d9b63ed5710248b0a262ffd\"" Sep 4 00:04:05.360892 containerd[1597]: time="2025-09-04T00:04:05.360854800Z" level=info msg="connecting to shim 616acddfe71f00e5c312dec1feb878a32d885be81d9b63ed5710248b0a262ffd" address="unix:///run/containerd/s/80a3c63ca7c8c4ba8d37e21a726d4d1cd32b02dc3424fec2d9af3234b925a744" protocol=ttrpc version=3 Sep 4 00:04:05.388512 containerd[1597]: time="2025-09-04T00:04:05.388462160Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-58fc44c59b-686lp,Uid:c5235960-fab5-41b0-b9fc-9a52c981a3cb,Namespace:tigera-operator,Attempt:0,}" Sep 4 00:04:05.402838 systemd[1]: Started cri-containerd-616acddfe71f00e5c312dec1feb878a32d885be81d9b63ed5710248b0a262ffd.scope - libcontainer container 616acddfe71f00e5c312dec1feb878a32d885be81d9b63ed5710248b0a262ffd. Sep 4 00:04:05.414050 containerd[1597]: time="2025-09-04T00:04:05.413991051Z" level=info msg="connecting to shim b9288d60ba4f63f29cb9e054d243eabcb181ce9e632b6667812f85bb8749f7b8" address="unix:///run/containerd/s/4626651f17eb8a7ea8da299699f92ac09f06d23da353e6067540c5738f7e5825" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:05.446971 systemd[1]: Started cri-containerd-b9288d60ba4f63f29cb9e054d243eabcb181ce9e632b6667812f85bb8749f7b8.scope - libcontainer container b9288d60ba4f63f29cb9e054d243eabcb181ce9e632b6667812f85bb8749f7b8. Sep 4 00:04:05.453484 containerd[1597]: time="2025-09-04T00:04:05.453454491Z" level=info msg="StartContainer for \"616acddfe71f00e5c312dec1feb878a32d885be81d9b63ed5710248b0a262ffd\" returns successfully" Sep 4 00:04:05.499021 containerd[1597]: time="2025-09-04T00:04:05.498968554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-686lp,Uid:c5235960-fab5-41b0-b9fc-9a52c981a3cb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b9288d60ba4f63f29cb9e054d243eabcb181ce9e632b6667812f85bb8749f7b8\"" Sep 4 00:04:05.501266 containerd[1597]: time="2025-09-04T00:04:05.500972899Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 4 00:04:05.535295 kubelet[2785]: E0904 00:04:05.535188 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:05.547259 kubelet[2785]: I0904 00:04:05.547197 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kpc9q" 
podStartSLOduration=1.5471718330000002 podStartE2EDuration="1.547171833s" podCreationTimestamp="2025-09-04 00:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:05.547067418 +0000 UTC m=+6.213442800" watchObservedRunningTime="2025-09-04 00:04:05.547171833 +0000 UTC m=+6.213547045" Sep 4 00:04:05.912009 kubelet[2785]: E0904 00:04:05.911868 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:06.164659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777374439.mount: Deactivated successfully. Sep 4 00:04:06.540027 kubelet[2785]: E0904 00:04:06.539988 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:07.487084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199383346.mount: Deactivated successfully. 
Sep 4 00:04:07.852764 containerd[1597]: time="2025-09-04T00:04:07.852619936Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:07.853635 containerd[1597]: time="2025-09-04T00:04:07.853559344Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 4 00:04:07.854562 containerd[1597]: time="2025-09-04T00:04:07.854521621Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:07.856742 containerd[1597]: time="2025-09-04T00:04:07.856713210Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:07.857524 containerd[1597]: time="2025-09-04T00:04:07.857492688Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.356489025s" Sep 4 00:04:07.857575 containerd[1597]: time="2025-09-04T00:04:07.857532083Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 4 00:04:07.859468 containerd[1597]: time="2025-09-04T00:04:07.859437092Z" level=info msg="CreateContainer within sandbox \"b9288d60ba4f63f29cb9e054d243eabcb181ce9e632b6667812f85bb8749f7b8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 00:04:07.868618 containerd[1597]: time="2025-09-04T00:04:07.868578632Z" level=info msg="Container 
aae442c7132c50712e4b82e4fb033577bd6f6ca007e75ea1dbf58def587d3b36: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:07.877089 containerd[1597]: time="2025-09-04T00:04:07.877033534Z" level=info msg="CreateContainer within sandbox \"b9288d60ba4f63f29cb9e054d243eabcb181ce9e632b6667812f85bb8749f7b8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"aae442c7132c50712e4b82e4fb033577bd6f6ca007e75ea1dbf58def587d3b36\"" Sep 4 00:04:07.877680 containerd[1597]: time="2025-09-04T00:04:07.877645988Z" level=info msg="StartContainer for \"aae442c7132c50712e4b82e4fb033577bd6f6ca007e75ea1dbf58def587d3b36\"" Sep 4 00:04:07.878596 containerd[1597]: time="2025-09-04T00:04:07.878556180Z" level=info msg="connecting to shim aae442c7132c50712e4b82e4fb033577bd6f6ca007e75ea1dbf58def587d3b36" address="unix:///run/containerd/s/4626651f17eb8a7ea8da299699f92ac09f06d23da353e6067540c5738f7e5825" protocol=ttrpc version=3 Sep 4 00:04:07.889693 kubelet[2785]: E0904 00:04:07.889340 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:07.941008 systemd[1]: Started cri-containerd-aae442c7132c50712e4b82e4fb033577bd6f6ca007e75ea1dbf58def587d3b36.scope - libcontainer container aae442c7132c50712e4b82e4fb033577bd6f6ca007e75ea1dbf58def587d3b36. 
Sep 4 00:04:07.962991 kubelet[2785]: E0904 00:04:07.962460 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:08.028499 containerd[1597]: time="2025-09-04T00:04:08.028443344Z" level=info msg="StartContainer for \"aae442c7132c50712e4b82e4fb033577bd6f6ca007e75ea1dbf58def587d3b36\" returns successfully" Sep 4 00:04:08.544472 kubelet[2785]: E0904 00:04:08.544417 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:08.544472 kubelet[2785]: E0904 00:04:08.544417 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:09.547340 kubelet[2785]: E0904 00:04:09.547298 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:13.478091 sudo[1816]: pam_unix(sudo:session): session closed for user root Sep 4 00:04:13.484815 sshd[1815]: Connection closed by 10.0.0.1 port 50366 Sep 4 00:04:13.487285 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:13.503341 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit. Sep 4 00:04:13.505818 systemd[1]: sshd@8-10.0.0.100:22-10.0.0.1:50366.service: Deactivated successfully. Sep 4 00:04:13.513345 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 00:04:13.513684 systemd[1]: session-9.scope: Consumed 5.992s CPU time, 226.5M memory peak. Sep 4 00:04:13.518540 systemd-logind[1575]: Removed session 9. 
Sep 4 00:04:17.305933 kubelet[2785]: I0904 00:04:17.304922 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-686lp" podStartSLOduration=9.94663186 podStartE2EDuration="12.304863159s" podCreationTimestamp="2025-09-04 00:04:05 +0000 UTC" firstStartedPulling="2025-09-04 00:04:05.500124341 +0000 UTC m=+6.166499553" lastFinishedPulling="2025-09-04 00:04:07.85835565 +0000 UTC m=+8.524730852" observedRunningTime="2025-09-04 00:04:08.56348663 +0000 UTC m=+9.229861852" watchObservedRunningTime="2025-09-04 00:04:17.304863159 +0000 UTC m=+17.971238372" Sep 4 00:04:17.338524 systemd[1]: Created slice kubepods-besteffort-pod3182b640_26e2_4e94_8ff4_0cfd31b04109.slice - libcontainer container kubepods-besteffort-pod3182b640_26e2_4e94_8ff4_0cfd31b04109.slice. Sep 4 00:04:17.435217 kubelet[2785]: I0904 00:04:17.435013 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3182b640-26e2-4e94-8ff4-0cfd31b04109-tigera-ca-bundle\") pod \"calico-typha-6f98578b74-z2pkb\" (UID: \"3182b640-26e2-4e94-8ff4-0cfd31b04109\") " pod="calico-system/calico-typha-6f98578b74-z2pkb" Sep 4 00:04:17.435217 kubelet[2785]: I0904 00:04:17.435075 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfbcg\" (UniqueName: \"kubernetes.io/projected/3182b640-26e2-4e94-8ff4-0cfd31b04109-kube-api-access-tfbcg\") pod \"calico-typha-6f98578b74-z2pkb\" (UID: \"3182b640-26e2-4e94-8ff4-0cfd31b04109\") " pod="calico-system/calico-typha-6f98578b74-z2pkb" Sep 4 00:04:17.435217 kubelet[2785]: I0904 00:04:17.435101 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3182b640-26e2-4e94-8ff4-0cfd31b04109-typha-certs\") pod \"calico-typha-6f98578b74-z2pkb\" (UID: 
\"3182b640-26e2-4e94-8ff4-0cfd31b04109\") " pod="calico-system/calico-typha-6f98578b74-z2pkb" Sep 4 00:04:17.520072 systemd[1]: Created slice kubepods-besteffort-pod5d348b9c_203d_439e_9a64_a0eb331db75e.slice - libcontainer container kubepods-besteffort-pod5d348b9c_203d_439e_9a64_a0eb331db75e.slice. Sep 4 00:04:17.636833 kubelet[2785]: I0904 00:04:17.636418 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5d348b9c-203d-439e-9a64-a0eb331db75e-cni-net-dir\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.636833 kubelet[2785]: I0904 00:04:17.636467 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d348b9c-203d-439e-9a64-a0eb331db75e-tigera-ca-bundle\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.636833 kubelet[2785]: I0904 00:04:17.636484 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhsm7\" (UniqueName: \"kubernetes.io/projected/5d348b9c-203d-439e-9a64-a0eb331db75e-kube-api-access-xhsm7\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.636833 kubelet[2785]: I0904 00:04:17.636502 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5d348b9c-203d-439e-9a64-a0eb331db75e-flexvol-driver-host\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.636833 kubelet[2785]: I0904 00:04:17.636519 2785 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d348b9c-203d-439e-9a64-a0eb331db75e-lib-modules\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.637169 kubelet[2785]: I0904 00:04:17.636535 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5d348b9c-203d-439e-9a64-a0eb331db75e-cni-log-dir\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.637169 kubelet[2785]: I0904 00:04:17.636552 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d348b9c-203d-439e-9a64-a0eb331db75e-xtables-lock\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.637169 kubelet[2785]: I0904 00:04:17.636575 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5d348b9c-203d-439e-9a64-a0eb331db75e-var-lib-calico\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.637169 kubelet[2785]: I0904 00:04:17.636594 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5d348b9c-203d-439e-9a64-a0eb331db75e-var-run-calico\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.637169 kubelet[2785]: I0904 00:04:17.636613 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/5d348b9c-203d-439e-9a64-a0eb331db75e-policysync\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.637373 kubelet[2785]: I0904 00:04:17.636633 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5d348b9c-203d-439e-9a64-a0eb331db75e-cni-bin-dir\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.637373 kubelet[2785]: I0904 00:04:17.636648 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5d348b9c-203d-439e-9a64-a0eb331db75e-node-certs\") pod \"calico-node-kchnh\" (UID: \"5d348b9c-203d-439e-9a64-a0eb331db75e\") " pod="calico-system/calico-node-kchnh" Sep 4 00:04:17.651618 kubelet[2785]: E0904 00:04:17.651244 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:17.652750 containerd[1597]: time="2025-09-04T00:04:17.652672053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f98578b74-z2pkb,Uid:3182b640-26e2-4e94-8ff4-0cfd31b04109,Namespace:calico-system,Attempt:0,}" Sep 4 00:04:17.709189 kubelet[2785]: E0904 00:04:17.709108 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78n75" podUID="c0c8d5a4-31f5-454a-9128-02468bd71436" Sep 4 00:04:17.741058 kubelet[2785]: E0904 00:04:17.741004 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Sep 4 00:04:17.741299 kubelet[2785]: W0904 00:04:17.741029 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.741299 kubelet[2785]: E0904 00:04:17.741162 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.750109 kubelet[2785]: E0904 00:04:17.750000 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.750109 kubelet[2785]: W0904 00:04:17.750022 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.750109 kubelet[2785]: E0904 00:04:17.750045 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.756048 kubelet[2785]: E0904 00:04:17.755991 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.756048 kubelet[2785]: W0904 00:04:17.756008 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.756048 kubelet[2785]: E0904 00:04:17.756023 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.759839 containerd[1597]: time="2025-09-04T00:04:17.759039908Z" level=info msg="connecting to shim ca77888fc190137af9a8efa2250f0ed767d4277aaba76ef276ae90c84d6094bf" address="unix:///run/containerd/s/647284aa74a86f0adc7cfefc77a4888282234212d2da4c3e138208c0704a6b6e" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:17.788115 systemd[1]: Started cri-containerd-ca77888fc190137af9a8efa2250f0ed767d4277aaba76ef276ae90c84d6094bf.scope - libcontainer container ca77888fc190137af9a8efa2250f0ed767d4277aaba76ef276ae90c84d6094bf. Sep 4 00:04:17.802117 kubelet[2785]: E0904 00:04:17.801970 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.802117 kubelet[2785]: W0904 00:04:17.802031 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.802117 kubelet[2785]: E0904 00:04:17.802082 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.803123 kubelet[2785]: E0904 00:04:17.802997 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.803123 kubelet[2785]: W0904 00:04:17.803022 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.803123 kubelet[2785]: E0904 00:04:17.803051 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.803498 kubelet[2785]: E0904 00:04:17.803435 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.803498 kubelet[2785]: W0904 00:04:17.803446 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.803498 kubelet[2785]: E0904 00:04:17.803456 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.803824 kubelet[2785]: E0904 00:04:17.803723 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.803824 kubelet[2785]: W0904 00:04:17.803733 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.803824 kubelet[2785]: E0904 00:04:17.803743 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.804081 kubelet[2785]: E0904 00:04:17.804070 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.804162 kubelet[2785]: W0904 00:04:17.804151 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.804243 kubelet[2785]: E0904 00:04:17.804206 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.804456 kubelet[2785]: E0904 00:04:17.804405 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.804456 kubelet[2785]: W0904 00:04:17.804415 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.804456 kubelet[2785]: E0904 00:04:17.804424 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.804742 kubelet[2785]: E0904 00:04:17.804686 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.804742 kubelet[2785]: W0904 00:04:17.804697 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.804742 kubelet[2785]: E0904 00:04:17.804705 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.805072 kubelet[2785]: E0904 00:04:17.805011 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.805072 kubelet[2785]: W0904 00:04:17.805023 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.805072 kubelet[2785]: E0904 00:04:17.805032 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.805352 kubelet[2785]: E0904 00:04:17.805288 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.805352 kubelet[2785]: W0904 00:04:17.805308 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.805352 kubelet[2785]: E0904 00:04:17.805318 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.805614 kubelet[2785]: E0904 00:04:17.805563 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.805614 kubelet[2785]: W0904 00:04:17.805573 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.805614 kubelet[2785]: E0904 00:04:17.805582 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.805884 kubelet[2785]: E0904 00:04:17.805872 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.806006 kubelet[2785]: W0904 00:04:17.805944 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.806006 kubelet[2785]: E0904 00:04:17.805966 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.806256 kubelet[2785]: E0904 00:04:17.806196 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.806256 kubelet[2785]: W0904 00:04:17.806207 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.806256 kubelet[2785]: E0904 00:04:17.806216 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.806543 kubelet[2785]: E0904 00:04:17.806488 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.806543 kubelet[2785]: W0904 00:04:17.806499 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.806543 kubelet[2785]: E0904 00:04:17.806508 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.806812 kubelet[2785]: E0904 00:04:17.806748 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.806812 kubelet[2785]: W0904 00:04:17.806760 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.806812 kubelet[2785]: E0904 00:04:17.806769 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.807149 kubelet[2785]: E0904 00:04:17.807091 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.807149 kubelet[2785]: W0904 00:04:17.807102 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.807149 kubelet[2785]: E0904 00:04:17.807112 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.807419 kubelet[2785]: E0904 00:04:17.807357 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.807419 kubelet[2785]: W0904 00:04:17.807367 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.807419 kubelet[2785]: E0904 00:04:17.807376 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.807692 kubelet[2785]: E0904 00:04:17.807640 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.807692 kubelet[2785]: W0904 00:04:17.807650 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.807692 kubelet[2785]: E0904 00:04:17.807659 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.807958 kubelet[2785]: E0904 00:04:17.807944 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.808054 kubelet[2785]: W0904 00:04:17.808008 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.808054 kubelet[2785]: E0904 00:04:17.808021 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.808360 kubelet[2785]: E0904 00:04:17.808348 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.808494 kubelet[2785]: W0904 00:04:17.808417 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.808494 kubelet[2785]: E0904 00:04:17.808430 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.808734 kubelet[2785]: E0904 00:04:17.808651 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.808734 kubelet[2785]: W0904 00:04:17.808661 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.808734 kubelet[2785]: E0904 00:04:17.808670 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.826187 containerd[1597]: time="2025-09-04T00:04:17.826153920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kchnh,Uid:5d348b9c-203d-439e-9a64-a0eb331db75e,Namespace:calico-system,Attempt:0,}" Sep 4 00:04:17.839318 kubelet[2785]: E0904 00:04:17.839293 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.839715 kubelet[2785]: W0904 00:04:17.839528 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.839715 kubelet[2785]: E0904 00:04:17.839558 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.839715 kubelet[2785]: I0904 00:04:17.839601 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkkt\" (UniqueName: \"kubernetes.io/projected/c0c8d5a4-31f5-454a-9128-02468bd71436-kube-api-access-pdkkt\") pod \"csi-node-driver-78n75\" (UID: \"c0c8d5a4-31f5-454a-9128-02468bd71436\") " pod="calico-system/csi-node-driver-78n75" Sep 4 00:04:17.841286 kubelet[2785]: E0904 00:04:17.841135 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.841286 kubelet[2785]: W0904 00:04:17.841152 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.841286 kubelet[2785]: E0904 00:04:17.841165 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.841286 kubelet[2785]: I0904 00:04:17.841182 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0c8d5a4-31f5-454a-9128-02468bd71436-kubelet-dir\") pod \"csi-node-driver-78n75\" (UID: \"c0c8d5a4-31f5-454a-9128-02468bd71436\") " pod="calico-system/csi-node-driver-78n75" Sep 4 00:04:17.841987 kubelet[2785]: E0904 00:04:17.841973 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.842062 kubelet[2785]: W0904 00:04:17.842050 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.842156 kubelet[2785]: E0904 00:04:17.842145 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.842224 kubelet[2785]: I0904 00:04:17.842211 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c0c8d5a4-31f5-454a-9128-02468bd71436-registration-dir\") pod \"csi-node-driver-78n75\" (UID: \"c0c8d5a4-31f5-454a-9128-02468bd71436\") " pod="calico-system/csi-node-driver-78n75" Sep 4 00:04:17.842466 kubelet[2785]: E0904 00:04:17.842441 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.842466 kubelet[2785]: W0904 00:04:17.842453 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.842598 kubelet[2785]: E0904 00:04:17.842575 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.842844 kubelet[2785]: E0904 00:04:17.842801 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.842844 kubelet[2785]: W0904 00:04:17.842829 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.842996 kubelet[2785]: E0904 00:04:17.842985 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.843212 kubelet[2785]: E0904 00:04:17.843189 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.843212 kubelet[2785]: W0904 00:04:17.843199 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.843294 kubelet[2785]: E0904 00:04:17.843282 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.843448 kubelet[2785]: I0904 00:04:17.843407 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c0c8d5a4-31f5-454a-9128-02468bd71436-varrun\") pod \"csi-node-driver-78n75\" (UID: \"c0c8d5a4-31f5-454a-9128-02468bd71436\") " pod="calico-system/csi-node-driver-78n75" Sep 4 00:04:17.843656 kubelet[2785]: E0904 00:04:17.843645 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.843709 kubelet[2785]: W0904 00:04:17.843698 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.843810 kubelet[2785]: E0904 00:04:17.843761 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.844014 kubelet[2785]: E0904 00:04:17.844004 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.844071 kubelet[2785]: W0904 00:04:17.844061 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.844129 kubelet[2785]: E0904 00:04:17.844119 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.844363 kubelet[2785]: E0904 00:04:17.844340 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.844363 kubelet[2785]: W0904 00:04:17.844350 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.844454 kubelet[2785]: E0904 00:04:17.844442 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.844692 kubelet[2785]: E0904 00:04:17.844659 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.844692 kubelet[2785]: W0904 00:04:17.844670 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.844692 kubelet[2785]: E0904 00:04:17.844680 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.845055 kubelet[2785]: E0904 00:04:17.845029 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.845055 kubelet[2785]: W0904 00:04:17.845041 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.845204 kubelet[2785]: E0904 00:04:17.845150 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.845204 kubelet[2785]: I0904 00:04:17.845174 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c0c8d5a4-31f5-454a-9128-02468bd71436-socket-dir\") pod \"csi-node-driver-78n75\" (UID: \"c0c8d5a4-31f5-454a-9128-02468bd71436\") " pod="calico-system/csi-node-driver-78n75" Sep 4 00:04:17.845496 kubelet[2785]: E0904 00:04:17.845455 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.845496 kubelet[2785]: W0904 00:04:17.845468 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.845496 kubelet[2785]: E0904 00:04:17.845480 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.845865 kubelet[2785]: E0904 00:04:17.845830 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.845865 kubelet[2785]: W0904 00:04:17.845842 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.845865 kubelet[2785]: E0904 00:04:17.845852 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.846449 kubelet[2785]: E0904 00:04:17.846397 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.846449 kubelet[2785]: W0904 00:04:17.846407 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.846449 kubelet[2785]: E0904 00:04:17.846417 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.846888 kubelet[2785]: E0904 00:04:17.846849 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.846888 kubelet[2785]: W0904 00:04:17.846860 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.846888 kubelet[2785]: E0904 00:04:17.846871 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.871704 containerd[1597]: time="2025-09-04T00:04:17.871623142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f98578b74-z2pkb,Uid:3182b640-26e2-4e94-8ff4-0cfd31b04109,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca77888fc190137af9a8efa2250f0ed767d4277aaba76ef276ae90c84d6094bf\"" Sep 4 00:04:17.873321 kubelet[2785]: E0904 00:04:17.873282 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:17.876929 containerd[1597]: time="2025-09-04T00:04:17.876889141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 4 00:04:17.894488 containerd[1597]: time="2025-09-04T00:04:17.893997032Z" level=info msg="connecting to shim c9300f688a7e4e46eda809013452a933fb50805679cb86479a2142908fe9e697" address="unix:///run/containerd/s/e99ae24cddf1d0f466c0a32c61ba59b9ea77dd9f448bb0ec60fa1477c61d830c" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:17.925044 systemd[1]: Started cri-containerd-c9300f688a7e4e46eda809013452a933fb50805679cb86479a2142908fe9e697.scope - libcontainer container c9300f688a7e4e46eda809013452a933fb50805679cb86479a2142908fe9e697. Sep 4 00:04:17.948121 kubelet[2785]: E0904 00:04:17.947417 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.948121 kubelet[2785]: W0904 00:04:17.947451 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.948121 kubelet[2785]: E0904 00:04:17.947477 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.948121 kubelet[2785]: E0904 00:04:17.947762 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.948121 kubelet[2785]: W0904 00:04:17.947773 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.948121 kubelet[2785]: E0904 00:04:17.947812 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.948121 kubelet[2785]: E0904 00:04:17.948044 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.948121 kubelet[2785]: W0904 00:04:17.948052 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.948121 kubelet[2785]: E0904 00:04:17.948061 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.948501 kubelet[2785]: E0904 00:04:17.948263 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.948501 kubelet[2785]: W0904 00:04:17.948275 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.948501 kubelet[2785]: E0904 00:04:17.948283 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.948573 kubelet[2785]: E0904 00:04:17.948507 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.948573 kubelet[2785]: W0904 00:04:17.948515 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.948573 kubelet[2785]: E0904 00:04:17.948523 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.948935 kubelet[2785]: E0904 00:04:17.948746 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.948935 kubelet[2785]: W0904 00:04:17.948760 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.948935 kubelet[2785]: E0904 00:04:17.948768 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.949334 kubelet[2785]: E0904 00:04:17.949039 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.949334 kubelet[2785]: W0904 00:04:17.949048 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.949334 kubelet[2785]: E0904 00:04:17.949056 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.949532 kubelet[2785]: E0904 00:04:17.949355 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.949532 kubelet[2785]: W0904 00:04:17.949363 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.949532 kubelet[2785]: E0904 00:04:17.949371 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.951322 kubelet[2785]: E0904 00:04:17.951174 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.951322 kubelet[2785]: W0904 00:04:17.951219 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.951322 kubelet[2785]: E0904 00:04:17.951255 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.951632 kubelet[2785]: E0904 00:04:17.951610 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.951844 kubelet[2785]: W0904 00:04:17.951816 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.951844 kubelet[2785]: E0904 00:04:17.951854 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.952288 kubelet[2785]: E0904 00:04:17.952244 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.952288 kubelet[2785]: W0904 00:04:17.952264 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.952288 kubelet[2785]: E0904 00:04:17.952275 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.952616 kubelet[2785]: E0904 00:04:17.952591 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.952616 kubelet[2785]: W0904 00:04:17.952612 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.952736 kubelet[2785]: E0904 00:04:17.952626 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.953536 kubelet[2785]: E0904 00:04:17.953515 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.953536 kubelet[2785]: W0904 00:04:17.953532 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.953638 kubelet[2785]: E0904 00:04:17.953543 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.953757 kubelet[2785]: E0904 00:04:17.953726 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.954384 kubelet[2785]: W0904 00:04:17.954357 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.954431 kubelet[2785]: E0904 00:04:17.954397 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.954900 kubelet[2785]: E0904 00:04:17.954864 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.955066 kubelet[2785]: W0904 00:04:17.954999 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.955556 kubelet[2785]: E0904 00:04:17.955385 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.955556 kubelet[2785]: W0904 00:04:17.955399 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.955824 kubelet[2785]: E0904 00:04:17.955740 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.955824 kubelet[2785]: E0904 00:04:17.955771 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.958074 kubelet[2785]: E0904 00:04:17.955745 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.958074 kubelet[2785]: W0904 00:04:17.957910 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.958074 kubelet[2785]: E0904 00:04:17.957927 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.958517 kubelet[2785]: E0904 00:04:17.958365 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.958517 kubelet[2785]: W0904 00:04:17.958378 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.958517 kubelet[2785]: E0904 00:04:17.958409 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.958942 kubelet[2785]: E0904 00:04:17.958682 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.958942 kubelet[2785]: W0904 00:04:17.958694 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.958942 kubelet[2785]: E0904 00:04:17.958704 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.959266 kubelet[2785]: E0904 00:04:17.959109 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.959266 kubelet[2785]: W0904 00:04:17.959120 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.959266 kubelet[2785]: E0904 00:04:17.959154 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.959543 kubelet[2785]: E0904 00:04:17.959419 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.959543 kubelet[2785]: W0904 00:04:17.959430 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.959543 kubelet[2785]: E0904 00:04:17.959465 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.961146 kubelet[2785]: E0904 00:04:17.961102 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.961230 kubelet[2785]: W0904 00:04:17.961144 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.961230 kubelet[2785]: E0904 00:04:17.961182 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.961740 kubelet[2785]: E0904 00:04:17.961716 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.961740 kubelet[2785]: W0904 00:04:17.961737 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.961858 kubelet[2785]: E0904 00:04:17.961752 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.962577 kubelet[2785]: E0904 00:04:17.962158 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.962577 kubelet[2785]: W0904 00:04:17.962174 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.962577 kubelet[2785]: E0904 00:04:17.962195 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:17.963485 kubelet[2785]: E0904 00:04:17.962749 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.963485 kubelet[2785]: W0904 00:04:17.962760 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.963485 kubelet[2785]: E0904 00:04:17.962822 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:17.975095 kubelet[2785]: E0904 00:04:17.975049 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:17.975095 kubelet[2785]: W0904 00:04:17.975097 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:17.975278 kubelet[2785]: E0904 00:04:17.975122 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:18.018870 containerd[1597]: time="2025-09-04T00:04:18.018815395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kchnh,Uid:5d348b9c-203d-439e-9a64-a0eb331db75e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9300f688a7e4e46eda809013452a933fb50805679cb86479a2142908fe9e697\"" Sep 4 00:04:19.280563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489320646.mount: Deactivated successfully. 
Sep 4 00:04:19.490830 kubelet[2785]: E0904 00:04:19.490722 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78n75" podUID="c0c8d5a4-31f5-454a-9128-02468bd71436" Sep 4 00:04:19.717616 containerd[1597]: time="2025-09-04T00:04:19.717558231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:19.718368 containerd[1597]: time="2025-09-04T00:04:19.718283724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 4 00:04:19.719716 containerd[1597]: time="2025-09-04T00:04:19.719692996Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:19.721954 containerd[1597]: time="2025-09-04T00:04:19.721911293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:19.722690 containerd[1597]: time="2025-09-04T00:04:19.722650779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.845705675s" Sep 4 00:04:19.722690 containerd[1597]: time="2025-09-04T00:04:19.722686762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 4 00:04:19.726151 containerd[1597]: time="2025-09-04T00:04:19.724310896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 4 00:04:19.731410 containerd[1597]: time="2025-09-04T00:04:19.731361886Z" level=info msg="CreateContainer within sandbox \"ca77888fc190137af9a8efa2250f0ed767d4277aaba76ef276ae90c84d6094bf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 00:04:19.742449 containerd[1597]: time="2025-09-04T00:04:19.742400690Z" level=info msg="Container 73b4c8c5e3ee8f3dec510d842dbee0006e007f45128d25ecc134054ab2ec9dc7: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:19.752922 containerd[1597]: time="2025-09-04T00:04:19.752854713Z" level=info msg="CreateContainer within sandbox \"ca77888fc190137af9a8efa2250f0ed767d4277aaba76ef276ae90c84d6094bf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"73b4c8c5e3ee8f3dec510d842dbee0006e007f45128d25ecc134054ab2ec9dc7\"" Sep 4 00:04:19.753812 containerd[1597]: time="2025-09-04T00:04:19.753602417Z" level=info msg="StartContainer for \"73b4c8c5e3ee8f3dec510d842dbee0006e007f45128d25ecc134054ab2ec9dc7\"" Sep 4 00:04:19.754975 containerd[1597]: time="2025-09-04T00:04:19.754941267Z" level=info msg="connecting to shim 73b4c8c5e3ee8f3dec510d842dbee0006e007f45128d25ecc134054ab2ec9dc7" address="unix:///run/containerd/s/647284aa74a86f0adc7cfefc77a4888282234212d2da4c3e138208c0704a6b6e" protocol=ttrpc version=3 Sep 4 00:04:19.788125 systemd[1]: Started cri-containerd-73b4c8c5e3ee8f3dec510d842dbee0006e007f45128d25ecc134054ab2ec9dc7.scope - libcontainer container 73b4c8c5e3ee8f3dec510d842dbee0006e007f45128d25ecc134054ab2ec9dc7. 
Sep 4 00:04:19.857814 containerd[1597]: time="2025-09-04T00:04:19.857759491Z" level=info msg="StartContainer for \"73b4c8c5e3ee8f3dec510d842dbee0006e007f45128d25ecc134054ab2ec9dc7\" returns successfully" Sep 4 00:04:20.583948 kubelet[2785]: E0904 00:04:20.583900 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:20.633004 kubelet[2785]: E0904 00:04:20.632937 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.633004 kubelet[2785]: W0904 00:04:20.632980 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.633247 kubelet[2785]: E0904 00:04:20.633015 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.633338 kubelet[2785]: E0904 00:04:20.633317 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.633338 kubelet[2785]: W0904 00:04:20.633330 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.633413 kubelet[2785]: E0904 00:04:20.633342 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.633565 kubelet[2785]: E0904 00:04:20.633544 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.633565 kubelet[2785]: W0904 00:04:20.633557 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.633657 kubelet[2785]: E0904 00:04:20.633568 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.633845 kubelet[2785]: E0904 00:04:20.633822 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.633845 kubelet[2785]: W0904 00:04:20.633839 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.633953 kubelet[2785]: E0904 00:04:20.633852 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.634121 kubelet[2785]: E0904 00:04:20.634094 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.634121 kubelet[2785]: W0904 00:04:20.634112 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.634202 kubelet[2785]: E0904 00:04:20.634122 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.634362 kubelet[2785]: E0904 00:04:20.634347 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.634362 kubelet[2785]: W0904 00:04:20.634359 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.634452 kubelet[2785]: E0904 00:04:20.634368 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.634553 kubelet[2785]: E0904 00:04:20.634537 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.634553 kubelet[2785]: W0904 00:04:20.634549 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.634623 kubelet[2785]: E0904 00:04:20.634558 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.634779 kubelet[2785]: E0904 00:04:20.634763 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.634779 kubelet[2785]: W0904 00:04:20.634775 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.634902 kubelet[2785]: E0904 00:04:20.634824 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.635040 kubelet[2785]: E0904 00:04:20.635023 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.635040 kubelet[2785]: W0904 00:04:20.635035 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.635120 kubelet[2785]: E0904 00:04:20.635045 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.635236 kubelet[2785]: E0904 00:04:20.635221 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.635236 kubelet[2785]: W0904 00:04:20.635232 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.635315 kubelet[2785]: E0904 00:04:20.635242 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.635422 kubelet[2785]: E0904 00:04:20.635407 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.635422 kubelet[2785]: W0904 00:04:20.635418 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.635602 kubelet[2785]: E0904 00:04:20.635427 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.635602 kubelet[2785]: E0904 00:04:20.635593 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.635602 kubelet[2785]: W0904 00:04:20.635601 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.635701 kubelet[2785]: E0904 00:04:20.635611 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.635853 kubelet[2785]: E0904 00:04:20.635838 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.635853 kubelet[2785]: W0904 00:04:20.635850 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.635938 kubelet[2785]: E0904 00:04:20.635860 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.636086 kubelet[2785]: E0904 00:04:20.636070 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.636086 kubelet[2785]: W0904 00:04:20.636082 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.636157 kubelet[2785]: E0904 00:04:20.636093 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.636312 kubelet[2785]: E0904 00:04:20.636297 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.636312 kubelet[2785]: W0904 00:04:20.636309 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.636386 kubelet[2785]: E0904 00:04:20.636319 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.668678 kubelet[2785]: E0904 00:04:20.668649 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.668678 kubelet[2785]: W0904 00:04:20.668672 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.668862 kubelet[2785]: E0904 00:04:20.668692 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.669013 kubelet[2785]: E0904 00:04:20.668985 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.669013 kubelet[2785]: W0904 00:04:20.668999 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.669093 kubelet[2785]: E0904 00:04:20.669019 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.669366 kubelet[2785]: E0904 00:04:20.669351 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.669366 kubelet[2785]: W0904 00:04:20.669364 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.669427 kubelet[2785]: E0904 00:04:20.669384 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.669847 kubelet[2785]: E0904 00:04:20.669778 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.669883 kubelet[2785]: W0904 00:04:20.669847 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.669917 kubelet[2785]: E0904 00:04:20.669899 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.670142 kubelet[2785]: E0904 00:04:20.670123 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.670142 kubelet[2785]: W0904 00:04:20.670138 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.670212 kubelet[2785]: E0904 00:04:20.670159 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.670425 kubelet[2785]: E0904 00:04:20.670400 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.670425 kubelet[2785]: W0904 00:04:20.670415 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.670500 kubelet[2785]: E0904 00:04:20.670432 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.670649 kubelet[2785]: E0904 00:04:20.670633 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.670649 kubelet[2785]: W0904 00:04:20.670645 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.670726 kubelet[2785]: E0904 00:04:20.670684 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.670875 kubelet[2785]: E0904 00:04:20.670860 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.670875 kubelet[2785]: W0904 00:04:20.670873 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.670959 kubelet[2785]: E0904 00:04:20.670906 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.671089 kubelet[2785]: E0904 00:04:20.671072 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.671089 kubelet[2785]: W0904 00:04:20.671085 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.671157 kubelet[2785]: E0904 00:04:20.671101 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.671344 kubelet[2785]: E0904 00:04:20.671324 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.671344 kubelet[2785]: W0904 00:04:20.671338 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.671507 kubelet[2785]: E0904 00:04:20.671356 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.671651 kubelet[2785]: E0904 00:04:20.671625 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.671651 kubelet[2785]: W0904 00:04:20.671644 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.671728 kubelet[2785]: E0904 00:04:20.671666 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.671961 kubelet[2785]: E0904 00:04:20.671939 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.671961 kubelet[2785]: W0904 00:04:20.671956 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.672048 kubelet[2785]: E0904 00:04:20.671990 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.672312 kubelet[2785]: E0904 00:04:20.672280 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.672312 kubelet[2785]: W0904 00:04:20.672297 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.672388 kubelet[2785]: E0904 00:04:20.672317 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.672537 kubelet[2785]: E0904 00:04:20.672509 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.672537 kubelet[2785]: W0904 00:04:20.672527 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.672618 kubelet[2785]: E0904 00:04:20.672545 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.672762 kubelet[2785]: E0904 00:04:20.672745 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.672762 kubelet[2785]: W0904 00:04:20.672756 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.672860 kubelet[2785]: E0904 00:04:20.672773 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.673019 kubelet[2785]: E0904 00:04:20.672999 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.673019 kubelet[2785]: W0904 00:04:20.673011 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.673104 kubelet[2785]: E0904 00:04:20.673027 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:20.673359 kubelet[2785]: E0904 00:04:20.673331 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.673359 kubelet[2785]: W0904 00:04:20.673347 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.673426 kubelet[2785]: E0904 00:04:20.673369 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:20.673576 kubelet[2785]: E0904 00:04:20.673557 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:20.673576 kubelet[2785]: W0904 00:04:20.673569 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:20.673664 kubelet[2785]: E0904 00:04:20.673581 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:04:21.493647 kubelet[2785]: E0904 00:04:21.493542 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78n75" podUID="c0c8d5a4-31f5-454a-9128-02468bd71436" Sep 4 00:04:21.588051 kubelet[2785]: I0904 00:04:21.587991 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:04:21.589963 kubelet[2785]: E0904 00:04:21.589368 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:21.643540 kubelet[2785]: E0904 00:04:21.643488 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:04:21.643540 kubelet[2785]: W0904 00:04:21.643520 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:04:21.643540 kubelet[2785]: E0904 00:04:21.643550 2785 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:04:21.694281 containerd[1597]: time="2025-09-04T00:04:21.694154591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:21.695588 containerd[1597]: time="2025-09-04T00:04:21.695464900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 4 00:04:21.697964 containerd[1597]: time="2025-09-04T00:04:21.697401375Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:21.714921 containerd[1597]: time="2025-09-04T00:04:21.714874265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:21.715897 containerd[1597]: time="2025-09-04T00:04:21.715870154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.991522043s" Sep 4 00:04:21.715943 containerd[1597]: time="2025-09-04T00:04:21.715904800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 4 00:04:21.718848 containerd[1597]: time="2025-09-04T00:04:21.718777834Z" level=info msg="CreateContainer within sandbox \"c9300f688a7e4e46eda809013452a933fb50805679cb86479a2142908fe9e697\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 00:04:21.786730 containerd[1597]: time="2025-09-04T00:04:21.786567151Z" level=info msg="Container 5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:21.797882 containerd[1597]: time="2025-09-04T00:04:21.797813467Z" level=info msg="CreateContainer within sandbox \"c9300f688a7e4e46eda809013452a933fb50805679cb86479a2142908fe9e697\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f\"" Sep 4 00:04:21.798391 containerd[1597]: time="2025-09-04T00:04:21.798356300Z" level=info msg="StartContainer for \"5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f\"" Sep 4 00:04:21.799928 containerd[1597]: time="2025-09-04T00:04:21.799899491Z" level=info msg="connecting to shim 5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f" address="unix:///run/containerd/s/e99ae24cddf1d0f466c0a32c61ba59b9ea77dd9f448bb0ec60fa1477c61d830c" protocol=ttrpc version=3 Sep 4 00:04:21.820934 systemd[1]: Started cri-containerd-5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f.scope - libcontainer container 5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f. Sep 4 00:04:21.873287 containerd[1597]: time="2025-09-04T00:04:21.873238966Z" level=info msg="StartContainer for \"5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f\" returns successfully" Sep 4 00:04:21.889977 systemd[1]: cri-containerd-5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f.scope: Deactivated successfully. 
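The flexvol-driver container started above is what eventually installs the Calico FlexVolume driver binary; the flood of "unexpected end of JSON input" errors earlier in the log is the kubelet's plugin prober executing `<plugin-dir>/nodeagent~uds/uds init` before that binary exists and trying to JSON-decode an empty stdout. A minimal sketch of that handshake, assuming the conventional FlexVolume `init` reply shape (the payload here is illustrative, not captured from this system):

```python
import json

def parse_driver_init(stdout: str) -> dict:
    """Mimic the kubelet's driver-call step: decode the driver's stdout as JSON.
    A missing executable yields "" -> the 'unexpected end of JSON input' error
    seen in driver-call.go:262 above (Python raises JSONDecodeError instead)."""
    return json.loads(stdout)

# A well-formed FlexVolume init reply (fields per the FlexVolume convention):
ok = parse_driver_init('{"status": "Success", "capabilities": {"attach": false}}')
assert ok["status"] == "Success"

# The failing case in the log: no executable on $PATH, so no output at all.
try:
    parse_driver_init("")
except json.JSONDecodeError:
    pass  # kubelet logs this and skips the nodeagent~uds plugin directory
```

Once the driver binary lands in `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/`, the next probe cycle parses a real reply and the errors stop.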
Sep 4 00:04:21.892320 containerd[1597]: time="2025-09-04T00:04:21.892273267Z" level=info msg="received exit event container_id:\"5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f\" id:\"5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f\" pid:3496 exited_at:{seconds:1756944261 nanos:891712545}" Sep 4 00:04:21.892635 systemd[1]: cri-containerd-5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f.scope: Consumed 41ms CPU time, 6.4M memory peak, 3.3M written to disk. Sep 4 00:04:21.892809 containerd[1597]: time="2025-09-04T00:04:21.892756579Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f\" id:\"5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f\" pid:3496 exited_at:{seconds:1756944261 nanos:891712545}" Sep 4 00:04:21.919006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b0c53105f7a8aa0c3f99c19e56df8615d0cc2182ee6b28eca438aaae0f3126f-rootfs.mount: Deactivated successfully. 
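The containerd TaskExit event above carries the exit time as an epoch pair (`seconds:1756944261 nanos:891712545`). Converting the seconds field confirms it lines up with the journal's own wall clock for that record:

```python
from datetime import datetime, timezone

# exited_at.seconds from the TaskExit event for container 5b0c5310...
exited = datetime.fromtimestamp(1756944261, tz=timezone.utc)

# Matches the surrounding log timestamps (Sep 4 00:04:21 UTC)
assert exited.isoformat() == "2025-09-04T00:04:21+00:00"
```

The nanos field (891712545) supplies the sub-second part, agreeing with the containerd record time of 00:04:21.892.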
Sep 4 00:04:22.596873 containerd[1597]: time="2025-09-04T00:04:22.596771943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 4 00:04:22.615289 kubelet[2785]: I0904 00:04:22.615198 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f98578b74-z2pkb" podStartSLOduration=3.76799148 podStartE2EDuration="5.615176903s" podCreationTimestamp="2025-09-04 00:04:17 +0000 UTC" firstStartedPulling="2025-09-04 00:04:17.876163186 +0000 UTC m=+18.542538398" lastFinishedPulling="2025-09-04 00:04:19.723348609 +0000 UTC m=+20.389723821" observedRunningTime="2025-09-04 00:04:20.596171959 +0000 UTC m=+21.262547181" watchObservedRunningTime="2025-09-04 00:04:22.615176903 +0000 UTC m=+23.281552125" Sep 4 00:04:23.490111 kubelet[2785]: E0904 00:04:23.490041 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78n75" podUID="c0c8d5a4-31f5-454a-9128-02468bd71436" Sep 4 00:04:25.491915 kubelet[2785]: E0904 00:04:25.491812 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78n75" podUID="c0c8d5a4-31f5-454a-9128-02468bd71436" Sep 4 00:04:26.790067 containerd[1597]: time="2025-09-04T00:04:26.790006663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:26.790754 containerd[1597]: time="2025-09-04T00:04:26.790686237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 4 00:04:26.791863 containerd[1597]: 
time="2025-09-04T00:04:26.791813899Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:26.794186 containerd[1597]: time="2025-09-04T00:04:26.794145708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:26.794766 containerd[1597]: time="2025-09-04T00:04:26.794720972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.197861953s" Sep 4 00:04:26.794766 containerd[1597]: time="2025-09-04T00:04:26.794760261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 4 00:04:26.797033 containerd[1597]: time="2025-09-04T00:04:26.797001868Z" level=info msg="CreateContainer within sandbox \"c9300f688a7e4e46eda809013452a933fb50805679cb86479a2142908fe9e697\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 00:04:26.806848 containerd[1597]: time="2025-09-04T00:04:26.806806399Z" level=info msg="Container d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:26.820668 containerd[1597]: time="2025-09-04T00:04:26.820605394Z" level=info msg="CreateContainer within sandbox \"c9300f688a7e4e46eda809013452a933fb50805679cb86479a2142908fe9e697\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30\"" Sep 4 
00:04:26.821213 containerd[1597]: time="2025-09-04T00:04:26.821179055Z" level=info msg="StartContainer for \"d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30\"" Sep 4 00:04:26.822680 containerd[1597]: time="2025-09-04T00:04:26.822655397Z" level=info msg="connecting to shim d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30" address="unix:///run/containerd/s/e99ae24cddf1d0f466c0a32c61ba59b9ea77dd9f448bb0ec60fa1477c61d830c" protocol=ttrpc version=3 Sep 4 00:04:26.846956 systemd[1]: Started cri-containerd-d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30.scope - libcontainer container d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30. Sep 4 00:04:27.432469 containerd[1597]: time="2025-09-04T00:04:27.432423855Z" level=info msg="StartContainer for \"d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30\" returns successfully" Sep 4 00:04:27.490041 kubelet[2785]: E0904 00:04:27.489971 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78n75" podUID="c0c8d5a4-31f5-454a-9128-02468bd71436" Sep 4 00:04:28.198448 systemd[1]: cri-containerd-d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30.scope: Deactivated successfully. Sep 4 00:04:28.199854 systemd[1]: cri-containerd-d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30.scope: Consumed 634ms CPU time, 177.7M memory peak, 4K read from disk, 171.3M written to disk. 
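The kubelet records in this log use the klog header format: a severity letter (`I`/`W`/`E`/`F`), `MMDD`, wall-clock time, PID, and `file:line` of the call site, followed by the message. A hypothetical parser sketch for pulling those fields out of a record (the regex and field names are ours, not from any kubelet tooling):

```python
import re

# klog header: severity letter, MMDD, time, PID, source file:line, "] ", message
KLOG = re.compile(
    r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<pid>\d+) (?P<src>[\w./]+:\d+)\] (?P<msg>.*)"
)

# Sample record taken from the log above
line = 'E0904 00:04:27.489971 2785 pod_workers.go:1301] "Error syncing pod, skipping"'
m = KLOG.match(line)
assert m is not None
assert m["sev"] == "E" and m["pid"] == "2785"
assert m["src"] == "pod_workers.go:1301"
```

The severity letter explains the `E0904` / `W0904` / `I0904` prefixes repeated throughout the kubelet entries: they are severity plus the date (September 4), not identifiers.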
Sep 4 00:04:28.201307 containerd[1597]: time="2025-09-04T00:04:28.201273770Z" level=info msg="received exit event container_id:\"d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30\" id:\"d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30\" pid:3554 exited_at:{seconds:1756944268 nanos:201034174}" Sep 4 00:04:28.201692 containerd[1597]: time="2025-09-04T00:04:28.201339217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30\" id:\"d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30\" pid:3554 exited_at:{seconds:1756944268 nanos:201034174}" Sep 4 00:04:28.225447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d63be4708b69f46ebd61ec641aa0c84ccdb8b470bfbf4f424fbe99f06902dc30-rootfs.mount: Deactivated successfully. Sep 4 00:04:28.346667 kubelet[2785]: I0904 00:04:28.346603 2785 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 4 00:04:28.386452 systemd[1]: Created slice kubepods-burstable-pod4ecd7f27_3eca_46c7_ae6e_a7dacd4d5eba.slice - libcontainer container kubepods-burstable-pod4ecd7f27_3eca_46c7_ae6e_a7dacd4d5eba.slice. Sep 4 00:04:28.401776 systemd[1]: Created slice kubepods-besteffort-pod9df1ad8b_587e_4cd3_b349_728e4bf3bbf6.slice - libcontainer container kubepods-besteffort-pod9df1ad8b_587e_4cd3_b349_728e4bf3bbf6.slice. Sep 4 00:04:28.407500 systemd[1]: Created slice kubepods-burstable-pod3d897f9a_d881_4326_9bb5_5e48a00efbd2.slice - libcontainer container kubepods-burstable-pod3d897f9a_d881_4326_9bb5_5e48a00efbd2.slice. Sep 4 00:04:28.413155 systemd[1]: Created slice kubepods-besteffort-pod0863bf91_0c5d_4e1c_b033_fc3cb7c1a687.slice - libcontainer container kubepods-besteffort-pod0863bf91_0c5d_4e1c_b033_fc3cb7c1a687.slice. 
Sep 4 00:04:28.417821 systemd[1]: Created slice kubepods-besteffort-pod6b5d325a_6202_41d1_bdf7_3a8725d4ec52.slice - libcontainer container kubepods-besteffort-pod6b5d325a_6202_41d1_bdf7_3a8725d4ec52.slice. Sep 4 00:04:28.422416 systemd[1]: Created slice kubepods-besteffort-pod873bdebd_b74f_4456_9ddf_7b47d2199016.slice - libcontainer container kubepods-besteffort-pod873bdebd_b74f_4456_9ddf_7b47d2199016.slice. Sep 4 00:04:28.424955 kubelet[2785]: I0904 00:04:28.424918 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8c6w\" (UniqueName: \"kubernetes.io/projected/873bdebd-b74f-4456-9ddf-7b47d2199016-kube-api-access-c8c6w\") pod \"calico-apiserver-598f7f498f-b56pq\" (UID: \"873bdebd-b74f-4456-9ddf-7b47d2199016\") " pod="calico-apiserver/calico-apiserver-598f7f498f-b56pq" Sep 4 00:04:28.424955 kubelet[2785]: I0904 00:04:28.424957 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4798d\" (UniqueName: \"kubernetes.io/projected/9df1ad8b-587e-4cd3-b349-728e4bf3bbf6-kube-api-access-4798d\") pod \"goldmane-7988f88666-fshvr\" (UID: \"9df1ad8b-587e-4cd3-b349-728e4bf3bbf6\") " pod="calico-system/goldmane-7988f88666-fshvr" Sep 4 00:04:28.425130 kubelet[2785]: I0904 00:04:28.424977 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9df1ad8b-587e-4cd3-b349-728e4bf3bbf6-goldmane-ca-bundle\") pod \"goldmane-7988f88666-fshvr\" (UID: \"9df1ad8b-587e-4cd3-b349-728e4bf3bbf6\") " pod="calico-system/goldmane-7988f88666-fshvr" Sep 4 00:04:28.425130 kubelet[2785]: I0904 00:04:28.424994 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d897f9a-d881-4326-9bb5-5e48a00efbd2-config-volume\") pod \"coredns-7c65d6cfc9-pf6tc\" (UID: 
\"3d897f9a-d881-4326-9bb5-5e48a00efbd2\") " pod="kube-system/coredns-7c65d6cfc9-pf6tc" Sep 4 00:04:28.425130 kubelet[2785]: I0904 00:04:28.425012 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba-config-volume\") pod \"coredns-7c65d6cfc9-j7nwn\" (UID: \"4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba\") " pod="kube-system/coredns-7c65d6cfc9-j7nwn" Sep 4 00:04:28.425130 kubelet[2785]: I0904 00:04:28.425032 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9df1ad8b-587e-4cd3-b349-728e4bf3bbf6-goldmane-key-pair\") pod \"goldmane-7988f88666-fshvr\" (UID: \"9df1ad8b-587e-4cd3-b349-728e4bf3bbf6\") " pod="calico-system/goldmane-7988f88666-fshvr" Sep 4 00:04:28.425130 kubelet[2785]: I0904 00:04:28.425051 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/873bdebd-b74f-4456-9ddf-7b47d2199016-calico-apiserver-certs\") pod \"calico-apiserver-598f7f498f-b56pq\" (UID: \"873bdebd-b74f-4456-9ddf-7b47d2199016\") " pod="calico-apiserver/calico-apiserver-598f7f498f-b56pq" Sep 4 00:04:28.425295 kubelet[2785]: I0904 00:04:28.425068 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xbdw\" (UniqueName: \"kubernetes.io/projected/0863bf91-0c5d-4e1c-b033-fc3cb7c1a687-kube-api-access-5xbdw\") pod \"calico-kube-controllers-7b844b9d59-mwc9s\" (UID: \"0863bf91-0c5d-4e1c-b033-fc3cb7c1a687\") " pod="calico-system/calico-kube-controllers-7b844b9d59-mwc9s" Sep 4 00:04:28.425295 kubelet[2785]: I0904 00:04:28.425087 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9df1ad8b-587e-4cd3-b349-728e4bf3bbf6-config\") pod \"goldmane-7988f88666-fshvr\" (UID: \"9df1ad8b-587e-4cd3-b349-728e4bf3bbf6\") " pod="calico-system/goldmane-7988f88666-fshvr" Sep 4 00:04:28.425295 kubelet[2785]: I0904 00:04:28.425107 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qw44\" (UniqueName: \"kubernetes.io/projected/4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba-kube-api-access-9qw44\") pod \"coredns-7c65d6cfc9-j7nwn\" (UID: \"4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba\") " pod="kube-system/coredns-7c65d6cfc9-j7nwn" Sep 4 00:04:28.425295 kubelet[2785]: I0904 00:04:28.425124 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0863bf91-0c5d-4e1c-b033-fc3cb7c1a687-tigera-ca-bundle\") pod \"calico-kube-controllers-7b844b9d59-mwc9s\" (UID: \"0863bf91-0c5d-4e1c-b033-fc3cb7c1a687\") " pod="calico-system/calico-kube-controllers-7b844b9d59-mwc9s" Sep 4 00:04:28.425295 kubelet[2785]: I0904 00:04:28.425140 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eae742d1-9bda-4601-af3d-ef6b66417a43-calico-apiserver-certs\") pod \"calico-apiserver-598f7f498f-g95k2\" (UID: \"eae742d1-9bda-4601-af3d-ef6b66417a43\") " pod="calico-apiserver/calico-apiserver-598f7f498f-g95k2" Sep 4 00:04:28.425459 kubelet[2785]: I0904 00:04:28.425156 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdrv\" (UniqueName: \"kubernetes.io/projected/3d897f9a-d881-4326-9bb5-5e48a00efbd2-kube-api-access-zzdrv\") pod \"coredns-7c65d6cfc9-pf6tc\" (UID: \"3d897f9a-d881-4326-9bb5-5e48a00efbd2\") " pod="kube-system/coredns-7c65d6cfc9-pf6tc" Sep 4 00:04:28.425459 kubelet[2785]: I0904 00:04:28.425174 2785 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-whisker-backend-key-pair\") pod \"whisker-86c696d59-lgg6n\" (UID: \"6b5d325a-6202-41d1-bdf7-3a8725d4ec52\") " pod="calico-system/whisker-86c696d59-lgg6n" Sep 4 00:04:28.425459 kubelet[2785]: I0904 00:04:28.425198 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks4hr\" (UniqueName: \"kubernetes.io/projected/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-kube-api-access-ks4hr\") pod \"whisker-86c696d59-lgg6n\" (UID: \"6b5d325a-6202-41d1-bdf7-3a8725d4ec52\") " pod="calico-system/whisker-86c696d59-lgg6n" Sep 4 00:04:28.425459 kubelet[2785]: I0904 00:04:28.425215 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-whisker-ca-bundle\") pod \"whisker-86c696d59-lgg6n\" (UID: \"6b5d325a-6202-41d1-bdf7-3a8725d4ec52\") " pod="calico-system/whisker-86c696d59-lgg6n" Sep 4 00:04:28.425459 kubelet[2785]: I0904 00:04:28.425231 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcpnt\" (UniqueName: \"kubernetes.io/projected/eae742d1-9bda-4601-af3d-ef6b66417a43-kube-api-access-wcpnt\") pod \"calico-apiserver-598f7f498f-g95k2\" (UID: \"eae742d1-9bda-4601-af3d-ef6b66417a43\") " pod="calico-apiserver/calico-apiserver-598f7f498f-g95k2" Sep 4 00:04:28.427857 systemd[1]: Created slice kubepods-besteffort-podeae742d1_9bda_4601_af3d_ef6b66417a43.slice - libcontainer container kubepods-besteffort-podeae742d1_9bda_4601_af3d_ef6b66417a43.slice. 
Sep 4 00:04:28.613291 containerd[1597]: time="2025-09-04T00:04:28.613230429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 4 00:04:28.699683 kubelet[2785]: E0904 00:04:28.699536 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:28.701399 containerd[1597]: time="2025-09-04T00:04:28.700968842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j7nwn,Uid:4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:28.707230 containerd[1597]: time="2025-09-04T00:04:28.707136752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-fshvr,Uid:9df1ad8b-587e-4cd3-b349-728e4bf3bbf6,Namespace:calico-system,Attempt:0,}" Sep 4 00:04:28.711712 kubelet[2785]: E0904 00:04:28.711641 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:28.712693 containerd[1597]: time="2025-09-04T00:04:28.712629597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pf6tc,Uid:3d897f9a-d881-4326-9bb5-5e48a00efbd2,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:28.717524 containerd[1597]: time="2025-09-04T00:04:28.717440197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b844b9d59-mwc9s,Uid:0863bf91-0c5d-4e1c-b033-fc3cb7c1a687,Namespace:calico-system,Attempt:0,}" Sep 4 00:04:28.722243 containerd[1597]: time="2025-09-04T00:04:28.722149618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c696d59-lgg6n,Uid:6b5d325a-6202-41d1-bdf7-3a8725d4ec52,Namespace:calico-system,Attempt:0,}" Sep 4 00:04:28.729060 containerd[1597]: time="2025-09-04T00:04:28.728998323Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-598f7f498f-b56pq,Uid:873bdebd-b74f-4456-9ddf-7b47d2199016,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:04:28.731464 containerd[1597]: time="2025-09-04T00:04:28.731398812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598f7f498f-g95k2,Uid:eae742d1-9bda-4601-af3d-ef6b66417a43,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:04:29.075817 containerd[1597]: time="2025-09-04T00:04:29.075737565Z" level=error msg="Failed to destroy network for sandbox \"f3b5cc1cdc84b1c7a51868576f9081765606757649274b9614d0862cad2a30fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.105300 containerd[1597]: time="2025-09-04T00:04:29.104819580Z" level=error msg="Failed to destroy network for sandbox \"3572b9dd617831c016892fa80564d3e7ab9fbfa9e841c2fd859bd5cd3daee63a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.116830 containerd[1597]: time="2025-09-04T00:04:29.116703482Z" level=error msg="Failed to destroy network for sandbox \"e912b1efc3c1c8ec053f471f389c44d0b686c347932c701302d716837ad876e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.122316 containerd[1597]: time="2025-09-04T00:04:29.122249429Z" level=error msg="Failed to destroy network for sandbox \"e045dca2bdf60568634e9342aad70484a474955596aa596cec1a9cfcd6c70361\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.161225 containerd[1597]: 
time="2025-09-04T00:04:29.161075952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598f7f498f-b56pq,Uid:873bdebd-b74f-4456-9ddf-7b47d2199016,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e045dca2bdf60568634e9342aad70484a474955596aa596cec1a9cfcd6c70361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.162099 containerd[1597]: time="2025-09-04T00:04:29.161076062Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598f7f498f-g95k2,Uid:eae742d1-9bda-4601-af3d-ef6b66417a43,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3b5cc1cdc84b1c7a51868576f9081765606757649274b9614d0862cad2a30fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.162099 containerd[1597]: time="2025-09-04T00:04:29.161107643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pf6tc,Uid:3d897f9a-d881-4326-9bb5-5e48a00efbd2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e912b1efc3c1c8ec053f471f389c44d0b686c347932c701302d716837ad876e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.162099 containerd[1597]: time="2025-09-04T00:04:29.161557937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j7nwn,Uid:4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"3572b9dd617831c016892fa80564d3e7ab9fbfa9e841c2fd859bd5cd3daee63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.172886 containerd[1597]: time="2025-09-04T00:04:29.172811119Z" level=error msg="Failed to destroy network for sandbox \"50b3bddbdd3b307643cdb0d95400cd977bf5baf519c25c1e86bbce8fd4814fac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.177097 containerd[1597]: time="2025-09-04T00:04:29.177008498Z" level=error msg="Failed to destroy network for sandbox \"803536c4be4cc13a77903a07ef4a7fbcda577df6b6b0d2ceb07f0014c674c490\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.181869 containerd[1597]: time="2025-09-04T00:04:29.181655140Z" level=error msg="Failed to destroy network for sandbox \"1729611baf5eb93f3ef4e5758b9afd3552bbcb8bc5212c099fd2cb962da09e9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.183570 containerd[1597]: time="2025-09-04T00:04:29.183447237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b844b9d59-mwc9s,Uid:0863bf91-0c5d-4e1c-b033-fc3cb7c1a687,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b3bddbdd3b307643cdb0d95400cd977bf5baf519c25c1e86bbce8fd4814fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.184712 kubelet[2785]: E0904 00:04:29.183903 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e045dca2bdf60568634e9342aad70484a474955596aa596cec1a9cfcd6c70361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.184712 kubelet[2785]: E0904 00:04:29.184095 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e045dca2bdf60568634e9342aad70484a474955596aa596cec1a9cfcd6c70361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598f7f498f-b56pq" Sep 4 00:04:29.184712 kubelet[2785]: E0904 00:04:29.184232 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e045dca2bdf60568634e9342aad70484a474955596aa596cec1a9cfcd6c70361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598f7f498f-b56pq" Sep 4 00:04:29.184712 kubelet[2785]: E0904 00:04:29.184271 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3572b9dd617831c016892fa80564d3e7ab9fbfa9e841c2fd859bd5cd3daee63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.185083 kubelet[2785]: E0904 
00:04:29.184342 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3572b9dd617831c016892fa80564d3e7ab9fbfa9e841c2fd859bd5cd3daee63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j7nwn" Sep 4 00:04:29.185083 kubelet[2785]: E0904 00:04:29.184350 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-598f7f498f-b56pq_calico-apiserver(873bdebd-b74f-4456-9ddf-7b47d2199016)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-598f7f498f-b56pq_calico-apiserver(873bdebd-b74f-4456-9ddf-7b47d2199016)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e045dca2bdf60568634e9342aad70484a474955596aa596cec1a9cfcd6c70361\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598f7f498f-b56pq" podUID="873bdebd-b74f-4456-9ddf-7b47d2199016" Sep 4 00:04:29.185083 kubelet[2785]: E0904 00:04:29.184413 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3572b9dd617831c016892fa80564d3e7ab9fbfa9e841c2fd859bd5cd3daee63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j7nwn" Sep 4 00:04:29.185290 kubelet[2785]: E0904 00:04:29.184516 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7c65d6cfc9-j7nwn_kube-system(4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-j7nwn_kube-system(4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3572b9dd617831c016892fa80564d3e7ab9fbfa9e841c2fd859bd5cd3daee63a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-j7nwn" podUID="4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba" Sep 4 00:04:29.185290 kubelet[2785]: E0904 00:04:29.183929 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e912b1efc3c1c8ec053f471f389c44d0b686c347932c701302d716837ad876e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.185290 kubelet[2785]: E0904 00:04:29.184618 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e912b1efc3c1c8ec053f471f389c44d0b686c347932c701302d716837ad876e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pf6tc" Sep 4 00:04:29.185426 kubelet[2785]: E0904 00:04:29.184654 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e912b1efc3c1c8ec053f471f389c44d0b686c347932c701302d716837ad876e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pf6tc" Sep 4 00:04:29.185426 kubelet[2785]: E0904 00:04:29.184652 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3b5cc1cdc84b1c7a51868576f9081765606757649274b9614d0862cad2a30fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.185426 kubelet[2785]: E0904 00:04:29.184737 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3b5cc1cdc84b1c7a51868576f9081765606757649274b9614d0862cad2a30fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598f7f498f-g95k2" Sep 4 00:04:29.185426 kubelet[2785]: E0904 00:04:29.184763 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3b5cc1cdc84b1c7a51868576f9081765606757649274b9614d0862cad2a30fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598f7f498f-g95k2" Sep 4 00:04:29.185542 kubelet[2785]: E0904 00:04:29.184846 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-598f7f498f-g95k2_calico-apiserver(eae742d1-9bda-4601-af3d-ef6b66417a43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-598f7f498f-g95k2_calico-apiserver(eae742d1-9bda-4601-af3d-ef6b66417a43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f3b5cc1cdc84b1c7a51868576f9081765606757649274b9614d0862cad2a30fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598f7f498f-g95k2" podUID="eae742d1-9bda-4601-af3d-ef6b66417a43" Sep 4 00:04:29.185542 kubelet[2785]: E0904 00:04:29.184123 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b3bddbdd3b307643cdb0d95400cd977bf5baf519c25c1e86bbce8fd4814fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:04:29.185542 kubelet[2785]: E0904 00:04:29.184938 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b3bddbdd3b307643cdb0d95400cd977bf5baf519c25c1e86bbce8fd4814fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b844b9d59-mwc9s" Sep 4 00:04:29.185754 kubelet[2785]: E0904 00:04:29.184980 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b3bddbdd3b307643cdb0d95400cd977bf5baf519c25c1e86bbce8fd4814fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b844b9d59-mwc9s" Sep 4 00:04:29.185754 kubelet[2785]: E0904 00:04:29.184676 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7c65d6cfc9-pf6tc_kube-system(3d897f9a-d881-4326-9bb5-5e48a00efbd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-pf6tc_kube-system(3d897f9a-d881-4326-9bb5-5e48a00efbd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e912b1efc3c1c8ec053f471f389c44d0b686c347932c701302d716837ad876e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pf6tc" podUID="3d897f9a-d881-4326-9bb5-5e48a00efbd2" Sep 4 00:04:29.185754 kubelet[2785]: E0904 00:04:29.185419 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b844b9d59-mwc9s_calico-system(0863bf91-0c5d-4e1c-b033-fc3cb7c1a687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b844b9d59-mwc9s_calico-system(0863bf91-0c5d-4e1c-b033-fc3cb7c1a687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50b3bddbdd3b307643cdb0d95400cd977bf5baf519c25c1e86bbce8fd4814fac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b844b9d59-mwc9s" podUID="0863bf91-0c5d-4e1c-b033-fc3cb7c1a687" Sep 4 00:04:29.187638 containerd[1597]: time="2025-09-04T00:04:29.186481997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c696d59-lgg6n,Uid:6b5d325a-6202-41d1-bdf7-3a8725d4ec52,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1729611baf5eb93f3ef4e5758b9afd3552bbcb8bc5212c099fd2cb962da09e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:04:29.187846 containerd[1597]: time="2025-09-04T00:04:29.186969623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-fshvr,Uid:9df1ad8b-587e-4cd3-b349-728e4bf3bbf6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"803536c4be4cc13a77903a07ef4a7fbcda577df6b6b0d2ceb07f0014c674c490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:04:29.188144 kubelet[2785]: E0904 00:04:29.188094 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803536c4be4cc13a77903a07ef4a7fbcda577df6b6b0d2ceb07f0014c674c490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:04:29.188144 kubelet[2785]: E0904 00:04:29.188117 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1729611baf5eb93f3ef4e5758b9afd3552bbcb8bc5212c099fd2cb962da09e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:04:29.188331 kubelet[2785]: E0904 00:04:29.188311 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1729611baf5eb93f3ef4e5758b9afd3552bbcb8bc5212c099fd2cb962da09e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86c696d59-lgg6n"
Sep 4 00:04:29.188331 kubelet[2785]: E0904 00:04:29.188315 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803536c4be4cc13a77903a07ef4a7fbcda577df6b6b0d2ceb07f0014c674c490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-fshvr"
Sep 4 00:04:29.188331 kubelet[2785]: E0904 00:04:29.188331 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1729611baf5eb93f3ef4e5758b9afd3552bbcb8bc5212c099fd2cb962da09e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86c696d59-lgg6n"
Sep 4 00:04:29.188490 kubelet[2785]: E0904 00:04:29.188343 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803536c4be4cc13a77903a07ef4a7fbcda577df6b6b0d2ceb07f0014c674c490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-fshvr"
Sep 4 00:04:29.188490 kubelet[2785]: E0904 00:04:29.188380 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-fshvr_calico-system(9df1ad8b-587e-4cd3-b349-728e4bf3bbf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-fshvr_calico-system(9df1ad8b-587e-4cd3-b349-728e4bf3bbf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"803536c4be4cc13a77903a07ef4a7fbcda577df6b6b0d2ceb07f0014c674c490\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-fshvr" podUID="9df1ad8b-587e-4cd3-b349-728e4bf3bbf6"
Sep 4 00:04:29.188490 kubelet[2785]: E0904 00:04:29.188384 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-86c696d59-lgg6n_calico-system(6b5d325a-6202-41d1-bdf7-3a8725d4ec52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-86c696d59-lgg6n_calico-system(6b5d325a-6202-41d1-bdf7-3a8725d4ec52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1729611baf5eb93f3ef4e5758b9afd3552bbcb8bc5212c099fd2cb962da09e9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86c696d59-lgg6n" podUID="6b5d325a-6202-41d1-bdf7-3a8725d4ec52"
Sep 4 00:04:29.499382 systemd[1]: Created slice kubepods-besteffort-podc0c8d5a4_31f5_454a_9128_02468bd71436.slice - libcontainer container kubepods-besteffort-podc0c8d5a4_31f5_454a_9128_02468bd71436.slice.
Sep 4 00:04:29.502872 containerd[1597]: time="2025-09-04T00:04:29.502828147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78n75,Uid:c0c8d5a4-31f5-454a-9128-02468bd71436,Namespace:calico-system,Attempt:0,}"
Sep 4 00:04:29.556507 containerd[1597]: time="2025-09-04T00:04:29.556449836Z" level=error msg="Failed to destroy network for sandbox \"e7e49f2945bf7d7f5d82b3a29cd1bc587e2d04a1b2c9d10edf6bd0dd04032370\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:04:29.558729 systemd[1]: run-netns-cni\x2d7181a803\x2da335\x2d2f8f\x2d4042\x2d2f94aac15ef6.mount: Deactivated successfully.
Sep 4 00:04:29.611034 containerd[1597]: time="2025-09-04T00:04:29.610983184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78n75,Uid:c0c8d5a4-31f5-454a-9128-02468bd71436,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7e49f2945bf7d7f5d82b3a29cd1bc587e2d04a1b2c9d10edf6bd0dd04032370\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:04:29.611235 kubelet[2785]: E0904 00:04:29.611144 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7e49f2945bf7d7f5d82b3a29cd1bc587e2d04a1b2c9d10edf6bd0dd04032370\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 00:04:29.611235 kubelet[2785]: E0904 00:04:29.611196 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7e49f2945bf7d7f5d82b3a29cd1bc587e2d04a1b2c9d10edf6bd0dd04032370\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78n75"
Sep 4 00:04:29.611235 kubelet[2785]: E0904 00:04:29.611214 2785 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7e49f2945bf7d7f5d82b3a29cd1bc587e2d04a1b2c9d10edf6bd0dd04032370\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78n75"
Sep 4 00:04:29.611392 kubelet[2785]: E0904 00:04:29.611277 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-78n75_calico-system(c0c8d5a4-31f5-454a-9128-02468bd71436)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-78n75_calico-system(c0c8d5a4-31f5-454a-9128-02468bd71436)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7e49f2945bf7d7f5d82b3a29cd1bc587e2d04a1b2c9d10edf6bd0dd04032370\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-78n75" podUID="c0c8d5a4-31f5-454a-9128-02468bd71436"
Sep 4 00:04:32.944233 kubelet[2785]: I0904 00:04:32.944162 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 00:04:32.944692 kubelet[2785]: E0904 00:04:32.944652 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:33.620475 kubelet[2785]: E0904 00:04:33.620436 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:04:37.875206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518364410.mount: Deactivated successfully.
Sep 4 00:04:39.492063 containerd[1597]: time="2025-09-04T00:04:39.491991558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:04:39.493509 containerd[1597]: time="2025-09-04T00:04:39.493444729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339"
Sep 4 00:04:39.496063 containerd[1597]: time="2025-09-04T00:04:39.496027784Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:04:39.498196 containerd[1597]: time="2025-09-04T00:04:39.498111289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:04:39.498485 containerd[1597]: time="2025-09-04T00:04:39.498447497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.885154659s"
Sep 4 00:04:39.498485 containerd[1597]: time="2025-09-04T00:04:39.498478641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\""
Sep 4 00:04:39.511993 containerd[1597]: time="2025-09-04T00:04:39.510886153Z" level=info msg="CreateContainer within sandbox \"c9300f688a7e4e46eda809013452a933fb50805679cb86479a2142908fe9e697\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 4 00:04:39.553862 containerd[1597]: time="2025-09-04T00:04:39.553805412Z" level=info msg="Container 5221b88c12d4e96e4e012b9bdd30e1a2d346d6c78be89d3f1aa365dd4761621a: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:04:39.567300 containerd[1597]: time="2025-09-04T00:04:39.567232499Z" level=info msg="CreateContainer within sandbox \"c9300f688a7e4e46eda809013452a933fb50805679cb86479a2142908fe9e697\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5221b88c12d4e96e4e012b9bdd30e1a2d346d6c78be89d3f1aa365dd4761621a\""
Sep 4 00:04:39.567991 containerd[1597]: time="2025-09-04T00:04:39.567949013Z" level=info msg="StartContainer for \"5221b88c12d4e96e4e012b9bdd30e1a2d346d6c78be89d3f1aa365dd4761621a\""
Sep 4 00:04:39.569886 containerd[1597]: time="2025-09-04T00:04:39.569845646Z" level=info msg="connecting to shim 5221b88c12d4e96e4e012b9bdd30e1a2d346d6c78be89d3f1aa365dd4761621a" address="unix:///run/containerd/s/e99ae24cddf1d0f466c0a32c61ba59b9ea77dd9f448bb0ec60fa1477c61d830c" protocol=ttrpc version=3
Sep 4 00:04:39.591026 systemd[1]: Started cri-containerd-5221b88c12d4e96e4e012b9bdd30e1a2d346d6c78be89d3f1aa365dd4761621a.scope - libcontainer container 5221b88c12d4e96e4e012b9bdd30e1a2d346d6c78be89d3f1aa365dd4761621a.
Sep 4 00:04:40.043991 systemd[1]: Started sshd@9-10.0.0.100:22-10.0.0.1:60604.service - OpenSSH per-connection server daemon (10.0.0.1:60604).
Sep 4 00:04:40.050032 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 4 00:04:40.051026 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Sep 4 00:04:40.053220 containerd[1597]: time="2025-09-04T00:04:40.053088141Z" level=info msg="StartContainer for \"5221b88c12d4e96e4e012b9bdd30e1a2d346d6c78be89d3f1aa365dd4761621a\" returns successfully"
Sep 4 00:04:40.118073 sshd[3916]: Accepted publickey for core from 10.0.0.1 port 60604 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:04:40.120611 sshd-session[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:04:40.130983 systemd-logind[1575]: New session 10 of user core.
Sep 4 00:04:40.136099 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 00:04:40.310686 kubelet[2785]: I0904 00:04:40.310211 2785 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-whisker-backend-key-pair\") pod \"6b5d325a-6202-41d1-bdf7-3a8725d4ec52\" (UID: \"6b5d325a-6202-41d1-bdf7-3a8725d4ec52\") "
Sep 4 00:04:40.310686 kubelet[2785]: I0904 00:04:40.310272 2785 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks4hr\" (UniqueName: \"kubernetes.io/projected/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-kube-api-access-ks4hr\") pod \"6b5d325a-6202-41d1-bdf7-3a8725d4ec52\" (UID: \"6b5d325a-6202-41d1-bdf7-3a8725d4ec52\") "
Sep 4 00:04:40.310686 kubelet[2785]: I0904 00:04:40.310300 2785 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-whisker-ca-bundle\") pod \"6b5d325a-6202-41d1-bdf7-3a8725d4ec52\" (UID: \"6b5d325a-6202-41d1-bdf7-3a8725d4ec52\") "
Sep 4 00:04:40.311938 kubelet[2785]: I0904 00:04:40.311862 2785 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6b5d325a-6202-41d1-bdf7-3a8725d4ec52" (UID: "6b5d325a-6202-41d1-bdf7-3a8725d4ec52"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 4 00:04:40.318062 kubelet[2785]: I0904 00:04:40.317935 2785 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-kube-api-access-ks4hr" (OuterVolumeSpecName: "kube-api-access-ks4hr") pod "6b5d325a-6202-41d1-bdf7-3a8725d4ec52" (UID: "6b5d325a-6202-41d1-bdf7-3a8725d4ec52"). InnerVolumeSpecName "kube-api-access-ks4hr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 4 00:04:40.318283 kubelet[2785]: I0904 00:04:40.318261 2785 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6b5d325a-6202-41d1-bdf7-3a8725d4ec52" (UID: "6b5d325a-6202-41d1-bdf7-3a8725d4ec52"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 4 00:04:40.319599 sshd[3918]: Connection closed by 10.0.0.1 port 60604
Sep 4 00:04:40.321005 sshd-session[3916]: pam_unix(sshd:session): session closed for user core
Sep 4 00:04:40.326857 systemd[1]: sshd@9-10.0.0.100:22-10.0.0.1:60604.service: Deactivated successfully.
Sep 4 00:04:40.329353 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 00:04:40.330442 systemd-logind[1575]: Session 10 logged out. Waiting for processes to exit.
Sep 4 00:04:40.332188 systemd-logind[1575]: Removed session 10.
Sep 4 00:04:40.411815 kubelet[2785]: I0904 00:04:40.411077 2785 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Sep 4 00:04:40.411815 kubelet[2785]: I0904 00:04:40.411123 2785 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks4hr\" (UniqueName: \"kubernetes.io/projected/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-kube-api-access-ks4hr\") on node \"localhost\" DevicePath \"\""
Sep 4 00:04:40.411815 kubelet[2785]: I0904 00:04:40.411154 2785 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b5d325a-6202-41d1-bdf7-3a8725d4ec52-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Sep 4 00:04:40.490400 containerd[1597]: time="2025-09-04T00:04:40.490340290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b844b9d59-mwc9s,Uid:0863bf91-0c5d-4e1c-b033-fc3cb7c1a687,Namespace:calico-system,Attempt:0,}"
Sep 4 00:04:40.490575 containerd[1597]: time="2025-09-04T00:04:40.490446501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78n75,Uid:c0c8d5a4-31f5-454a-9128-02468bd71436,Namespace:calico-system,Attempt:0,}"
Sep 4 00:04:40.506412 systemd[1]: var-lib-kubelet-pods-6b5d325a\x2d6202\x2d41d1\x2dbdf7\x2d3a8725d4ec52-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dks4hr.mount: Deactivated successfully.
Sep 4 00:04:40.506563 systemd[1]: var-lib-kubelet-pods-6b5d325a\x2d6202\x2d41d1\x2dbdf7\x2d3a8725d4ec52-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 4 00:04:40.678938 systemd[1]: Removed slice kubepods-besteffort-pod6b5d325a_6202_41d1_bdf7_3a8725d4ec52.slice - libcontainer container kubepods-besteffort-pod6b5d325a_6202_41d1_bdf7_3a8725d4ec52.slice.
Sep 4 00:04:40.682431 kubelet[2785]: I0904 00:04:40.681662 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kchnh" podStartSLOduration=2.202101073 podStartE2EDuration="23.681637555s" podCreationTimestamp="2025-09-04 00:04:17 +0000 UTC" firstStartedPulling="2025-09-04 00:04:18.020168389 +0000 UTC m=+18.686543601" lastFinishedPulling="2025-09-04 00:04:39.499704861 +0000 UTC m=+40.166080083" observedRunningTime="2025-09-04 00:04:40.679204431 +0000 UTC m=+41.345579643" watchObservedRunningTime="2025-09-04 00:04:40.681637555 +0000 UTC m=+41.348012767"
Sep 4 00:04:40.749361 systemd[1]: Created slice kubepods-besteffort-podb6ffd347_dd10_4183_bfe2_2964fa2c4902.slice - libcontainer container kubepods-besteffort-podb6ffd347_dd10_4183_bfe2_2964fa2c4902.slice.
Sep 4 00:04:40.815325 kubelet[2785]: I0904 00:04:40.813935 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b6ffd347-dd10-4183-bfe2-2964fa2c4902-whisker-backend-key-pair\") pod \"whisker-bc48c4f66-4h96m\" (UID: \"b6ffd347-dd10-4183-bfe2-2964fa2c4902\") " pod="calico-system/whisker-bc48c4f66-4h96m"
Sep 4 00:04:40.815325 kubelet[2785]: I0904 00:04:40.814031 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6ffd347-dd10-4183-bfe2-2964fa2c4902-whisker-ca-bundle\") pod \"whisker-bc48c4f66-4h96m\" (UID: \"b6ffd347-dd10-4183-bfe2-2964fa2c4902\") " pod="calico-system/whisker-bc48c4f66-4h96m"
Sep 4 00:04:40.815325 kubelet[2785]: I0904 00:04:40.814054 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q92s\" (UniqueName: \"kubernetes.io/projected/b6ffd347-dd10-4183-bfe2-2964fa2c4902-kube-api-access-5q92s\") pod \"whisker-bc48c4f66-4h96m\" (UID: \"b6ffd347-dd10-4183-bfe2-2964fa2c4902\") " pod="calico-system/whisker-bc48c4f66-4h96m"
Sep 4 00:04:41.036898 systemd-networkd[1489]: cali93c44a66293: Link UP
Sep 4 00:04:41.038905 systemd-networkd[1489]: cali93c44a66293: Gained carrier
Sep 4 00:04:41.053969 containerd[1597]: time="2025-09-04T00:04:41.053913905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bc48c4f66-4h96m,Uid:b6ffd347-dd10-4183-bfe2-2964fa2c4902,Namespace:calico-system,Attempt:0,}"
Sep 4 00:04:41.056431 containerd[1597]: 2025-09-04 00:04:40.684 [INFO][3953] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 4 00:04:41.056431 containerd[1597]: 2025-09-04 00:04:40.721 [INFO][3953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0 calico-kube-controllers-7b844b9d59- calico-system 0863bf91-0c5d-4e1c-b033-fc3cb7c1a687 869 0 2025-09-04 00:04:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b844b9d59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7b844b9d59-mwc9s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali93c44a66293 [] [] }} ContainerID="b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" Namespace="calico-system" Pod="calico-kube-controllers-7b844b9d59-mwc9s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-"
Sep 4 00:04:41.056431 containerd[1597]: 2025-09-04 00:04:40.721 [INFO][3953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" Namespace="calico-system" Pod="calico-kube-controllers-7b844b9d59-mwc9s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0"
Sep 4 00:04:41.056431 containerd[1597]: 2025-09-04 00:04:40.810 [INFO][3984] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" HandleID="k8s-pod-network.b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" Workload="localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0"
Sep 4 00:04:41.056714 containerd[1597]: 2025-09-04 00:04:40.811 [INFO][3984] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" HandleID="k8s-pod-network.b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" Workload="localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000287600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7b844b9d59-mwc9s", "timestamp":"2025-09-04 00:04:40.810342176 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 00:04:41.056714 containerd[1597]: 2025-09-04 00:04:40.811 [INFO][3984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 4 00:04:41.056714 containerd[1597]: 2025-09-04 00:04:40.811 [INFO][3984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 4 00:04:41.056714 containerd[1597]: 2025-09-04 00:04:40.811 [INFO][3984] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 4 00:04:41.056714 containerd[1597]: 2025-09-04 00:04:40.866 [INFO][3984] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" host="localhost"
Sep 4 00:04:41.056714 containerd[1597]: 2025-09-04 00:04:40.873 [INFO][3984] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 4 00:04:41.056714 containerd[1597]: 2025-09-04 00:04:40.877 [INFO][3984] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 4 00:04:41.056714 containerd[1597]: 2025-09-04 00:04:40.879 [INFO][3984] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 4 00:04:41.056714 containerd[1597]: 2025-09-04 00:04:40.881 [INFO][3984] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 4 00:04:41.056714 containerd[1597]: 2025-09-04 00:04:40.881 [INFO][3984] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" host="localhost"
Sep 4 00:04:41.058612 containerd[1597]: 2025-09-04 00:04:40.882 [INFO][3984] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd
Sep 4 00:04:41.058612 containerd[1597]: 2025-09-04 00:04:40.947 [INFO][3984] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" host="localhost"
Sep 4 00:04:41.058612 containerd[1597]: 2025-09-04 00:04:41.022 [INFO][3984] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" host="localhost"
Sep 4 00:04:41.058612 containerd[1597]: 2025-09-04 00:04:41.022 [INFO][3984] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" host="localhost"
Sep 4 00:04:41.058612 containerd[1597]: 2025-09-04 00:04:41.022 [INFO][3984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 4 00:04:41.058612 containerd[1597]: 2025-09-04 00:04:41.022 [INFO][3984] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" HandleID="k8s-pod-network.b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" Workload="localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0"
Sep 4 00:04:41.059003 containerd[1597]: 2025-09-04 00:04:41.026 [INFO][3953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" Namespace="calico-system" Pod="calico-kube-controllers-7b844b9d59-mwc9s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0", GenerateName:"calico-kube-controllers-7b844b9d59-", Namespace:"calico-system", SelfLink:"", UID:"0863bf91-0c5d-4e1c-b033-fc3cb7c1a687", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b844b9d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7b844b9d59-mwc9s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93c44a66293", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:04:41.059099 containerd[1597]: 2025-09-04 00:04:41.026 [INFO][3953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" Namespace="calico-system" Pod="calico-kube-controllers-7b844b9d59-mwc9s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0"
Sep 4 00:04:41.059099 containerd[1597]: 2025-09-04 00:04:41.026 [INFO][3953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93c44a66293 ContainerID="b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" Namespace="calico-system" Pod="calico-kube-controllers-7b844b9d59-mwc9s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0"
Sep 4 00:04:41.059099 containerd[1597]: 2025-09-04 00:04:41.039 [INFO][3953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" Namespace="calico-system" Pod="calico-kube-controllers-7b844b9d59-mwc9s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0"
Sep 4 00:04:41.059185 containerd[1597]: 2025-09-04 00:04:41.039 [INFO][3953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" Namespace="calico-system" Pod="calico-kube-controllers-7b844b9d59-mwc9s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0", GenerateName:"calico-kube-controllers-7b844b9d59-", Namespace:"calico-system", SelfLink:"", UID:"0863bf91-0c5d-4e1c-b033-fc3cb7c1a687", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b844b9d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd", Pod:"calico-kube-controllers-7b844b9d59-mwc9s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93c44a66293", MAC:"f6:c1:71:f0:40:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:04:41.059269 containerd[1597]: 2025-09-04 00:04:41.052 [INFO][3953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" Namespace="calico-system" Pod="calico-kube-controllers-7b844b9d59-mwc9s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b844b9d59--mwc9s-eth0"
Sep 4 00:04:41.077774 systemd-networkd[1489]: cali48bedacb7af: Link UP
Sep 4 00:04:41.078624 systemd-networkd[1489]: cali48bedacb7af: Gained carrier
Sep 4 00:04:41.258760 containerd[1597]: 2025-09-04 00:04:40.689 [INFO][3961] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 4 00:04:41.258760 containerd[1597]: 2025-09-04 00:04:40.721 [INFO][3961] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--78n75-eth0 csi-node-driver- calico-system c0c8d5a4-31f5-454a-9128-02468bd71436 752 0 2025-09-04 00:04:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-78n75 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali48bedacb7af [] [] }} ContainerID="cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" Namespace="calico-system" Pod="csi-node-driver-78n75" WorkloadEndpoint="localhost-k8s-csi--node--driver--78n75-"
Sep 4 00:04:41.258760 containerd[1597]: 2025-09-04 00:04:40.721 [INFO][3961] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" Namespace="calico-system" Pod="csi-node-driver-78n75" WorkloadEndpoint="localhost-k8s-csi--node--driver--78n75-eth0"
Sep 4 00:04:41.258760 containerd[1597]: 2025-09-04 00:04:40.810 [INFO][3982] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" HandleID="k8s-pod-network.cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" Workload="localhost-k8s-csi--node--driver--78n75-eth0"
Sep 4 00:04:41.259079 containerd[1597]: 2025-09-04 00:04:40.811 [INFO][3982] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" HandleID="k8s-pod-network.cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" Workload="localhost-k8s-csi--node--driver--78n75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a30a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-78n75", "timestamp":"2025-09-04 00:04:40.810628996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 00:04:41.259079 containerd[1597]: 2025-09-04 00:04:40.811 [INFO][3982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 4 00:04:41.259079 containerd[1597]: 2025-09-04 00:04:41.022 [INFO][3982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 4 00:04:41.259079 containerd[1597]: 2025-09-04 00:04:41.023 [INFO][3982] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 4 00:04:41.259079 containerd[1597]: 2025-09-04 00:04:41.029 [INFO][3982] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" host="localhost"
Sep 4 00:04:41.259079 containerd[1597]: 2025-09-04 00:04:41.037 [INFO][3982] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 4 00:04:41.259079 containerd[1597]: 2025-09-04 00:04:41.044 [INFO][3982] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 4 00:04:41.259079 containerd[1597]: 2025-09-04 00:04:41.046 [INFO][3982] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 4 00:04:41.259079 containerd[1597]: 2025-09-04 00:04:41.050 [INFO][3982] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 4 00:04:41.259079 containerd[1597]: 2025-09-04 00:04:41.050 [INFO][3982] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" host="localhost"
Sep 4 00:04:41.259401 containerd[1597]: 2025-09-04 00:04:41.053 [INFO][3982] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281
Sep 4 00:04:41.259401 containerd[1597]: 2025-09-04 00:04:41.058 [INFO][3982] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" host="localhost"
Sep 4 00:04:41.259401 containerd[1597]: 2025-09-04 00:04:41.069 [INFO][3982] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" host="localhost"
Sep 4 00:04:41.259401 containerd[1597]: 2025-09-04 00:04:41.069 [INFO][3982] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" host="localhost"
Sep 4 00:04:41.259401 containerd[1597]: 2025-09-04 00:04:41.069 [INFO][3982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 4 00:04:41.259401 containerd[1597]: 2025-09-04 00:04:41.069 [INFO][3982] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" HandleID="k8s-pod-network.cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" Workload="localhost-k8s-csi--node--driver--78n75-eth0"
Sep 4 00:04:41.259564 containerd[1597]: 2025-09-04 00:04:41.074 [INFO][3961] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" Namespace="calico-system" Pod="csi-node-driver-78n75" WorkloadEndpoint="localhost-k8s-csi--node--driver--78n75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--78n75-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0c8d5a4-31f5-454a-9128-02468bd71436", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-78n75", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48bedacb7af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:04:41.259640 containerd[1597]: 2025-09-04 00:04:41.074 [INFO][3961] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" Namespace="calico-system" Pod="csi-node-driver-78n75" WorkloadEndpoint="localhost-k8s-csi--node--driver--78n75-eth0"
Sep 4 00:04:41.259640 containerd[1597]: 2025-09-04 00:04:41.074 [INFO][3961] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48bedacb7af ContainerID="cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" Namespace="calico-system" Pod="csi-node-driver-78n75" WorkloadEndpoint="localhost-k8s-csi--node--driver--78n75-eth0"
Sep 4 00:04:41.259640 containerd[1597]: 2025-09-04 00:04:41.079 [INFO][3961] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" Namespace="calico-system" Pod="csi-node-driver-78n75" WorkloadEndpoint="localhost-k8s-csi--node--driver--78n75-eth0"
Sep 4 00:04:41.259727 containerd[1597]: 2025-09-04 00:04:41.079 [INFO][3961] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281"
Namespace="calico-system" Pod="csi-node-driver-78n75" WorkloadEndpoint="localhost-k8s-csi--node--driver--78n75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--78n75-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0c8d5a4-31f5-454a-9128-02468bd71436", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281", Pod:"csi-node-driver-78n75", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48bedacb7af", MAC:"de:6c:0a:86:c6:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:41.260816 containerd[1597]: 2025-09-04 00:04:41.255 [INFO][3961] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" Namespace="calico-system" Pod="csi-node-driver-78n75" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--78n75-eth0" Sep 4 00:04:41.336307 systemd-networkd[1489]: cali0077cde96cc: Link UP Sep 4 00:04:41.337015 systemd-networkd[1489]: cali0077cde96cc: Gained carrier Sep 4 00:04:41.354682 containerd[1597]: 2025-09-04 00:04:41.118 [INFO][4002] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:04:41.354682 containerd[1597]: 2025-09-04 00:04:41.260 [INFO][4002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--bc48c4f66--4h96m-eth0 whisker-bc48c4f66- calico-system b6ffd347-dd10-4183-bfe2-2964fa2c4902 983 0 2025-09-04 00:04:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bc48c4f66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-bc48c4f66-4h96m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0077cde96cc [] [] }} ContainerID="1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" Namespace="calico-system" Pod="whisker-bc48c4f66-4h96m" WorkloadEndpoint="localhost-k8s-whisker--bc48c4f66--4h96m-" Sep 4 00:04:41.354682 containerd[1597]: 2025-09-04 00:04:41.260 [INFO][4002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" Namespace="calico-system" Pod="whisker-bc48c4f66-4h96m" WorkloadEndpoint="localhost-k8s-whisker--bc48c4f66--4h96m-eth0" Sep 4 00:04:41.354682 containerd[1597]: 2025-09-04 00:04:41.291 [INFO][4028] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" HandleID="k8s-pod-network.1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" Workload="localhost-k8s-whisker--bc48c4f66--4h96m-eth0" Sep 4 00:04:41.355104 containerd[1597]: 2025-09-04 00:04:41.291 
[INFO][4028] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" HandleID="k8s-pod-network.1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" Workload="localhost-k8s-whisker--bc48c4f66--4h96m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a43c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-bc48c4f66-4h96m", "timestamp":"2025-09-04 00:04:41.291701697 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:41.355104 containerd[1597]: 2025-09-04 00:04:41.292 [INFO][4028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:04:41.355104 containerd[1597]: 2025-09-04 00:04:41.292 [INFO][4028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:04:41.355104 containerd[1597]: 2025-09-04 00:04:41.292 [INFO][4028] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 00:04:41.355104 containerd[1597]: 2025-09-04 00:04:41.299 [INFO][4028] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" host="localhost" Sep 4 00:04:41.355104 containerd[1597]: 2025-09-04 00:04:41.306 [INFO][4028] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 00:04:41.355104 containerd[1597]: 2025-09-04 00:04:41.310 [INFO][4028] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 00:04:41.355104 containerd[1597]: 2025-09-04 00:04:41.311 [INFO][4028] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 00:04:41.355104 containerd[1597]: 2025-09-04 00:04:41.313 [INFO][4028] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 00:04:41.355104 containerd[1597]: 2025-09-04 00:04:41.313 [INFO][4028] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" host="localhost" Sep 4 00:04:41.355444 containerd[1597]: 2025-09-04 00:04:41.315 [INFO][4028] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30 Sep 4 00:04:41.355444 containerd[1597]: 2025-09-04 00:04:41.319 [INFO][4028] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" host="localhost" Sep 4 00:04:41.355444 containerd[1597]: 2025-09-04 00:04:41.325 [INFO][4028] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" host="localhost" Sep 4 00:04:41.355444 containerd[1597]: 2025-09-04 00:04:41.326 [INFO][4028] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" host="localhost" Sep 4 00:04:41.355444 containerd[1597]: 2025-09-04 00:04:41.326 [INFO][4028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:04:41.355444 containerd[1597]: 2025-09-04 00:04:41.326 [INFO][4028] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" HandleID="k8s-pod-network.1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" Workload="localhost-k8s-whisker--bc48c4f66--4h96m-eth0" Sep 4 00:04:41.355638 containerd[1597]: 2025-09-04 00:04:41.333 [INFO][4002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" Namespace="calico-system" Pod="whisker-bc48c4f66-4h96m" WorkloadEndpoint="localhost-k8s-whisker--bc48c4f66--4h96m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bc48c4f66--4h96m-eth0", GenerateName:"whisker-bc48c4f66-", Namespace:"calico-system", SelfLink:"", UID:"b6ffd347-dd10-4183-bfe2-2964fa2c4902", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bc48c4f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-bc48c4f66-4h96m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0077cde96cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:41.355638 containerd[1597]: 2025-09-04 00:04:41.333 [INFO][4002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" Namespace="calico-system" Pod="whisker-bc48c4f66-4h96m" WorkloadEndpoint="localhost-k8s-whisker--bc48c4f66--4h96m-eth0" Sep 4 00:04:41.355736 containerd[1597]: 2025-09-04 00:04:41.334 [INFO][4002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0077cde96cc ContainerID="1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" Namespace="calico-system" Pod="whisker-bc48c4f66-4h96m" WorkloadEndpoint="localhost-k8s-whisker--bc48c4f66--4h96m-eth0" Sep 4 00:04:41.355736 containerd[1597]: 2025-09-04 00:04:41.337 [INFO][4002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" Namespace="calico-system" Pod="whisker-bc48c4f66-4h96m" WorkloadEndpoint="localhost-k8s-whisker--bc48c4f66--4h96m-eth0" Sep 4 00:04:41.355853 containerd[1597]: 2025-09-04 00:04:41.337 [INFO][4002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" Namespace="calico-system" Pod="whisker-bc48c4f66-4h96m" 
WorkloadEndpoint="localhost-k8s-whisker--bc48c4f66--4h96m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bc48c4f66--4h96m-eth0", GenerateName:"whisker-bc48c4f66-", Namespace:"calico-system", SelfLink:"", UID:"b6ffd347-dd10-4183-bfe2-2964fa2c4902", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bc48c4f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30", Pod:"whisker-bc48c4f66-4h96m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0077cde96cc", MAC:"aa:6f:75:20:67:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:41.356002 containerd[1597]: 2025-09-04 00:04:41.351 [INFO][4002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" Namespace="calico-system" Pod="whisker-bc48c4f66-4h96m" WorkloadEndpoint="localhost-k8s-whisker--bc48c4f66--4h96m-eth0" Sep 4 00:04:41.464152 containerd[1597]: time="2025-09-04T00:04:41.464080648Z" level=info msg="connecting to shim 
1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30" address="unix:///run/containerd/s/88df81c4d158e5b262945a764cb37dde6730178e407be462658cc1bb77339324" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:41.467348 containerd[1597]: time="2025-09-04T00:04:41.467295075Z" level=info msg="connecting to shim cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281" address="unix:///run/containerd/s/d74c24b2c00e4b88e3239d5cdd6f14312acbde1e9ac0505febdb0b1a7e022d49" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:41.470176 containerd[1597]: time="2025-09-04T00:04:41.469730934Z" level=info msg="connecting to shim b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd" address="unix:///run/containerd/s/166e9bee3bba8f207e2ba27a975442f51de4e8c9097e0a53ec28ca81c97afe5e" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:41.494986 containerd[1597]: time="2025-09-04T00:04:41.493247384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-fshvr,Uid:9df1ad8b-587e-4cd3-b349-728e4bf3bbf6,Namespace:calico-system,Attempt:0,}" Sep 4 00:04:41.499824 kubelet[2785]: I0904 00:04:41.499442 2785 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b5d325a-6202-41d1-bdf7-3a8725d4ec52" path="/var/lib/kubelet/pods/6b5d325a-6202-41d1-bdf7-3a8725d4ec52/volumes" Sep 4 00:04:41.615944 systemd[1]: Started cri-containerd-cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281.scope - libcontainer container cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281. Sep 4 00:04:41.624202 systemd[1]: Started cri-containerd-1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30.scope - libcontainer container 1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30. Sep 4 00:04:41.626969 systemd[1]: Started cri-containerd-b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd.scope - libcontainer container b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd. 
Sep 4 00:04:41.653055 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:04:41.659016 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:04:41.691824 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:04:41.809202 containerd[1597]: time="2025-09-04T00:04:41.809145341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78n75,Uid:c0c8d5a4-31f5-454a-9128-02468bd71436,Namespace:calico-system,Attempt:0,} returns sandbox id \"cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281\"" Sep 4 00:04:41.823816 containerd[1597]: time="2025-09-04T00:04:41.822815937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 4 00:04:41.829264 containerd[1597]: time="2025-09-04T00:04:41.829196599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5221b88c12d4e96e4e012b9bdd30e1a2d346d6c78be89d3f1aa365dd4761621a\" id:\"1a76ba186bc5528d02c011ed8e946f8e9424a1c6c66e0a2e729e6198ecd17cd8\" pid:4333 exit_status:1 exited_at:{seconds:1756944281 nanos:814942625}" Sep 4 00:04:41.861766 containerd[1597]: time="2025-09-04T00:04:41.861651547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bc48c4f66-4h96m,Uid:b6ffd347-dd10-4183-bfe2-2964fa2c4902,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30\"" Sep 4 00:04:41.869264 containerd[1597]: time="2025-09-04T00:04:41.869134398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b844b9d59-mwc9s,Uid:0863bf91-0c5d-4e1c-b033-fc3cb7c1a687,Namespace:calico-system,Attempt:0,} returns sandbox id \"b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd\"" Sep 4 00:04:41.978637 systemd-networkd[1489]: cali7dcb7602522: Link UP Sep 
4 00:04:41.979215 systemd-networkd[1489]: cali7dcb7602522: Gained carrier Sep 4 00:04:42.001964 systemd-networkd[1489]: vxlan.calico: Link UP Sep 4 00:04:42.001978 systemd-networkd[1489]: vxlan.calico: Gained carrier Sep 4 00:04:42.011941 containerd[1597]: 2025-09-04 00:04:41.813 [INFO][4269] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--fshvr-eth0 goldmane-7988f88666- calico-system 9df1ad8b-587e-4cd3-b349-728e4bf3bbf6 861 0 2025-09-04 00:04:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-fshvr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7dcb7602522 [] [] }} ContainerID="d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" Namespace="calico-system" Pod="goldmane-7988f88666-fshvr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--fshvr-" Sep 4 00:04:42.011941 containerd[1597]: 2025-09-04 00:04:41.813 [INFO][4269] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" Namespace="calico-system" Pod="goldmane-7988f88666-fshvr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--fshvr-eth0" Sep 4 00:04:42.011941 containerd[1597]: 2025-09-04 00:04:41.855 [INFO][4348] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" HandleID="k8s-pod-network.d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" Workload="localhost-k8s-goldmane--7988f88666--fshvr-eth0" Sep 4 00:04:42.012177 containerd[1597]: 2025-09-04 00:04:41.855 [INFO][4348] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" HandleID="k8s-pod-network.d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" Workload="localhost-k8s-goldmane--7988f88666--fshvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-fshvr", "timestamp":"2025-09-04 00:04:41.855473238 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:42.012177 containerd[1597]: 2025-09-04 00:04:41.855 [INFO][4348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:04:42.012177 containerd[1597]: 2025-09-04 00:04:41.855 [INFO][4348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:04:42.012177 containerd[1597]: 2025-09-04 00:04:41.855 [INFO][4348] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 00:04:42.012177 containerd[1597]: 2025-09-04 00:04:41.863 [INFO][4348] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" host="localhost" Sep 4 00:04:42.012177 containerd[1597]: 2025-09-04 00:04:41.873 [INFO][4348] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 00:04:42.012177 containerd[1597]: 2025-09-04 00:04:41.896 [INFO][4348] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 00:04:42.012177 containerd[1597]: 2025-09-04 00:04:41.897 [INFO][4348] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 00:04:42.012177 containerd[1597]: 2025-09-04 00:04:41.899 [INFO][4348] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Sep 4 00:04:42.012177 containerd[1597]: 2025-09-04 00:04:41.899 [INFO][4348] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" host="localhost" Sep 4 00:04:42.012468 containerd[1597]: 2025-09-04 00:04:41.901 [INFO][4348] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8 Sep 4 00:04:42.012468 containerd[1597]: 2025-09-04 00:04:41.933 [INFO][4348] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" host="localhost" Sep 4 00:04:42.012468 containerd[1597]: 2025-09-04 00:04:41.967 [INFO][4348] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" host="localhost" Sep 4 00:04:42.012468 containerd[1597]: 2025-09-04 00:04:41.967 [INFO][4348] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" host="localhost" Sep 4 00:04:42.012468 containerd[1597]: 2025-09-04 00:04:41.967 [INFO][4348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:04:42.012468 containerd[1597]: 2025-09-04 00:04:41.967 [INFO][4348] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" HandleID="k8s-pod-network.d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" Workload="localhost-k8s-goldmane--7988f88666--fshvr-eth0" Sep 4 00:04:42.012636 containerd[1597]: 2025-09-04 00:04:41.971 [INFO][4269] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" Namespace="calico-system" Pod="goldmane-7988f88666-fshvr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--fshvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--fshvr-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"9df1ad8b-587e-4cd3-b349-728e4bf3bbf6", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-fshvr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7dcb7602522", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:42.012636 containerd[1597]: 2025-09-04 00:04:41.971 [INFO][4269] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" Namespace="calico-system" Pod="goldmane-7988f88666-fshvr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--fshvr-eth0" Sep 4 00:04:42.012736 containerd[1597]: 2025-09-04 00:04:41.971 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7dcb7602522 ContainerID="d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" Namespace="calico-system" Pod="goldmane-7988f88666-fshvr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--fshvr-eth0" Sep 4 00:04:42.012736 containerd[1597]: 2025-09-04 00:04:41.980 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" Namespace="calico-system" Pod="goldmane-7988f88666-fshvr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--fshvr-eth0" Sep 4 00:04:42.012816 containerd[1597]: 2025-09-04 00:04:41.981 [INFO][4269] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" Namespace="calico-system" Pod="goldmane-7988f88666-fshvr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--fshvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--fshvr-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"9df1ad8b-587e-4cd3-b349-728e4bf3bbf6", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 17, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8", Pod:"goldmane-7988f88666-fshvr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7dcb7602522", MAC:"ce:f5:2a:d1:d5:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:42.012896 containerd[1597]: 2025-09-04 00:04:42.008 [INFO][4269] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" Namespace="calico-system" Pod="goldmane-7988f88666-fshvr" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--fshvr-eth0" Sep 4 00:04:42.057060 containerd[1597]: time="2025-09-04T00:04:42.056996766Z" level=info msg="connecting to shim d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8" address="unix:///run/containerd/s/8a8616dbf31fb2b68ce0b46aa19f4989023891723ee5a2a49aba604bfe8b2c6d" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:42.088964 systemd[1]: Started cri-containerd-d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8.scope - libcontainer container d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8. 
Sep 4 00:04:42.104471 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:04:42.143053 containerd[1597]: time="2025-09-04T00:04:42.142812775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-fshvr,Uid:9df1ad8b-587e-4cd3-b349-728e4bf3bbf6,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8\"" Sep 4 00:04:42.490902 containerd[1597]: time="2025-09-04T00:04:42.490843920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598f7f498f-g95k2,Uid:eae742d1-9bda-4601-af3d-ef6b66417a43,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:04:42.491088 containerd[1597]: time="2025-09-04T00:04:42.490966015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598f7f498f-b56pq,Uid:873bdebd-b74f-4456-9ddf-7b47d2199016,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:04:42.583995 systemd-networkd[1489]: cali48bedacb7af: Gained IPv6LL Sep 4 00:04:42.690057 systemd-networkd[1489]: calidd59bb0cb8a: Link UP Sep 4 00:04:42.690339 systemd-networkd[1489]: calidd59bb0cb8a: Gained carrier Sep 4 00:04:42.709414 containerd[1597]: 2025-09-04 00:04:42.582 [INFO][4501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0 calico-apiserver-598f7f498f- calico-apiserver 873bdebd-b74f-4456-9ddf-7b47d2199016 866 0 2025-09-04 00:04:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:598f7f498f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-598f7f498f-b56pq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd59bb0cb8a [] [] }} 
ContainerID="3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-b56pq" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--b56pq-" Sep 4 00:04:42.709414 containerd[1597]: 2025-09-04 00:04:42.582 [INFO][4501] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-b56pq" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0" Sep 4 00:04:42.709414 containerd[1597]: 2025-09-04 00:04:42.614 [INFO][4521] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" HandleID="k8s-pod-network.3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" Workload="localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0" Sep 4 00:04:42.709754 containerd[1597]: 2025-09-04 00:04:42.615 [INFO][4521] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" HandleID="k8s-pod-network.3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" Workload="localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-598f7f498f-b56pq", "timestamp":"2025-09-04 00:04:42.614970992 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:42.709754 containerd[1597]: 2025-09-04 00:04:42.615 [INFO][4521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 4 00:04:42.709754 containerd[1597]: 2025-09-04 00:04:42.615 [INFO][4521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:04:42.709754 containerd[1597]: 2025-09-04 00:04:42.615 [INFO][4521] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 00:04:42.709754 containerd[1597]: 2025-09-04 00:04:42.620 [INFO][4521] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" host="localhost" Sep 4 00:04:42.709754 containerd[1597]: 2025-09-04 00:04:42.624 [INFO][4521] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 00:04:42.709754 containerd[1597]: 2025-09-04 00:04:42.629 [INFO][4521] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 00:04:42.709754 containerd[1597]: 2025-09-04 00:04:42.631 [INFO][4521] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 00:04:42.709754 containerd[1597]: 2025-09-04 00:04:42.633 [INFO][4521] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 00:04:42.709754 containerd[1597]: 2025-09-04 00:04:42.634 [INFO][4521] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" host="localhost" Sep 4 00:04:42.710275 containerd[1597]: 2025-09-04 00:04:42.635 [INFO][4521] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792 Sep 4 00:04:42.710275 containerd[1597]: 2025-09-04 00:04:42.667 [INFO][4521] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" host="localhost" Sep 4 00:04:42.710275 containerd[1597]: 2025-09-04 00:04:42.680 [INFO][4521] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" host="localhost" Sep 4 00:04:42.710275 containerd[1597]: 2025-09-04 00:04:42.680 [INFO][4521] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" host="localhost" Sep 4 00:04:42.710275 containerd[1597]: 2025-09-04 00:04:42.680 [INFO][4521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:04:42.710275 containerd[1597]: 2025-09-04 00:04:42.680 [INFO][4521] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" HandleID="k8s-pod-network.3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" Workload="localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0" Sep 4 00:04:42.710486 containerd[1597]: 2025-09-04 00:04:42.684 [INFO][4501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-b56pq" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0", GenerateName:"calico-apiserver-598f7f498f-", Namespace:"calico-apiserver", SelfLink:"", UID:"873bdebd-b74f-4456-9ddf-7b47d2199016", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598f7f498f", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-598f7f498f-b56pq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd59bb0cb8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:42.710576 containerd[1597]: 2025-09-04 00:04:42.685 [INFO][4501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-b56pq" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0" Sep 4 00:04:42.710576 containerd[1597]: 2025-09-04 00:04:42.685 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd59bb0cb8a ContainerID="3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-b56pq" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0" Sep 4 00:04:42.710576 containerd[1597]: 2025-09-04 00:04:42.692 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-b56pq" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0" Sep 4 00:04:42.710676 containerd[1597]: 2025-09-04 
00:04:42.693 [INFO][4501] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-b56pq" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0", GenerateName:"calico-apiserver-598f7f498f-", Namespace:"calico-apiserver", SelfLink:"", UID:"873bdebd-b74f-4456-9ddf-7b47d2199016", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598f7f498f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792", Pod:"calico-apiserver-598f7f498f-b56pq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd59bb0cb8a", MAC:"ee:5c:2d:b6:fd:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:42.710753 containerd[1597]: 2025-09-04 00:04:42.703 [INFO][4501] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-b56pq" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--b56pq-eth0" Sep 4 00:04:42.713905 systemd-networkd[1489]: cali0077cde96cc: Gained IPv6LL Sep 4 00:04:42.759352 containerd[1597]: time="2025-09-04T00:04:42.759201953Z" level=info msg="connecting to shim 3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792" address="unix:///run/containerd/s/197900da04fab038289eda3283a55fda450c845679064b1a2e286614aea3e7fe" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:42.812134 systemd[1]: Started cri-containerd-3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792.scope - libcontainer container 3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792. Sep 4 00:04:42.817501 systemd-networkd[1489]: cali7e8eb9c6aa8: Link UP Sep 4 00:04:42.818636 systemd-networkd[1489]: cali7e8eb9c6aa8: Gained carrier Sep 4 00:04:42.841538 containerd[1597]: 2025-09-04 00:04:42.580 [INFO][4491] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0 calico-apiserver-598f7f498f- calico-apiserver eae742d1-9bda-4601-af3d-ef6b66417a43 867 0 2025-09-04 00:04:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:598f7f498f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-598f7f498f-g95k2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7e8eb9c6aa8 [] [] }} ContainerID="5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-g95k2" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--g95k2-" Sep 4 00:04:42.841538 containerd[1597]: 2025-09-04 00:04:42.580 [INFO][4491] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-g95k2" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0" Sep 4 00:04:42.841538 containerd[1597]: 2025-09-04 00:04:42.635 [INFO][4528] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" HandleID="k8s-pod-network.5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" Workload="localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0" Sep 4 00:04:42.841718 containerd[1597]: 2025-09-04 00:04:42.635 [INFO][4528] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" HandleID="k8s-pod-network.5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" Workload="localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-598f7f498f-g95k2", "timestamp":"2025-09-04 00:04:42.634980842 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:42.841718 containerd[1597]: 2025-09-04 00:04:42.635 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:04:42.841718 containerd[1597]: 2025-09-04 00:04:42.680 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:04:42.841718 containerd[1597]: 2025-09-04 00:04:42.680 [INFO][4528] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 00:04:42.841718 containerd[1597]: 2025-09-04 00:04:42.725 [INFO][4528] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" host="localhost" Sep 4 00:04:42.841718 containerd[1597]: 2025-09-04 00:04:42.733 [INFO][4528] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 00:04:42.841718 containerd[1597]: 2025-09-04 00:04:42.737 [INFO][4528] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 00:04:42.841718 containerd[1597]: 2025-09-04 00:04:42.739 [INFO][4528] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 00:04:42.841718 containerd[1597]: 2025-09-04 00:04:42.742 [INFO][4528] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 00:04:42.841718 containerd[1597]: 2025-09-04 00:04:42.742 [INFO][4528] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" host="localhost" Sep 4 00:04:42.842277 containerd[1597]: 2025-09-04 00:04:42.743 [INFO][4528] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f Sep 4 00:04:42.842277 containerd[1597]: 2025-09-04 00:04:42.753 [INFO][4528] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" host="localhost" Sep 4 00:04:42.842277 containerd[1597]: 2025-09-04 00:04:42.773 [INFO][4528] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" host="localhost" Sep 4 00:04:42.842277 containerd[1597]: 2025-09-04 00:04:42.773 [INFO][4528] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" host="localhost" Sep 4 00:04:42.842277 containerd[1597]: 2025-09-04 00:04:42.773 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:04:42.842277 containerd[1597]: 2025-09-04 00:04:42.774 [INFO][4528] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" HandleID="k8s-pod-network.5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" Workload="localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0" Sep 4 00:04:42.842400 containerd[1597]: 2025-09-04 00:04:42.799 [INFO][4491] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-g95k2" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0", GenerateName:"calico-apiserver-598f7f498f-", Namespace:"calico-apiserver", SelfLink:"", UID:"eae742d1-9bda-4601-af3d-ef6b66417a43", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598f7f498f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-598f7f498f-g95k2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e8eb9c6aa8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:42.842457 containerd[1597]: 2025-09-04 00:04:42.800 [INFO][4491] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-g95k2" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0" Sep 4 00:04:42.842457 containerd[1597]: 2025-09-04 00:04:42.800 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e8eb9c6aa8 ContainerID="5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-g95k2" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0" Sep 4 00:04:42.842457 containerd[1597]: 2025-09-04 00:04:42.818 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-g95k2" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0" Sep 4 00:04:42.842523 containerd[1597]: 2025-09-04 00:04:42.819 [INFO][4491] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-g95k2" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0", GenerateName:"calico-apiserver-598f7f498f-", Namespace:"calico-apiserver", SelfLink:"", UID:"eae742d1-9bda-4601-af3d-ef6b66417a43", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598f7f498f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f", Pod:"calico-apiserver-598f7f498f-g95k2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e8eb9c6aa8", MAC:"da:4d:38:cd:03:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:42.842571 containerd[1597]: 2025-09-04 00:04:42.830 [INFO][4491] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" Namespace="calico-apiserver" Pod="calico-apiserver-598f7f498f-g95k2" WorkloadEndpoint="localhost-k8s-calico--apiserver--598f7f498f--g95k2-eth0" Sep 4 00:04:42.844345 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:04:42.875260 containerd[1597]: time="2025-09-04T00:04:42.875199316Z" level=info msg="connecting to shim 5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f" address="unix:///run/containerd/s/47e8ca2083f411e3665f60e9accd12df9fec6e798bfcdc10380108b2153847f2" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:42.886776 containerd[1597]: time="2025-09-04T00:04:42.886718559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5221b88c12d4e96e4e012b9bdd30e1a2d346d6c78be89d3f1aa365dd4761621a\" id:\"cd1427f3618aeab7f20c20dbf877475447ec05019eca32573c919673af281ec8\" pid:4549 exit_status:1 exited_at:{seconds:1756944282 nanos:885998778}" Sep 4 00:04:42.914020 systemd[1]: Started cri-containerd-5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f.scope - libcontainer container 5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f. 
Sep 4 00:04:42.934172 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:04:43.031983 systemd-networkd[1489]: cali93c44a66293: Gained IPv6LL Sep 4 00:04:43.062640 containerd[1597]: time="2025-09-04T00:04:43.062580532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598f7f498f-b56pq,Uid:873bdebd-b74f-4456-9ddf-7b47d2199016,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792\"" Sep 4 00:04:43.177229 containerd[1597]: time="2025-09-04T00:04:43.177158255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598f7f498f-g95k2,Uid:eae742d1-9bda-4601-af3d-ef6b66417a43,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f\"" Sep 4 00:04:43.287995 systemd-networkd[1489]: cali7dcb7602522: Gained IPv6LL Sep 4 00:04:43.490324 kubelet[2785]: E0904 00:04:43.490275 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:43.490324 kubelet[2785]: E0904 00:04:43.490347 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:43.491144 containerd[1597]: time="2025-09-04T00:04:43.490758746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pf6tc,Uid:3d897f9a-d881-4326-9bb5-5e48a00efbd2,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:43.491144 containerd[1597]: time="2025-09-04T00:04:43.490815000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j7nwn,Uid:4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba,Namespace:kube-system,Attempt:0,}" Sep 4 00:04:43.608014 systemd-networkd[1489]: vxlan.calico: Gained IPv6LL 
Sep 4 00:04:43.645377 systemd-networkd[1489]: calib00072b32f0: Link UP Sep 4 00:04:43.646437 systemd-networkd[1489]: calib00072b32f0: Gained carrier Sep 4 00:04:43.662021 containerd[1597]: 2025-09-04 00:04:43.569 [INFO][4676] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0 coredns-7c65d6cfc9- kube-system 3d897f9a-d881-4326-9bb5-5e48a00efbd2 868 0 2025-09-04 00:04:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-pf6tc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib00072b32f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pf6tc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pf6tc-" Sep 4 00:04:43.662021 containerd[1597]: 2025-09-04 00:04:43.569 [INFO][4676] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pf6tc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0" Sep 4 00:04:43.662021 containerd[1597]: 2025-09-04 00:04:43.598 [INFO][4703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" HandleID="k8s-pod-network.a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" Workload="localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0" Sep 4 00:04:43.662302 containerd[1597]: 2025-09-04 00:04:43.598 [INFO][4703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" 
HandleID="k8s-pod-network.a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" Workload="localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-pf6tc", "timestamp":"2025-09-04 00:04:43.598727041 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:43.662302 containerd[1597]: 2025-09-04 00:04:43.599 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:04:43.662302 containerd[1597]: 2025-09-04 00:04:43.599 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:04:43.662302 containerd[1597]: 2025-09-04 00:04:43.599 [INFO][4703] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 00:04:43.662302 containerd[1597]: 2025-09-04 00:04:43.606 [INFO][4703] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" host="localhost" Sep 4 00:04:43.662302 containerd[1597]: 2025-09-04 00:04:43.613 [INFO][4703] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 00:04:43.662302 containerd[1597]: 2025-09-04 00:04:43.618 [INFO][4703] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 00:04:43.662302 containerd[1597]: 2025-09-04 00:04:43.619 [INFO][4703] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 00:04:43.662302 containerd[1597]: 2025-09-04 00:04:43.621 [INFO][4703] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 00:04:43.662302 containerd[1597]: 2025-09-04 00:04:43.622 [INFO][4703] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" host="localhost" Sep 4 00:04:43.662558 containerd[1597]: 2025-09-04 00:04:43.623 [INFO][4703] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964 Sep 4 00:04:43.662558 containerd[1597]: 2025-09-04 00:04:43.629 [INFO][4703] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" host="localhost" Sep 4 00:04:43.662558 containerd[1597]: 2025-09-04 00:04:43.636 [INFO][4703] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" host="localhost" Sep 4 00:04:43.662558 containerd[1597]: 2025-09-04 00:04:43.636 [INFO][4703] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" host="localhost" Sep 4 00:04:43.662558 containerd[1597]: 2025-09-04 00:04:43.636 [INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:04:43.662558 containerd[1597]: 2025-09-04 00:04:43.636 [INFO][4703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" HandleID="k8s-pod-network.a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" Workload="localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0" Sep 4 00:04:43.662697 containerd[1597]: 2025-09-04 00:04:43.639 [INFO][4676] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pf6tc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3d897f9a-d881-4326-9bb5-5e48a00efbd2", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-pf6tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib00072b32f0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:43.662770 containerd[1597]: 2025-09-04 00:04:43.640 [INFO][4676] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pf6tc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0" Sep 4 00:04:43.662770 containerd[1597]: 2025-09-04 00:04:43.640 [INFO][4676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib00072b32f0 ContainerID="a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pf6tc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0" Sep 4 00:04:43.662770 containerd[1597]: 2025-09-04 00:04:43.646 [INFO][4676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pf6tc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0" Sep 4 00:04:43.662975 containerd[1597]: 2025-09-04 00:04:43.647 [INFO][4676] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pf6tc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3d897f9a-d881-4326-9bb5-5e48a00efbd2", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964", Pod:"coredns-7c65d6cfc9-pf6tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib00072b32f0", MAC:"aa:c5:06:28:b8:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:43.662975 containerd[1597]: 2025-09-04 00:04:43.658 [INFO][4676] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pf6tc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pf6tc-eth0" Sep 4 00:04:44.065089 systemd-networkd[1489]: cali8a25c2f7b42: Link UP Sep 4 00:04:44.067845 systemd-networkd[1489]: cali8a25c2f7b42: Gained carrier Sep 4 00:04:44.081316 containerd[1597]: time="2025-09-04T00:04:44.081248784Z" level=info msg="connecting to shim a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964" address="unix:///run/containerd/s/b530496043907c3138741fa304d3ecaa2d83e90e1d760845fbbb1ee578eed871" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.570 [INFO][4688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0 coredns-7c65d6cfc9- kube-system 4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba 858 0 2025-09-04 00:04:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-j7nwn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8a25c2f7b42 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j7nwn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j7nwn-" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.571 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j7nwn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.599 [INFO][4706] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" HandleID="k8s-pod-network.e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" Workload="localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.599 [INFO][4706] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" HandleID="k8s-pod-network.e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" Workload="localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005227a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-j7nwn", "timestamp":"2025-09-04 00:04:43.599177585 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.599 [INFO][4706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.636 [INFO][4706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.636 [INFO][4706] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.707 [INFO][4706] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" host="localhost" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.714 [INFO][4706] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.719 [INFO][4706] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.720 [INFO][4706] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.723 [INFO][4706] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.723 [INFO][4706] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" host="localhost" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.724 [INFO][4706] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8 Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:43.908 [INFO][4706] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" host="localhost" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:44.049 [INFO][4706] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" host="localhost" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:44.049 [INFO][4706] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" host="localhost" Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:44.049 [INFO][4706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:04:44.089919 containerd[1597]: 2025-09-04 00:04:44.049 [INFO][4706] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" HandleID="k8s-pod-network.e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" Workload="localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0" Sep 4 00:04:44.090701 containerd[1597]: 2025-09-04 00:04:44.053 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j7nwn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-j7nwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a25c2f7b42", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:44.090701 containerd[1597]: 2025-09-04 00:04:44.053 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j7nwn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0" Sep 4 00:04:44.090701 containerd[1597]: 2025-09-04 00:04:44.053 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a25c2f7b42 ContainerID="e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j7nwn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0" Sep 4 00:04:44.090701 containerd[1597]: 2025-09-04 00:04:44.068 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j7nwn" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0" Sep 4 00:04:44.090701 containerd[1597]: 2025-09-04 00:04:44.069 [INFO][4688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j7nwn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 4, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8", Pod:"coredns-7c65d6cfc9-j7nwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a25c2f7b42", MAC:"f2:17:5e:a6:ac:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:04:44.090701 containerd[1597]: 2025-09-04 00:04:44.085 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j7nwn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j7nwn-eth0" Sep 4 00:04:44.120764 containerd[1597]: time="2025-09-04T00:04:44.120693891Z" level=info msg="connecting to shim e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8" address="unix:///run/containerd/s/1a476372df65f55d29128e11858767a7dca60209c6632c3e362dfd7ded26ec7c" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:04:44.126011 systemd[1]: Started cri-containerd-a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964.scope - libcontainer container a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964. Sep 4 00:04:44.140831 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:04:44.147953 systemd[1]: Started cri-containerd-e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8.scope - libcontainer container e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8. 
Sep 4 00:04:44.169861 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 00:04:44.177048 containerd[1597]: time="2025-09-04T00:04:44.176923727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pf6tc,Uid:3d897f9a-d881-4326-9bb5-5e48a00efbd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964\"" Sep 4 00:04:44.178058 kubelet[2785]: E0904 00:04:44.178017 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:44.184578 systemd-networkd[1489]: cali7e8eb9c6aa8: Gained IPv6LL Sep 4 00:04:44.192749 containerd[1597]: time="2025-09-04T00:04:44.192703026Z" level=info msg="CreateContainer within sandbox \"a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:04:44.212457 containerd[1597]: time="2025-09-04T00:04:44.212412983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j7nwn,Uid:4ecd7f27-3eca-46c7-ae6e-a7dacd4d5eba,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8\"" Sep 4 00:04:44.213681 kubelet[2785]: E0904 00:04:44.213260 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:44.213753 containerd[1597]: time="2025-09-04T00:04:44.213313389Z" level=info msg="Container 9b72c92b34c382f660f898250436601754614a41e157980e24bedba0dd5a1ea4: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:44.215520 containerd[1597]: time="2025-09-04T00:04:44.215450900Z" level=info msg="CreateContainer within sandbox 
\"e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:04:44.222759 containerd[1597]: time="2025-09-04T00:04:44.222717810Z" level=info msg="CreateContainer within sandbox \"a8a8d55dbc70f3e4795b56c4bbfbf96279b16d6f52d42fc71e4e3e371ee65964\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b72c92b34c382f660f898250436601754614a41e157980e24bedba0dd5a1ea4\"" Sep 4 00:04:44.227002 containerd[1597]: time="2025-09-04T00:04:44.226967728Z" level=info msg="StartContainer for \"9b72c92b34c382f660f898250436601754614a41e157980e24bedba0dd5a1ea4\"" Sep 4 00:04:44.228086 containerd[1597]: time="2025-09-04T00:04:44.228044657Z" level=info msg="connecting to shim 9b72c92b34c382f660f898250436601754614a41e157980e24bedba0dd5a1ea4" address="unix:///run/containerd/s/b530496043907c3138741fa304d3ecaa2d83e90e1d760845fbbb1ee578eed871" protocol=ttrpc version=3 Sep 4 00:04:44.228850 containerd[1597]: time="2025-09-04T00:04:44.228821528Z" level=info msg="Container fbf4dec8e6bc9082f0cb0aa45674f47f92440a33055e765a6252c7962cb149a9: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:44.238105 containerd[1597]: time="2025-09-04T00:04:44.238048533Z" level=info msg="CreateContainer within sandbox \"e4672e9f8d680b6bbbc27f04a27f203167356e575c02f9923830bbeddae08cf8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fbf4dec8e6bc9082f0cb0aa45674f47f92440a33055e765a6252c7962cb149a9\"" Sep 4 00:04:44.238821 containerd[1597]: time="2025-09-04T00:04:44.238589622Z" level=info msg="StartContainer for \"fbf4dec8e6bc9082f0cb0aa45674f47f92440a33055e765a6252c7962cb149a9\"" Sep 4 00:04:44.239694 containerd[1597]: time="2025-09-04T00:04:44.239665029Z" level=info msg="connecting to shim fbf4dec8e6bc9082f0cb0aa45674f47f92440a33055e765a6252c7962cb149a9" address="unix:///run/containerd/s/1a476372df65f55d29128e11858767a7dca60209c6632c3e362dfd7ded26ec7c" protocol=ttrpc version=3 Sep 4 
00:04:44.250983 systemd[1]: Started cri-containerd-9b72c92b34c382f660f898250436601754614a41e157980e24bedba0dd5a1ea4.scope - libcontainer container 9b72c92b34c382f660f898250436601754614a41e157980e24bedba0dd5a1ea4. Sep 4 00:04:44.268012 systemd[1]: Started cri-containerd-fbf4dec8e6bc9082f0cb0aa45674f47f92440a33055e765a6252c7962cb149a9.scope - libcontainer container fbf4dec8e6bc9082f0cb0aa45674f47f92440a33055e765a6252c7962cb149a9. Sep 4 00:04:44.333889 containerd[1597]: time="2025-09-04T00:04:44.333770556Z" level=info msg="StartContainer for \"fbf4dec8e6bc9082f0cb0aa45674f47f92440a33055e765a6252c7962cb149a9\" returns successfully" Sep 4 00:04:44.334135 containerd[1597]: time="2025-09-04T00:04:44.333820338Z" level=info msg="StartContainer for \"9b72c92b34c382f660f898250436601754614a41e157980e24bedba0dd5a1ea4\" returns successfully" Sep 4 00:04:44.676574 kubelet[2785]: E0904 00:04:44.676092 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:44.679651 kubelet[2785]: E0904 00:04:44.678652 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:44.696027 systemd-networkd[1489]: calib00072b32f0: Gained IPv6LL Sep 4 00:04:44.729143 kubelet[2785]: I0904 00:04:44.729019 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j7nwn" podStartSLOduration=39.728993399 podStartE2EDuration="39.728993399s" podCreationTimestamp="2025-09-04 00:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:44.727917442 +0000 UTC m=+45.394292654" watchObservedRunningTime="2025-09-04 00:04:44.728993399 +0000 UTC m=+45.395368611" Sep 4 00:04:44.735314 containerd[1597]: 
time="2025-09-04T00:04:44.735267080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:44.737362 containerd[1597]: time="2025-09-04T00:04:44.737330196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 4 00:04:44.740649 kubelet[2785]: I0904 00:04:44.740100 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pf6tc" podStartSLOduration=39.740080265 podStartE2EDuration="39.740080265s" podCreationTimestamp="2025-09-04 00:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:04:44.736931454 +0000 UTC m=+45.403306666" watchObservedRunningTime="2025-09-04 00:04:44.740080265 +0000 UTC m=+45.406455477" Sep 4 00:04:44.740814 containerd[1597]: time="2025-09-04T00:04:44.740649811Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:44.747545 containerd[1597]: time="2025-09-04T00:04:44.747263054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:44.749126 containerd[1597]: time="2025-09-04T00:04:44.748998687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.926138617s" Sep 4 00:04:44.749126 containerd[1597]: time="2025-09-04T00:04:44.749058307Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 4 00:04:44.752537 containerd[1597]: time="2025-09-04T00:04:44.752496798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 00:04:44.753751 containerd[1597]: time="2025-09-04T00:04:44.753723134Z" level=info msg="CreateContainer within sandbox \"cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 00:04:44.760093 systemd-networkd[1489]: calidd59bb0cb8a: Gained IPv6LL Sep 4 00:04:44.782660 containerd[1597]: time="2025-09-04T00:04:44.782591905Z" level=info msg="Container 8dccdc0d229648911fef323789e049a464c6fb8f44d70829953c59e730861001: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:44.785729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438209987.mount: Deactivated successfully. Sep 4 00:04:44.803397 containerd[1597]: time="2025-09-04T00:04:44.803318631Z" level=info msg="CreateContainer within sandbox \"cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8dccdc0d229648911fef323789e049a464c6fb8f44d70829953c59e730861001\"" Sep 4 00:04:44.804190 containerd[1597]: time="2025-09-04T00:04:44.804093769Z" level=info msg="StartContainer for \"8dccdc0d229648911fef323789e049a464c6fb8f44d70829953c59e730861001\"" Sep 4 00:04:44.806556 containerd[1597]: time="2025-09-04T00:04:44.806519348Z" level=info msg="connecting to shim 8dccdc0d229648911fef323789e049a464c6fb8f44d70829953c59e730861001" address="unix:///run/containerd/s/d74c24b2c00e4b88e3239d5cdd6f14312acbde1e9ac0505febdb0b1a7e022d49" protocol=ttrpc version=3 Sep 4 00:04:44.838081 systemd[1]: Started cri-containerd-8dccdc0d229648911fef323789e049a464c6fb8f44d70829953c59e730861001.scope - libcontainer container 
8dccdc0d229648911fef323789e049a464c6fb8f44d70829953c59e730861001. Sep 4 00:04:44.896953 containerd[1597]: time="2025-09-04T00:04:44.896908752Z" level=info msg="StartContainer for \"8dccdc0d229648911fef323789e049a464c6fb8f44d70829953c59e730861001\" returns successfully" Sep 4 00:04:45.341353 systemd[1]: Started sshd@10-10.0.0.100:22-10.0.0.1:60614.service - OpenSSH per-connection server daemon (10.0.0.1:60614). Sep 4 00:04:45.400991 systemd-networkd[1489]: cali8a25c2f7b42: Gained IPv6LL Sep 4 00:04:45.402146 sshd[4944]: Accepted publickey for core from 10.0.0.1 port 60614 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:04:45.403920 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:45.409463 systemd-logind[1575]: New session 11 of user core. Sep 4 00:04:45.419050 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 00:04:45.558724 sshd[4947]: Connection closed by 10.0.0.1 port 60614 Sep 4 00:04:45.559079 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:45.562993 systemd[1]: sshd@10-10.0.0.100:22-10.0.0.1:60614.service: Deactivated successfully. Sep 4 00:04:45.564924 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 00:04:45.565763 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit. Sep 4 00:04:45.566951 systemd-logind[1575]: Removed session 11. 
Sep 4 00:04:45.682425 kubelet[2785]: E0904 00:04:45.682263 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:45.682425 kubelet[2785]: E0904 00:04:45.682328 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:46.126195 containerd[1597]: time="2025-09-04T00:04:46.126134897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:46.127002 containerd[1597]: time="2025-09-04T00:04:46.126943835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 4 00:04:46.128047 containerd[1597]: time="2025-09-04T00:04:46.127994269Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:46.130179 containerd[1597]: time="2025-09-04T00:04:46.130141244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:46.130681 containerd[1597]: time="2025-09-04T00:04:46.130653565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.37812138s" Sep 4 00:04:46.130734 containerd[1597]: time="2025-09-04T00:04:46.130683145Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 00:04:46.131832 containerd[1597]: time="2025-09-04T00:04:46.131741341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 4 00:04:46.132996 containerd[1597]: time="2025-09-04T00:04:46.132972313Z" level=info msg="CreateContainer within sandbox \"1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 00:04:46.141412 containerd[1597]: time="2025-09-04T00:04:46.141371504Z" level=info msg="Container 4c7ae36d30faee5eeec4ed0e255754dbd4e7eeffa8450361b949b51644986849: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:46.149189 containerd[1597]: time="2025-09-04T00:04:46.149140840Z" level=info msg="CreateContainer within sandbox \"1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4c7ae36d30faee5eeec4ed0e255754dbd4e7eeffa8450361b949b51644986849\"" Sep 4 00:04:46.149737 containerd[1597]: time="2025-09-04T00:04:46.149700470Z" level=info msg="StartContainer for \"4c7ae36d30faee5eeec4ed0e255754dbd4e7eeffa8450361b949b51644986849\"" Sep 4 00:04:46.151244 containerd[1597]: time="2025-09-04T00:04:46.151196716Z" level=info msg="connecting to shim 4c7ae36d30faee5eeec4ed0e255754dbd4e7eeffa8450361b949b51644986849" address="unix:///run/containerd/s/88df81c4d158e5b262945a764cb37dde6730178e407be462658cc1bb77339324" protocol=ttrpc version=3 Sep 4 00:04:46.179136 systemd[1]: Started cri-containerd-4c7ae36d30faee5eeec4ed0e255754dbd4e7eeffa8450361b949b51644986849.scope - libcontainer container 4c7ae36d30faee5eeec4ed0e255754dbd4e7eeffa8450361b949b51644986849. 
Sep 4 00:04:46.234841 containerd[1597]: time="2025-09-04T00:04:46.234773502Z" level=info msg="StartContainer for \"4c7ae36d30faee5eeec4ed0e255754dbd4e7eeffa8450361b949b51644986849\" returns successfully" Sep 4 00:04:46.685486 kubelet[2785]: E0904 00:04:46.685448 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:46.685486 kubelet[2785]: E0904 00:04:46.685478 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 00:04:49.519684 containerd[1597]: time="2025-09-04T00:04:49.519603606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:49.532909 containerd[1597]: time="2025-09-04T00:04:49.532821617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 4 00:04:49.567223 containerd[1597]: time="2025-09-04T00:04:49.567139368Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:49.642763 containerd[1597]: time="2025-09-04T00:04:49.642669954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:49.643692 containerd[1597]: time="2025-09-04T00:04:49.643542595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.511764545s" Sep 4 00:04:49.643692 containerd[1597]: time="2025-09-04T00:04:49.643590393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 4 00:04:49.645483 containerd[1597]: time="2025-09-04T00:04:49.645445994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 4 00:04:49.688335 containerd[1597]: time="2025-09-04T00:04:49.688229049Z" level=info msg="CreateContainer within sandbox \"b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 00:04:49.702847 containerd[1597]: time="2025-09-04T00:04:49.702772735Z" level=info msg="Container fcf193fb047f57c8ebe5da215d81d72d530cba25c144c80d2fdce513693866dc: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:49.719023 containerd[1597]: time="2025-09-04T00:04:49.718965318Z" level=info msg="CreateContainer within sandbox \"b141545bd4c7a89bcc77663489f48c78568646ae4da718b56d76a33420afa0dd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fcf193fb047f57c8ebe5da215d81d72d530cba25c144c80d2fdce513693866dc\"" Sep 4 00:04:49.719993 containerd[1597]: time="2025-09-04T00:04:49.719945052Z" level=info msg="StartContainer for \"fcf193fb047f57c8ebe5da215d81d72d530cba25c144c80d2fdce513693866dc\"" Sep 4 00:04:49.721451 containerd[1597]: time="2025-09-04T00:04:49.721422412Z" level=info msg="connecting to shim fcf193fb047f57c8ebe5da215d81d72d530cba25c144c80d2fdce513693866dc" address="unix:///run/containerd/s/166e9bee3bba8f207e2ba27a975442f51de4e8c9097e0a53ec28ca81c97afe5e" protocol=ttrpc version=3 Sep 4 00:04:49.749086 systemd[1]: Started 
cri-containerd-fcf193fb047f57c8ebe5da215d81d72d530cba25c144c80d2fdce513693866dc.scope - libcontainer container fcf193fb047f57c8ebe5da215d81d72d530cba25c144c80d2fdce513693866dc. Sep 4 00:04:49.804066 containerd[1597]: time="2025-09-04T00:04:49.803855883Z" level=info msg="StartContainer for \"fcf193fb047f57c8ebe5da215d81d72d530cba25c144c80d2fdce513693866dc\" returns successfully" Sep 4 00:04:50.575236 systemd[1]: Started sshd@11-10.0.0.100:22-10.0.0.1:59864.service - OpenSSH per-connection server daemon (10.0.0.1:59864). Sep 4 00:04:50.633716 sshd[5061]: Accepted publickey for core from 10.0.0.1 port 59864 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:04:50.635745 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:50.641122 systemd-logind[1575]: New session 12 of user core. Sep 4 00:04:50.647990 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 00:04:50.729495 kubelet[2785]: I0904 00:04:50.729368 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b844b9d59-mwc9s" podStartSLOduration=25.954704959 podStartE2EDuration="33.729345585s" podCreationTimestamp="2025-09-04 00:04:17 +0000 UTC" firstStartedPulling="2025-09-04 00:04:41.870537215 +0000 UTC m=+42.536912427" lastFinishedPulling="2025-09-04 00:04:49.645177841 +0000 UTC m=+50.311553053" observedRunningTime="2025-09-04 00:04:50.729295024 +0000 UTC m=+51.395670246" watchObservedRunningTime="2025-09-04 00:04:50.729345585 +0000 UTC m=+51.395720797" Sep 4 00:04:50.804143 sshd[5063]: Connection closed by 10.0.0.1 port 59864 Sep 4 00:04:50.804547 sshd-session[5061]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:50.810275 systemd[1]: sshd@11-10.0.0.100:22-10.0.0.1:59864.service: Deactivated successfully. Sep 4 00:04:50.812766 systemd[1]: session-12.scope: Deactivated successfully. 
Sep 4 00:04:50.814641 systemd-logind[1575]: Session 12 logged out. Waiting for processes to exit. Sep 4 00:04:50.816761 systemd-logind[1575]: Removed session 12. Sep 4 00:04:51.700042 kubelet[2785]: I0904 00:04:51.699998 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:04:51.962742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3414731171.mount: Deactivated successfully. Sep 4 00:04:53.022207 containerd[1597]: time="2025-09-04T00:04:53.022135020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:53.023111 containerd[1597]: time="2025-09-04T00:04:53.023047342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 4 00:04:53.024999 containerd[1597]: time="2025-09-04T00:04:53.024933295Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:53.028834 containerd[1597]: time="2025-09-04T00:04:53.028723738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:53.029187 containerd[1597]: time="2025-09-04T00:04:53.029142093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.383655484s" Sep 4 00:04:53.029187 containerd[1597]: time="2025-09-04T00:04:53.029179983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image 
reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 4 00:04:53.030481 containerd[1597]: time="2025-09-04T00:04:53.030411925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 00:04:53.032544 containerd[1597]: time="2025-09-04T00:04:53.032482641Z" level=info msg="CreateContainer within sandbox \"d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 4 00:04:53.042882 containerd[1597]: time="2025-09-04T00:04:53.042819402Z" level=info msg="Container c56dc8483faa8fac832ee56ac1df662a2b8feeb7703fede45b927b8079b3b3b7: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:53.054000 containerd[1597]: time="2025-09-04T00:04:53.053951671Z" level=info msg="CreateContainer within sandbox \"d8e11d6d2355df2396ad9c494fcf302147d9ccd4abf1d9473696e946829c7fb8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c56dc8483faa8fac832ee56ac1df662a2b8feeb7703fede45b927b8079b3b3b7\"" Sep 4 00:04:53.054902 containerd[1597]: time="2025-09-04T00:04:53.054871665Z" level=info msg="StartContainer for \"c56dc8483faa8fac832ee56ac1df662a2b8feeb7703fede45b927b8079b3b3b7\"" Sep 4 00:04:53.055922 containerd[1597]: time="2025-09-04T00:04:53.055889406Z" level=info msg="connecting to shim c56dc8483faa8fac832ee56ac1df662a2b8feeb7703fede45b927b8079b3b3b7" address="unix:///run/containerd/s/8a8616dbf31fb2b68ce0b46aa19f4989023891723ee5a2a49aba604bfe8b2c6d" protocol=ttrpc version=3 Sep 4 00:04:53.118247 systemd[1]: Started cri-containerd-c56dc8483faa8fac832ee56ac1df662a2b8feeb7703fede45b927b8079b3b3b7.scope - libcontainer container c56dc8483faa8fac832ee56ac1df662a2b8feeb7703fede45b927b8079b3b3b7. 
Sep 4 00:04:53.179439 containerd[1597]: time="2025-09-04T00:04:53.179395212Z" level=info msg="StartContainer for \"c56dc8483faa8fac832ee56ac1df662a2b8feeb7703fede45b927b8079b3b3b7\" returns successfully" Sep 4 00:04:53.792054 containerd[1597]: time="2025-09-04T00:04:53.792002739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c56dc8483faa8fac832ee56ac1df662a2b8feeb7703fede45b927b8079b3b3b7\" id:\"fb7b4875a29e59632603e49cc0cd993d11e8dc1de3f3d74da47815bf474f96bf\" pid:5140 exit_status:1 exited_at:{seconds:1756944293 nanos:791528684}" Sep 4 00:04:54.796394 containerd[1597]: time="2025-09-04T00:04:54.796344186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c56dc8483faa8fac832ee56ac1df662a2b8feeb7703fede45b927b8079b3b3b7\" id:\"8fda71490ab17b9fd2229343f827bba3a3f27725a69a76c0aca58fd56008493c\" pid:5166 exit_status:1 exited_at:{seconds:1756944294 nanos:795973598}" Sep 4 00:04:55.830037 systemd[1]: Started sshd@12-10.0.0.100:22-10.0.0.1:59870.service - OpenSSH per-connection server daemon (10.0.0.1:59870). Sep 4 00:04:55.903164 sshd[5180]: Accepted publickey for core from 10.0.0.1 port 59870 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:04:55.905286 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:55.910681 systemd-logind[1575]: New session 13 of user core. Sep 4 00:04:55.917951 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 00:04:56.110505 sshd[5182]: Connection closed by 10.0.0.1 port 59870 Sep 4 00:04:56.110328 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:56.124283 systemd[1]: sshd@12-10.0.0.100:22-10.0.0.1:59870.service: Deactivated successfully. Sep 4 00:04:56.126589 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 00:04:56.128147 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit. 
Sep 4 00:04:56.134147 systemd[1]: Started sshd@13-10.0.0.100:22-10.0.0.1:59882.service - OpenSSH per-connection server daemon (10.0.0.1:59882). Sep 4 00:04:56.135031 systemd-logind[1575]: Removed session 13. Sep 4 00:04:56.185554 sshd[5201]: Accepted publickey for core from 10.0.0.1 port 59882 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:04:56.186903 sshd-session[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:56.195863 systemd-logind[1575]: New session 14 of user core. Sep 4 00:04:56.201141 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 00:04:56.322523 kubelet[2785]: I0904 00:04:56.322466 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:04:56.392317 containerd[1597]: time="2025-09-04T00:04:56.392180475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fcf193fb047f57c8ebe5da215d81d72d530cba25c144c80d2fdce513693866dc\" id:\"212cc656a0543bd1b6eb222262ee6000ba17ba8c513e353f82223ce038a7aba5\" pid:5222 exited_at:{seconds:1756944296 nanos:390946180}" Sep 4 00:04:56.403077 sshd[5203]: Connection closed by 10.0.0.1 port 59882 Sep 4 00:04:56.400726 sshd-session[5201]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:56.412263 kubelet[2785]: I0904 00:04:56.412194 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-fshvr" podStartSLOduration=28.526638575 podStartE2EDuration="39.41217552s" podCreationTimestamp="2025-09-04 00:04:17 +0000 UTC" firstStartedPulling="2025-09-04 00:04:42.144593541 +0000 UTC m=+42.810968753" lastFinishedPulling="2025-09-04 00:04:53.030130486 +0000 UTC m=+53.696505698" observedRunningTime="2025-09-04 00:04:53.951718798 +0000 UTC m=+54.618094010" watchObservedRunningTime="2025-09-04 00:04:56.41217552 +0000 UTC m=+57.078550732" Sep 4 00:04:56.417304 systemd[1]: sshd@13-10.0.0.100:22-10.0.0.1:59882.service: Deactivated 
successfully. Sep 4 00:04:56.420275 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 00:04:56.426419 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit. Sep 4 00:04:56.435388 systemd[1]: Started sshd@14-10.0.0.100:22-10.0.0.1:59886.service - OpenSSH per-connection server daemon (10.0.0.1:59886). Sep 4 00:04:56.438512 systemd-logind[1575]: Removed session 14. Sep 4 00:04:56.467180 containerd[1597]: time="2025-09-04T00:04:56.467123656Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fcf193fb047f57c8ebe5da215d81d72d530cba25c144c80d2fdce513693866dc\" id:\"8fcaac29decd1854acb5407f76e4ed9ef06371626aab1e28377f32f7e725bd96\" pid:5250 exited_at:{seconds:1756944296 nanos:466771361}" Sep 4 00:04:56.486327 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 59886 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:04:56.489015 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:56.495575 systemd-logind[1575]: New session 15 of user core. Sep 4 00:04:56.503993 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 00:04:56.655252 sshd[5261]: Connection closed by 10.0.0.1 port 59886 Sep 4 00:04:56.655088 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:56.660372 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit. Sep 4 00:04:56.662980 systemd[1]: sshd@14-10.0.0.100:22-10.0.0.1:59886.service: Deactivated successfully. Sep 4 00:04:56.666011 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 00:04:56.667713 systemd-logind[1575]: Removed session 15. 
Sep 4 00:04:57.075486 containerd[1597]: time="2025-09-04T00:04:57.075429812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:57.076445 containerd[1597]: time="2025-09-04T00:04:57.076411915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 4 00:04:57.077965 containerd[1597]: time="2025-09-04T00:04:57.077929250Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:57.080105 containerd[1597]: time="2025-09-04T00:04:57.080054660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:57.080581 containerd[1597]: time="2025-09-04T00:04:57.080557460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.050107905s" Sep 4 00:04:57.080620 containerd[1597]: time="2025-09-04T00:04:57.080584853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 00:04:57.081826 containerd[1597]: time="2025-09-04T00:04:57.081758979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 00:04:57.082819 containerd[1597]: time="2025-09-04T00:04:57.082748253Z" level=info msg="CreateContainer within sandbox 
\"3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 00:04:57.091004 containerd[1597]: time="2025-09-04T00:04:57.090956904Z" level=info msg="Container ff3c8fccc53ed79557d8202f88e6cddd798f798d297cf9750300ee489be67b89: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:57.100731 containerd[1597]: time="2025-09-04T00:04:57.100678573Z" level=info msg="CreateContainer within sandbox \"3d1ffcebc95493662c5dda1b5c1bfcccbd65d477e793cbbbb5d678b09e8d3792\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ff3c8fccc53ed79557d8202f88e6cddd798f798d297cf9750300ee489be67b89\"" Sep 4 00:04:57.101312 containerd[1597]: time="2025-09-04T00:04:57.101269671Z" level=info msg="StartContainer for \"ff3c8fccc53ed79557d8202f88e6cddd798f798d297cf9750300ee489be67b89\"" Sep 4 00:04:57.102192 containerd[1597]: time="2025-09-04T00:04:57.102171368Z" level=info msg="connecting to shim ff3c8fccc53ed79557d8202f88e6cddd798f798d297cf9750300ee489be67b89" address="unix:///run/containerd/s/197900da04fab038289eda3283a55fda450c845679064b1a2e286614aea3e7fe" protocol=ttrpc version=3 Sep 4 00:04:57.124931 systemd[1]: Started cri-containerd-ff3c8fccc53ed79557d8202f88e6cddd798f798d297cf9750300ee489be67b89.scope - libcontainer container ff3c8fccc53ed79557d8202f88e6cddd798f798d297cf9750300ee489be67b89. 
Sep 4 00:04:57.201198 containerd[1597]: time="2025-09-04T00:04:57.201155873Z" level=info msg="StartContainer for \"ff3c8fccc53ed79557d8202f88e6cddd798f798d297cf9750300ee489be67b89\" returns successfully" Sep 4 00:04:57.457238 containerd[1597]: time="2025-09-04T00:04:57.457085042Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:57.458359 containerd[1597]: time="2025-09-04T00:04:57.458332872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 4 00:04:57.460674 containerd[1597]: time="2025-09-04T00:04:57.460597782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 378.766811ms" Sep 4 00:04:57.460674 containerd[1597]: time="2025-09-04T00:04:57.460648852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 00:04:57.462113 containerd[1597]: time="2025-09-04T00:04:57.461872685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 4 00:04:57.464627 containerd[1597]: time="2025-09-04T00:04:57.464576945Z" level=info msg="CreateContainer within sandbox \"5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 00:04:57.476816 containerd[1597]: time="2025-09-04T00:04:57.474436251Z" level=info msg="Container 1256a76774596991a82a9505ad76c221f238d3e86a7e42abd3cedff895b6d23f: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:04:57.485348 containerd[1597]: 
time="2025-09-04T00:04:57.485293904Z" level=info msg="CreateContainer within sandbox \"5d2ae0a6e5b7333816eb57cde4a1ecb3b38bebdae8cbae5781ee632fb320411f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1256a76774596991a82a9505ad76c221f238d3e86a7e42abd3cedff895b6d23f\"" Sep 4 00:04:57.486116 containerd[1597]: time="2025-09-04T00:04:57.486082364Z" level=info msg="StartContainer for \"1256a76774596991a82a9505ad76c221f238d3e86a7e42abd3cedff895b6d23f\"" Sep 4 00:04:57.487572 containerd[1597]: time="2025-09-04T00:04:57.487538743Z" level=info msg="connecting to shim 1256a76774596991a82a9505ad76c221f238d3e86a7e42abd3cedff895b6d23f" address="unix:///run/containerd/s/47e8ca2083f411e3665f60e9accd12df9fec6e798bfcdc10380108b2153847f2" protocol=ttrpc version=3 Sep 4 00:04:57.512936 systemd[1]: Started cri-containerd-1256a76774596991a82a9505ad76c221f238d3e86a7e42abd3cedff895b6d23f.scope - libcontainer container 1256a76774596991a82a9505ad76c221f238d3e86a7e42abd3cedff895b6d23f. 
Sep 4 00:04:57.576869 containerd[1597]: time="2025-09-04T00:04:57.576766833Z" level=info msg="StartContainer for \"1256a76774596991a82a9505ad76c221f238d3e86a7e42abd3cedff895b6d23f\" returns successfully" Sep 4 00:04:57.822172 kubelet[2785]: I0904 00:04:57.822092 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-598f7f498f-g95k2" podStartSLOduration=29.539182151 podStartE2EDuration="43.822066026s" podCreationTimestamp="2025-09-04 00:04:14 +0000 UTC" firstStartedPulling="2025-09-04 00:04:43.178655797 +0000 UTC m=+43.845031010" lastFinishedPulling="2025-09-04 00:04:57.461539673 +0000 UTC m=+58.127914885" observedRunningTime="2025-09-04 00:04:57.821535072 +0000 UTC m=+58.487910304" watchObservedRunningTime="2025-09-04 00:04:57.822066026 +0000 UTC m=+58.488441248" Sep 4 00:04:57.822775 kubelet[2785]: I0904 00:04:57.822213 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-598f7f498f-b56pq" podStartSLOduration=29.804562766 podStartE2EDuration="43.822205646s" podCreationTimestamp="2025-09-04 00:04:14 +0000 UTC" firstStartedPulling="2025-09-04 00:04:43.063706885 +0000 UTC m=+43.730082097" lastFinishedPulling="2025-09-04 00:04:57.081349765 +0000 UTC m=+57.747724977" observedRunningTime="2025-09-04 00:04:57.808885272 +0000 UTC m=+58.475260484" watchObservedRunningTime="2025-09-04 00:04:57.822205646 +0000 UTC m=+58.488580858" Sep 4 00:04:58.728415 kubelet[2785]: I0904 00:04:58.728008 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:04:59.220820 containerd[1597]: time="2025-09-04T00:04:59.220747895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:59.222001 containerd[1597]: time="2025-09-04T00:04:59.221967070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: 
active requests=0, bytes read=14698542" Sep 4 00:04:59.223370 containerd[1597]: time="2025-09-04T00:04:59.223334498Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:59.225733 containerd[1597]: time="2025-09-04T00:04:59.225700978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:04:59.226380 containerd[1597]: time="2025-09-04T00:04:59.226355261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.764454511s" Sep 4 00:04:59.226441 containerd[1597]: time="2025-09-04T00:04:59.226381993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 4 00:04:59.227538 containerd[1597]: time="2025-09-04T00:04:59.227213032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 4 00:04:59.228452 containerd[1597]: time="2025-09-04T00:04:59.228424135Z" level=info msg="CreateContainer within sandbox \"cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 00:04:59.242718 containerd[1597]: time="2025-09-04T00:04:59.242651718Z" level=info msg="Container 0b6e91ad23ffdb7f259271ac5435b7f4523df4ae111f59c39a2ee288d86a8fd5: CDI devices from CRI Config.CDIDevices: []" Sep 4 
00:04:59.283173 containerd[1597]: time="2025-09-04T00:04:59.283112193Z" level=info msg="CreateContainer within sandbox \"cbefce5e91b5ce7492cd73af0b374f25cd16cf386415d041ed24b3ff6bc8a281\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0b6e91ad23ffdb7f259271ac5435b7f4523df4ae111f59c39a2ee288d86a8fd5\"" Sep 4 00:04:59.283749 containerd[1597]: time="2025-09-04T00:04:59.283708304Z" level=info msg="StartContainer for \"0b6e91ad23ffdb7f259271ac5435b7f4523df4ae111f59c39a2ee288d86a8fd5\"" Sep 4 00:04:59.285488 containerd[1597]: time="2025-09-04T00:04:59.285438830Z" level=info msg="connecting to shim 0b6e91ad23ffdb7f259271ac5435b7f4523df4ae111f59c39a2ee288d86a8fd5" address="unix:///run/containerd/s/d74c24b2c00e4b88e3239d5cdd6f14312acbde1e9ac0505febdb0b1a7e022d49" protocol=ttrpc version=3 Sep 4 00:04:59.316981 systemd[1]: Started cri-containerd-0b6e91ad23ffdb7f259271ac5435b7f4523df4ae111f59c39a2ee288d86a8fd5.scope - libcontainer container 0b6e91ad23ffdb7f259271ac5435b7f4523df4ae111f59c39a2ee288d86a8fd5. Sep 4 00:04:59.362771 containerd[1597]: time="2025-09-04T00:04:59.362724526Z" level=info msg="StartContainer for \"0b6e91ad23ffdb7f259271ac5435b7f4523df4ae111f59c39a2ee288d86a8fd5\" returns successfully" Sep 4 00:04:59.636055 kubelet[2785]: I0904 00:04:59.635930 2785 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 00:04:59.636055 kubelet[2785]: I0904 00:04:59.635988 2785 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 00:05:01.666910 systemd[1]: Started sshd@15-10.0.0.100:22-10.0.0.1:38030.service - OpenSSH per-connection server daemon (10.0.0.1:38030). 
Sep 4 00:05:01.731150 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 38030 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:05:01.733970 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:05:01.744679 systemd-logind[1575]: New session 16 of user core. Sep 4 00:05:01.750917 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 00:05:01.795609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719224903.mount: Deactivated successfully. Sep 4 00:05:01.983337 sshd[5406]: Connection closed by 10.0.0.1 port 38030 Sep 4 00:05:01.983639 sshd-session[5403]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:01.987251 systemd[1]: sshd@15-10.0.0.100:22-10.0.0.1:38030.service: Deactivated successfully. Sep 4 00:05:01.989629 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 00:05:01.990580 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit. Sep 4 00:05:01.992944 systemd-logind[1575]: Removed session 16. 
Sep 4 00:05:02.064971 containerd[1597]: time="2025-09-04T00:05:02.064918833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:02.066075 containerd[1597]: time="2025-09-04T00:05:02.065910774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 4 00:05:02.067211 containerd[1597]: time="2025-09-04T00:05:02.067178432Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:02.069970 containerd[1597]: time="2025-09-04T00:05:02.069929064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:02.070603 containerd[1597]: time="2025-09-04T00:05:02.070570089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.843332849s" Sep 4 00:05:02.070603 containerd[1597]: time="2025-09-04T00:05:02.070601207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 4 00:05:02.072430 containerd[1597]: time="2025-09-04T00:05:02.072394672Z" level=info msg="CreateContainer within sandbox \"1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 4 00:05:02.081123 
containerd[1597]: time="2025-09-04T00:05:02.081070954Z" level=info msg="Container 281d89e1248c9961eeb89fb26c853526d33261f0439058f50189be962f80f063: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:02.090905 containerd[1597]: time="2025-09-04T00:05:02.090845533Z" level=info msg="CreateContainer within sandbox \"1bef050070870c772b869aec13f26ade3759f9bcfcc8080e78c159c7d5c3ad30\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"281d89e1248c9961eeb89fb26c853526d33261f0439058f50189be962f80f063\"" Sep 4 00:05:02.091420 containerd[1597]: time="2025-09-04T00:05:02.091376127Z" level=info msg="StartContainer for \"281d89e1248c9961eeb89fb26c853526d33261f0439058f50189be962f80f063\"" Sep 4 00:05:02.092677 containerd[1597]: time="2025-09-04T00:05:02.092644256Z" level=info msg="connecting to shim 281d89e1248c9961eeb89fb26c853526d33261f0439058f50189be962f80f063" address="unix:///run/containerd/s/88df81c4d158e5b262945a764cb37dde6730178e407be462658cc1bb77339324" protocol=ttrpc version=3 Sep 4 00:05:02.126969 systemd[1]: Started cri-containerd-281d89e1248c9961eeb89fb26c853526d33261f0439058f50189be962f80f063.scope - libcontainer container 281d89e1248c9961eeb89fb26c853526d33261f0439058f50189be962f80f063. 
Sep 4 00:05:02.179045 containerd[1597]: time="2025-09-04T00:05:02.178998922Z" level=info msg="StartContainer for \"281d89e1248c9961eeb89fb26c853526d33261f0439058f50189be962f80f063\" returns successfully" Sep 4 00:05:02.755365 kubelet[2785]: I0904 00:05:02.755282 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-78n75" podStartSLOduration=28.350648018 podStartE2EDuration="45.755258041s" podCreationTimestamp="2025-09-04 00:04:17 +0000 UTC" firstStartedPulling="2025-09-04 00:04:41.822465203 +0000 UTC m=+42.488840415" lastFinishedPulling="2025-09-04 00:04:59.227075226 +0000 UTC m=+59.893450438" observedRunningTime="2025-09-04 00:04:59.747095709 +0000 UTC m=+60.413470921" watchObservedRunningTime="2025-09-04 00:05:02.755258041 +0000 UTC m=+63.421633253" Sep 4 00:05:06.994996 systemd[1]: Started sshd@16-10.0.0.100:22-10.0.0.1:38040.service - OpenSSH per-connection server daemon (10.0.0.1:38040). Sep 4 00:05:07.048874 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 38040 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8 Sep 4 00:05:07.050249 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:05:07.054431 systemd-logind[1575]: New session 17 of user core. Sep 4 00:05:07.061909 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 00:05:07.270520 sshd[5477]: Connection closed by 10.0.0.1 port 38040 Sep 4 00:05:07.270620 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:07.274728 systemd[1]: sshd@16-10.0.0.100:22-10.0.0.1:38040.service: Deactivated successfully. Sep 4 00:05:07.276961 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 00:05:07.277746 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit. Sep 4 00:05:07.279440 systemd-logind[1575]: Removed session 17. 
Sep 4 00:05:08.643918 containerd[1597]: time="2025-09-04T00:05:08.643860717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5221b88c12d4e96e4e012b9bdd30e1a2d346d6c78be89d3f1aa365dd4761621a\" id:\"7e06e0a9f29852651bdd71de21bd7c81012c428f64962a49cccf13af36eb9571\" pid:5501 exited_at:{seconds:1756944308 nanos:643452090}"
Sep 4 00:05:08.661602 kubelet[2785]: I0904 00:05:08.661348 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-bc48c4f66-4h96m" podStartSLOduration=8.454358816 podStartE2EDuration="28.661326523s" podCreationTimestamp="2025-09-04 00:04:40 +0000 UTC" firstStartedPulling="2025-09-04 00:04:41.864364014 +0000 UTC m=+42.530739216" lastFinishedPulling="2025-09-04 00:05:02.071331711 +0000 UTC m=+62.737706923" observedRunningTime="2025-09-04 00:05:02.755973542 +0000 UTC m=+63.422348754" watchObservedRunningTime="2025-09-04 00:05:08.661326523 +0000 UTC m=+69.327701725"
Sep 4 00:05:09.490635 kubelet[2785]: E0904 00:05:09.490577 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:10.490150 kubelet[2785]: E0904 00:05:10.490082 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:11.490267 kubelet[2785]: E0904 00:05:11.490224 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:12.294049 systemd[1]: Started sshd@17-10.0.0.100:22-10.0.0.1:43428.service - OpenSSH per-connection server daemon (10.0.0.1:43428).
Sep 4 00:05:12.360447 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 43428 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:12.362573 sshd-session[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:12.368206 systemd-logind[1575]: New session 18 of user core.
Sep 4 00:05:12.374030 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 00:05:12.504356 sshd[5519]: Connection closed by 10.0.0.1 port 43428
Sep 4 00:05:12.504761 sshd-session[5517]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:12.510169 systemd[1]: sshd@17-10.0.0.100:22-10.0.0.1:43428.service: Deactivated successfully.
Sep 4 00:05:12.512335 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 00:05:12.513125 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit.
Sep 4 00:05:12.514540 systemd-logind[1575]: Removed session 18.
Sep 4 00:05:15.926246 containerd[1597]: time="2025-09-04T00:05:15.926186822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c56dc8483faa8fac832ee56ac1df662a2b8feeb7703fede45b927b8079b3b3b7\" id:\"1f2b727024efed68488770d70ea7ac73d904ce354521430f594333b0b081cd68\" pid:5544 exited_at:{seconds:1756944315 nanos:925561499}"
Sep 4 00:05:17.516622 systemd[1]: Started sshd@18-10.0.0.100:22-10.0.0.1:43440.service - OpenSSH per-connection server daemon (10.0.0.1:43440).
Sep 4 00:05:17.588704 sshd[5559]: Accepted publickey for core from 10.0.0.1 port 43440 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:17.591072 sshd-session[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:17.599134 systemd-logind[1575]: New session 19 of user core.
Sep 4 00:05:17.606110 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 00:05:17.856699 sshd[5561]: Connection closed by 10.0.0.1 port 43440
Sep 4 00:05:17.858069 sshd-session[5559]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:17.872169 systemd[1]: sshd@18-10.0.0.100:22-10.0.0.1:43440.service: Deactivated successfully.
Sep 4 00:05:17.874241 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 00:05:17.876517 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit.
Sep 4 00:05:17.885196 systemd[1]: Started sshd@19-10.0.0.100:22-10.0.0.1:43452.service - OpenSSH per-connection server daemon (10.0.0.1:43452).
Sep 4 00:05:17.888010 systemd-logind[1575]: Removed session 19.
Sep 4 00:05:17.938517 sshd[5574]: Accepted publickey for core from 10.0.0.1 port 43452 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:17.940181 sshd-session[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:17.946523 systemd-logind[1575]: New session 20 of user core.
Sep 4 00:05:17.957542 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 00:05:18.313841 sshd[5576]: Connection closed by 10.0.0.1 port 43452
Sep 4 00:05:18.311310 sshd-session[5574]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:18.330950 systemd[1]: sshd@19-10.0.0.100:22-10.0.0.1:43452.service: Deactivated successfully.
Sep 4 00:05:18.333315 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 00:05:18.334831 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit.
Sep 4 00:05:18.338322 systemd[1]: Started sshd@20-10.0.0.100:22-10.0.0.1:43464.service - OpenSSH per-connection server daemon (10.0.0.1:43464).
Sep 4 00:05:18.340886 systemd-logind[1575]: Removed session 20.
Sep 4 00:05:18.390672 sshd[5587]: Accepted publickey for core from 10.0.0.1 port 43464 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:18.392532 sshd-session[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:18.397872 systemd-logind[1575]: New session 21 of user core.
Sep 4 00:05:18.411082 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 00:05:20.056913 sshd[5589]: Connection closed by 10.0.0.1 port 43464
Sep 4 00:05:20.057506 sshd-session[5587]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:20.072594 systemd[1]: sshd@20-10.0.0.100:22-10.0.0.1:43464.service: Deactivated successfully.
Sep 4 00:05:20.074659 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 00:05:20.076638 systemd[1]: session-21.scope: Consumed 642ms CPU time, 71.4M memory peak.
Sep 4 00:05:20.079114 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit.
Sep 4 00:05:20.084138 systemd[1]: Started sshd@21-10.0.0.100:22-10.0.0.1:55808.service - OpenSSH per-connection server daemon (10.0.0.1:55808).
Sep 4 00:05:20.085646 systemd-logind[1575]: Removed session 21.
Sep 4 00:05:20.155208 sshd[5607]: Accepted publickey for core from 10.0.0.1 port 55808 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:20.160827 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:20.176033 systemd-logind[1575]: New session 22 of user core.
Sep 4 00:05:20.178560 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 00:05:20.490738 kubelet[2785]: E0904 00:05:20.490699 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 00:05:20.796995 sshd[5611]: Connection closed by 10.0.0.1 port 55808
Sep 4 00:05:20.799007 sshd-session[5607]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:20.810776 systemd[1]: sshd@21-10.0.0.100:22-10.0.0.1:55808.service: Deactivated successfully.
Sep 4 00:05:20.813201 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 00:05:20.814140 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit.
Sep 4 00:05:20.818585 systemd[1]: Started sshd@22-10.0.0.100:22-10.0.0.1:55810.service - OpenSSH per-connection server daemon (10.0.0.1:55810).
Sep 4 00:05:20.820110 systemd-logind[1575]: Removed session 22.
Sep 4 00:05:20.879188 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 55810 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:20.880455 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:20.884935 systemd-logind[1575]: New session 23 of user core.
Sep 4 00:05:20.891942 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 00:05:21.046086 sshd[5624]: Connection closed by 10.0.0.1 port 55810
Sep 4 00:05:21.046446 sshd-session[5622]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:21.051459 systemd[1]: sshd@22-10.0.0.100:22-10.0.0.1:55810.service: Deactivated successfully.
Sep 4 00:05:21.054030 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 00:05:21.055225 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit.
Sep 4 00:05:21.057310 systemd-logind[1575]: Removed session 23.
Sep 4 00:05:23.626548 kubelet[2785]: I0904 00:05:23.626484 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 00:05:26.068011 systemd[1]: Started sshd@23-10.0.0.100:22-10.0.0.1:55812.service - OpenSSH per-connection server daemon (10.0.0.1:55812).
Sep 4 00:05:26.117814 sshd[5652]: Accepted publickey for core from 10.0.0.1 port 55812 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:26.120248 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:26.126982 systemd-logind[1575]: New session 24 of user core.
Sep 4 00:05:26.130045 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 00:05:26.381608 sshd[5654]: Connection closed by 10.0.0.1 port 55812
Sep 4 00:05:26.382571 sshd-session[5652]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:26.384310 containerd[1597]: time="2025-09-04T00:05:26.384260033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fcf193fb047f57c8ebe5da215d81d72d530cba25c144c80d2fdce513693866dc\" id:\"d49de16d62caaa7b14493876d1fba799885a7adfc408eb39286a817aa42fc731\" pid:5675 exited_at:{seconds:1756944326 nanos:383957476}"
Sep 4 00:05:26.389334 systemd-logind[1575]: Session 24 logged out. Waiting for processes to exit.
Sep 4 00:05:26.392357 systemd[1]: sshd@23-10.0.0.100:22-10.0.0.1:55812.service: Deactivated successfully.
Sep 4 00:05:26.395757 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 00:05:26.399200 systemd-logind[1575]: Removed session 24.
Sep 4 00:05:31.398279 systemd[1]: Started sshd@24-10.0.0.100:22-10.0.0.1:36732.service - OpenSSH per-connection server daemon (10.0.0.1:36732).
Sep 4 00:05:31.444713 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 36732 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:31.446621 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:31.452962 systemd-logind[1575]: New session 25 of user core.
Sep 4 00:05:31.459065 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 00:05:31.584451 sshd[5691]: Connection closed by 10.0.0.1 port 36732
Sep 4 00:05:31.584829 sshd-session[5689]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:31.591421 systemd[1]: sshd@24-10.0.0.100:22-10.0.0.1:36732.service: Deactivated successfully.
Sep 4 00:05:31.593729 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 00:05:31.594772 systemd-logind[1575]: Session 25 logged out. Waiting for processes to exit.
Sep 4 00:05:31.596318 systemd-logind[1575]: Removed session 25.
Sep 4 00:05:36.599675 systemd[1]: Started sshd@25-10.0.0.100:22-10.0.0.1:36736.service - OpenSSH per-connection server daemon (10.0.0.1:36736).
Sep 4 00:05:36.650364 sshd[5706]: Accepted publickey for core from 10.0.0.1 port 36736 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:36.651768 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:36.656281 systemd-logind[1575]: New session 26 of user core.
Sep 4 00:05:36.665957 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 00:05:36.778259 sshd[5708]: Connection closed by 10.0.0.1 port 36736
Sep 4 00:05:36.778659 sshd-session[5706]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:36.782804 systemd[1]: sshd@25-10.0.0.100:22-10.0.0.1:36736.service: Deactivated successfully.
Sep 4 00:05:36.784928 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 00:05:36.786689 systemd-logind[1575]: Session 26 logged out. Waiting for processes to exit.
Sep 4 00:05:36.787921 systemd-logind[1575]: Removed session 26.
Sep 4 00:05:38.642173 containerd[1597]: time="2025-09-04T00:05:38.642119559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5221b88c12d4e96e4e012b9bdd30e1a2d346d6c78be89d3f1aa365dd4761621a\" id:\"ce16660b9fce931716895bc7beb47cad716e4ec0612d1eb7cf27cca1d4bde1ed\" pid:5732 exited_at:{seconds:1756944338 nanos:638274539}"
Sep 4 00:05:41.796463 systemd[1]: Started sshd@26-10.0.0.100:22-10.0.0.1:44350.service - OpenSSH per-connection server daemon (10.0.0.1:44350).
Sep 4 00:05:41.849454 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 44350 ssh2: RSA SHA256:FRkp18PXLSvC/zf2oYaAB+FehlfzglsjijFYtmrSrM8
Sep 4 00:05:41.851244 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:05:41.856301 systemd-logind[1575]: New session 27 of user core.
Sep 4 00:05:41.863051 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 00:05:41.971096 sshd[5748]: Connection closed by 10.0.0.1 port 44350
Sep 4 00:05:41.971416 sshd-session[5746]: pam_unix(sshd:session): session closed for user core
Sep 4 00:05:41.975529 systemd[1]: sshd@26-10.0.0.100:22-10.0.0.1:44350.service: Deactivated successfully.
Sep 4 00:05:41.977890 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 00:05:41.978661 systemd-logind[1575]: Session 27 logged out. Waiting for processes to exit.
Sep 4 00:05:41.980067 systemd-logind[1575]: Removed session 27.