Sep 9 00:21:12.132211 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:16:40 -00 2025
Sep 9 00:21:12.132247 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:21:12.132392 kernel: BIOS-provided physical RAM map:
Sep 9 00:21:12.132407 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 9 00:21:12.132416 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 9 00:21:12.132426 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 9 00:21:12.132437 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 9 00:21:12.132452 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 9 00:21:12.132468 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 9 00:21:12.132478 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 9 00:21:12.132487 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:21:12.132496 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 9 00:21:12.132505 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:21:12.132515 kernel: NX (Execute Disable) protection: active
Sep 9 00:21:12.132531 kernel: APIC: Static calls initialized
Sep 9 00:21:12.132541 kernel: SMBIOS 2.8 present.
Sep 9 00:21:12.132551 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 9 00:21:12.132560 kernel: DMI: Memory slots populated: 1/1
Sep 9 00:21:12.132570 kernel: Hypervisor detected: KVM
Sep 9 00:21:12.132579 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 00:21:12.132587 kernel: kvm-clock: using sched offset of 7271225731 cycles
Sep 9 00:21:12.132596 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 00:21:12.132605 kernel: tsc: Detected 2794.748 MHz processor
Sep 9 00:21:12.132617 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 00:21:12.132626 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 00:21:12.132634 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 9 00:21:12.132643 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 9 00:21:12.132651 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 00:21:12.132660 kernel: Using GB pages for direct mapping
Sep 9 00:21:12.132668 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:21:12.132676 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 9 00:21:12.132685 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:21:12.132695 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:21:12.132704 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:21:12.132712 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 9 00:21:12.132721 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:21:12.132729 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:21:12.132738 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:21:12.132746 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:21:12.132755 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 9 00:21:12.132769 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 9 00:21:12.132778 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 9 00:21:12.132786 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 9 00:21:12.132795 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 9 00:21:12.132804 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 9 00:21:12.132813 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 9 00:21:12.132824 kernel: No NUMA configuration found
Sep 9 00:21:12.132833 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 9 00:21:12.132841 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 9 00:21:12.132850 kernel: Zone ranges:
Sep 9 00:21:12.132859 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 00:21:12.132868 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 9 00:21:12.132879 kernel: Normal empty
Sep 9 00:21:12.132890 kernel: Device empty
Sep 9 00:21:12.132899 kernel: Movable zone start for each node
Sep 9 00:21:12.132911 kernel: Early memory node ranges
Sep 9 00:21:12.132919 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 9 00:21:12.132928 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 9 00:21:12.132937 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 9 00:21:12.132946 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:21:12.132954 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 9 00:21:12.132963 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 9 00:21:12.132972 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 00:21:12.132981 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 00:21:12.132991 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 00:21:12.133000 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 00:21:12.133009 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 00:21:12.133022 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 00:21:12.133031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 00:21:12.133039 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 00:21:12.133048 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 00:21:12.133057 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 00:21:12.133065 kernel: TSC deadline timer available
Sep 9 00:21:12.133074 kernel: CPU topo: Max. logical packages: 1
Sep 9 00:21:12.133085 kernel: CPU topo: Max. logical dies: 1
Sep 9 00:21:12.133094 kernel: CPU topo: Max. dies per package: 1
Sep 9 00:21:12.133102 kernel: CPU topo: Max. threads per core: 1
Sep 9 00:21:12.133111 kernel: CPU topo: Num. cores per package: 4
Sep 9 00:21:12.133120 kernel: CPU topo: Num. threads per package: 4
Sep 9 00:21:12.133128 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 00:21:12.133137 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 00:21:12.133146 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 00:21:12.133155 kernel: kvm-guest: setup PV sched yield
Sep 9 00:21:12.133166 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 9 00:21:12.133174 kernel: Booting paravirtualized kernel on KVM
Sep 9 00:21:12.133183 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 00:21:12.133192 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 00:21:12.133201 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 00:21:12.133210 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 00:21:12.133218 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 00:21:12.133227 kernel: kvm-guest: PV spinlocks enabled
Sep 9 00:21:12.133235 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 00:21:12.133248 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:21:12.133257 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:21:12.133279 kernel: random: crng init done
Sep 9 00:21:12.133287 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:21:12.133296 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:21:12.133305 kernel: Fallback order for Node 0: 0
Sep 9 00:21:12.133314 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 9 00:21:12.133323 kernel: Policy zone: DMA32
Sep 9 00:21:12.133334 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:21:12.133343 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:21:12.133352 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 9 00:21:12.133367 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 00:21:12.133376 kernel: Dynamic Preempt: voluntary
Sep 9 00:21:12.133385 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:21:12.133395 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:21:12.133404 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:21:12.133413 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:21:12.133425 kernel: Rude variant of Tasks RCU enabled.
Sep 9 00:21:12.133434 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:21:12.133443 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:21:12.133452 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:21:12.133461 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:21:12.133470 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:21:12.133479 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:21:12.133489 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 00:21:12.133498 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:21:12.133517 kernel: Console: colour VGA+ 80x25
Sep 9 00:21:12.133527 kernel: printk: legacy console [ttyS0] enabled
Sep 9 00:21:12.133549 kernel: ACPI: Core revision 20240827
Sep 9 00:21:12.133562 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 00:21:12.133572 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 00:21:12.133581 kernel: x2apic enabled
Sep 9 00:21:12.133594 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 00:21:12.133604 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 00:21:12.133613 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 00:21:12.133626 kernel: kvm-guest: setup PV IPIs
Sep 9 00:21:12.133636 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 00:21:12.133647 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:21:12.133658 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 9 00:21:12.133669 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 00:21:12.133680 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 00:21:12.133691 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 00:21:12.133703 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 00:21:12.133718 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 00:21:12.133730 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 00:21:12.133741 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 00:21:12.133752 kernel: active return thunk: retbleed_return_thunk
Sep 9 00:21:12.133763 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 00:21:12.133774 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 00:21:12.133785 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 00:21:12.133797 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 00:21:12.133813 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 00:21:12.133825 kernel: active return thunk: srso_return_thunk
Sep 9 00:21:12.133836 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 00:21:12.133847 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 00:21:12.133858 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 00:21:12.133870 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 00:21:12.133881 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 00:21:12.133892 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 00:21:12.133903 kernel: Freeing SMP alternatives memory: 32K
Sep 9 00:21:12.133918 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:21:12.133929 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 00:21:12.133941 kernel: landlock: Up and running.
Sep 9 00:21:12.133952 kernel: SELinux: Initializing.
Sep 9 00:21:12.133963 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:21:12.133974 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:21:12.133985 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 00:21:12.133996 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 00:21:12.134008 kernel: ... version: 0
Sep 9 00:21:12.134023 kernel: ... bit width: 48
Sep 9 00:21:12.134034 kernel: ... generic registers: 6
Sep 9 00:21:12.134045 kernel: ... value mask: 0000ffffffffffff
Sep 9 00:21:12.134057 kernel: ... max period: 00007fffffffffff
Sep 9 00:21:12.134068 kernel: ... fixed-purpose events: 0
Sep 9 00:21:12.134079 kernel: ... event mask: 000000000000003f
Sep 9 00:21:12.134090 kernel: signal: max sigframe size: 1776
Sep 9 00:21:12.134101 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:21:12.134113 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:21:12.134129 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 00:21:12.134140 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:21:12.134151 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 00:21:12.134162 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 00:21:12.134174 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:21:12.134185 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 9 00:21:12.134197 kernel: Memory: 2430968K/2571752K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 134856K reserved, 0K cma-reserved)
Sep 9 00:21:12.134208 kernel: devtmpfs: initialized
Sep 9 00:21:12.134219 kernel: x86/mm: Memory block size: 128MB
Sep 9 00:21:12.134235 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:21:12.134246 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:21:12.134257 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:21:12.134289 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:21:12.134300 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:21:12.134312 kernel: audit: type=2000 audit(1757377266.708:1): state=initialized audit_enabled=0 res=1
Sep 9 00:21:12.134323 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:21:12.134334 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 00:21:12.134345 kernel: cpuidle: using governor menu
Sep 9 00:21:12.134373 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:21:12.134385 kernel: dca service started, version 1.12.1
Sep 9 00:21:12.134396 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 9 00:21:12.134407 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 9 00:21:12.134419 kernel: PCI: Using configuration type 1 for base access
Sep 9 00:21:12.134430 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 00:21:12.134442 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:21:12.134453 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:21:12.134464 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:21:12.134480 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:21:12.134491 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:21:12.134502 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:21:12.134514 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:21:12.134526 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:21:12.134537 kernel: ACPI: Interpreter enabled
Sep 9 00:21:12.134548 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 00:21:12.134559 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 00:21:12.134570 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 00:21:12.134586 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 00:21:12.134598 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 00:21:12.134609 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:21:12.134917 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:21:12.135098 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 00:21:12.135293 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 00:21:12.135312 kernel: PCI host bridge to bus 0000:00
Sep 9 00:21:12.135522 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 00:21:12.135684 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 00:21:12.135850 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 00:21:12.136016 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 9 00:21:12.136176 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 9 00:21:12.136356 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 9 00:21:12.136527 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:21:12.136768 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 00:21:12.136967 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 00:21:12.137142 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 9 00:21:12.138437 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 9 00:21:12.138626 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 9 00:21:12.138796 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 00:21:12.138996 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 00:21:12.139180 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 9 00:21:12.139387 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 9 00:21:12.139564 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 9 00:21:12.139767 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 00:21:12.139964 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 9 00:21:12.140132 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 9 00:21:12.140328 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 9 00:21:12.140537 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 00:21:12.140712 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 9 00:21:12.140884 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 9 00:21:12.141069 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 9 00:21:12.141243 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 9 00:21:12.141504 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 00:21:12.141724 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 00:21:12.141913 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 00:21:12.142089 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 9 00:21:12.142282 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 9 00:21:12.142504 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 00:21:12.142681 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 9 00:21:12.142701 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 00:21:12.142719 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 00:21:12.142731 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 00:21:12.142743 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 00:21:12.142754 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 00:21:12.142765 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 00:21:12.142777 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 00:21:12.142789 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 00:21:12.142800 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 00:21:12.142816 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 00:21:12.142828 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 00:21:12.142839 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 00:21:12.142851 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 00:21:12.142862 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 00:21:12.142874 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 00:21:12.142885 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 00:21:12.142897 kernel: iommu: Default domain type: Translated
Sep 9 00:21:12.142908 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 00:21:12.142924 kernel: PCI: Using ACPI for IRQ routing
Sep 9 00:21:12.142935 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 00:21:12.142947 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 9 00:21:12.142959 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 9 00:21:12.143140 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 00:21:12.143335 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 00:21:12.143861 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 00:21:12.143882 kernel: vgaarb: loaded
Sep 9 00:21:12.143895 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 00:21:12.143914 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 00:21:12.143926 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 00:21:12.143937 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:21:12.143950 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:21:12.143961 kernel: pnp: PnP ACPI init
Sep 9 00:21:12.144206 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 9 00:21:12.144234 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 00:21:12.144246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 00:21:12.144277 kernel: NET: Registered PF_INET protocol family
Sep 9 00:21:12.144290 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:21:12.144302 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:21:12.144314 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:21:12.144325 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:21:12.144336 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:21:12.144347 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:21:12.144359 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:21:12.144380 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:21:12.144396 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:21:12.144408 kernel: NET: Registered PF_XDP protocol family
Sep 9 00:21:12.144574 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 00:21:12.144732 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 00:21:12.144959 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 00:21:12.147115 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 9 00:21:12.147325 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 9 00:21:12.147497 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 9 00:21:12.147524 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:21:12.147535 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:21:12.147547 kernel: Initialise system trusted keyrings
Sep 9 00:21:12.147559 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:21:12.147570 kernel: Key type asymmetric registered
Sep 9 00:21:12.147581 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:21:12.147593 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 00:21:12.147605 kernel: io scheduler mq-deadline registered
Sep 9 00:21:12.147616 kernel: io scheduler kyber registered
Sep 9 00:21:12.147632 kernel: io scheduler bfq registered
Sep 9 00:21:12.147644 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 00:21:12.147657 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 00:21:12.147668 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 00:21:12.147692 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 00:21:12.147705 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:21:12.147716 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 00:21:12.147727 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 00:21:12.147738 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 00:21:12.147754 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 00:21:12.147964 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 00:21:12.147987 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 9 00:21:12.148142 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 00:21:12.148318 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:21:11 UTC (1757377271)
Sep 9 00:21:12.148505 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 9 00:21:12.148524 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 9 00:21:12.148536 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:21:12.148555 kernel: Segment Routing with IPv6
Sep 9 00:21:12.148567 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:21:12.148579 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:21:12.148591 kernel: Key type dns_resolver registered
Sep 9 00:21:12.148603 kernel: IPI shorthand broadcast: enabled
Sep 9 00:21:12.148615 kernel: sched_clock: Marking stable (4725006509, 150034572)->(4927514940, -52473859)
Sep 9 00:21:12.148628 kernel: registered taskstats version 1
Sep 9 00:21:12.148640 kernel: Loading compiled-in X.509 certificates
Sep 9 00:21:12.148653 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 08d0986253b18b7fd74c2cc5404da4ba92260e75'
Sep 9 00:21:12.148669 kernel: Demotion targets for Node 0: null
Sep 9 00:21:12.148681 kernel: Key type .fscrypt registered
Sep 9 00:21:12.148693 kernel: Key type fscrypt-provisioning registered
Sep 9 00:21:12.148705 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:21:12.148717 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:21:12.148728 kernel: ima: No architecture policies found
Sep 9 00:21:12.148740 kernel: clk: Disabling unused clocks
Sep 9 00:21:12.148751 kernel: Warning: unable to open an initial console.
Sep 9 00:21:12.148769 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 9 00:21:12.148781 kernel: Write protecting the kernel read-only data: 24576k
Sep 9 00:21:12.148793 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 9 00:21:12.148805 kernel: Run /init as init process
Sep 9 00:21:12.148817 kernel: with arguments:
Sep 9 00:21:12.148828 kernel: /init
Sep 9 00:21:12.148840 kernel: with environment:
Sep 9 00:21:12.148851 kernel: HOME=/
Sep 9 00:21:12.148862 kernel: TERM=linux
Sep 9 00:21:12.148874 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:21:12.148892 systemd[1]: Successfully made /usr/ read-only.
Sep 9 00:21:12.148926 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 00:21:12.148944 systemd[1]: Detected virtualization kvm.
Sep 9 00:21:12.148957 systemd[1]: Detected architecture x86-64.
Sep 9 00:21:12.148970 systemd[1]: Running in initrd.
Sep 9 00:21:12.148988 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:21:12.149002 systemd[1]: Hostname set to .
Sep 9 00:21:12.149015 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:21:12.149028 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:21:12.149041 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:21:12.149055 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:21:12.149069 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:21:12.149082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:21:12.149101 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:21:12.149116 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:21:12.149131 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:21:12.149145 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:21:12.149158 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:21:12.149171 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:21:12.149184 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:21:12.149202 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:21:12.149216 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:21:12.149229 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:21:12.149243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:21:12.149256 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:21:12.149291 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:21:12.149306 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 00:21:12.149319 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:21:12.149338 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:21:12.149351 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:21:12.149375 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:21:12.149386 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:21:12.149398 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:21:12.149415 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 00:21:12.149433 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 00:21:12.149447 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:21:12.149461 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:21:12.149474 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:21:12.149488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:21:12.149501 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:21:12.149518 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:21:12.149529 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:21:12.149542 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:21:12.149591 systemd-journald[220]: Collecting audit messages is disabled. Sep 9 00:21:12.149631 systemd-journald[220]: Journal started Sep 9 00:21:12.149660 systemd-journald[220]: Runtime Journal (/run/log/journal/aaa115af37764952be15137433f96ec1) is 6M, max 48.6M, 42.5M free. Sep 9 00:21:12.136463 systemd-modules-load[222]: Inserted module 'overlay' Sep 9 00:21:12.203235 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:21:12.203301 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Sep 9 00:21:12.203323 kernel: Bridge firewalling registered Sep 9 00:21:12.198434 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 9 00:21:12.207510 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:21:12.210805 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:21:12.212042 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:21:12.241007 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:21:12.247565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:21:12.250538 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:21:12.268585 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:21:12.288748 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:21:12.293336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:21:12.303504 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:21:12.304591 systemd-tmpfiles[244]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 00:21:12.307456 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 00:21:12.315778 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:21:12.337887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
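The "bridge firewalling" warning above reflects that bridge netfilter has been a separate module (br_netfilter) since kernel 3.18, so it is no longer pulled in implicitly with the bridge driver. Making it available at every boot is typically done with drop-in files like the following — the paths are the standard systemd locations, shown here as a sketch rather than something this log confirms the host uses:

```
# /etc/modules-load.d/br_netfilter.conf — load the module at every boot
br_netfilter

# /etc/sysctl.d/99-bridge-nf.conf — pass bridged traffic through
# iptables/ip6tables once the module is loaded
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

In this boot, systemd-modules-load inserted br_netfilter itself ("Inserted module 'br_netfilter'"), which is why "Bridge firewalling registered" appears immediately afterwards.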
Sep 9 00:21:12.366413 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 00:21:12.456680 systemd-resolved[262]: Positive Trust Anchors: Sep 9 00:21:12.456699 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:21:12.456738 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:21:12.463567 systemd-resolved[262]: Defaulting to hostname 'linux'. Sep 9 00:21:12.466599 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:21:12.475531 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:21:12.625409 kernel: SCSI subsystem initialized Sep 9 00:21:12.636568 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:21:12.665323 kernel: iscsi: registered transport (tcp) Sep 9 00:21:12.708248 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:21:12.708375 kernel: QLogic iSCSI HBA Driver Sep 9 00:21:12.768311 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
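The dracut-cmdline entry above echoes the full kernel command line, including duplicated parameters (rootflags=rw and mount.usrflags=ro appear twice because dracut prepends its own defaults). A minimal Python sketch of how such a line can be split into key/value pairs — last-occurrence-wins is a simplification, since each real consumer (kernel, dracut, systemd) applies its own precedence rules:

```python
def parse_cmdline(cmdline: str) -> dict:
    """Parse a kernel command line into a {key: value} dict.

    Flag-style parameters with no '=' map to the empty string; when a
    key is repeated (as rootflags=rw is above), the last occurrence
    wins here, which is a simplification of real consumers' behavior.
    """
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")  # split at the first '='
        params[key] = value if sep else ""
    return params

# Abbreviated parameters from the log line above
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200 flatcar.first_boot=detected")
params = parse_cmdline(cmdline)
print(params["root"])     # LABEL=ROOT
print(params["console"])  # ttyS0,115200
```

Note that `partition("=")` splits only at the first equals sign, so values that themselves contain '=' (like root=LABEL=ROOT) survive intact.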
Sep 9 00:21:12.805255 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:21:12.816255 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:21:12.949463 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:21:12.957465 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:21:13.066390 kernel: raid6: avx2x4 gen() 19599 MB/s Sep 9 00:21:13.085385 kernel: raid6: avx2x2 gen() 17575 MB/s Sep 9 00:21:13.103383 kernel: raid6: avx2x1 gen() 15065 MB/s Sep 9 00:21:13.103480 kernel: raid6: using algorithm avx2x4 gen() 19599 MB/s Sep 9 00:21:13.126644 kernel: raid6: .... xor() 5413 MB/s, rmw enabled Sep 9 00:21:13.126735 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:21:13.167287 kernel: xor: automatically using best checksumming function avx Sep 9 00:21:13.667741 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:21:13.681023 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:21:13.689339 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:21:13.760004 systemd-udevd[471]: Using default interface naming scheme 'v255'. Sep 9 00:21:13.769140 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:21:13.776184 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:21:13.834679 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Sep 9 00:21:13.903589 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:21:13.908471 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:21:14.048868 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:21:14.065883 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 9 00:21:14.089314 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 00:21:14.104773 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:21:14.117948 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:21:14.118030 kernel: GPT:9289727 != 19775487 Sep 9 00:21:14.118049 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:21:14.118065 kernel: GPT:9289727 != 19775487 Sep 9 00:21:14.118080 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:21:14.118095 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:21:14.131301 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:21:14.160918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:21:14.162099 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:21:14.167362 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:21:14.173384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:21:14.181322 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:21:14.188301 kernel: libata version 3.00 loaded. Sep 9 00:21:14.205315 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 9 00:21:14.211297 kernel: AES CTR mode by8 optimization enabled Sep 9 00:21:14.232596 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
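The "GPT:9289727 != 19775487" messages above are benign on first boot: the backup GPT header belongs on the disk's last LBA, but it was found where the end of a smaller original image used to be, i.e. the image was grown after partitioning. The numbers from the log work out as follows (moving the backup header to the new end of the disk is what tools like parted or sgdisk do; this snippet only illustrates the arithmetic):

```python
SECTOR = 512                      # logical block size reported by virtio-blk above

total_sectors = 19775488          # disk size from the log (10.1 GB)
backup_header_lba = 9289727       # where the backup GPT header was found
expected_lba = total_sectors - 1  # the backup header belongs on the last LBA

# The mismatch means the backup header still sits at the end of the
# original, smaller image the disk was created from.
original_size_bytes = (backup_header_lba + 1) * SECTOR

print(expected_lba)                    # 19775487
print(original_size_bytes / 2**30)     # ~4.43 GiB original image size
```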
Sep 9 00:21:14.245050 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 00:21:14.324707 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 00:21:14.324833 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 00:21:14.325207 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 00:21:14.325485 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 00:21:14.333295 kernel: scsi host0: ahci Sep 9 00:21:14.335281 kernel: scsi host1: ahci Sep 9 00:21:14.337168 kernel: scsi host2: ahci Sep 9 00:21:14.338287 kernel: scsi host3: ahci Sep 9 00:21:14.340290 kernel: scsi host4: ahci Sep 9 00:21:14.341040 kernel: scsi host5: ahci Sep 9 00:21:14.344371 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 9 00:21:14.344452 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 9 00:21:14.344487 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 9 00:21:14.344515 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 9 00:21:14.344545 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 9 00:21:14.344575 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 9 00:21:14.356749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 00:21:14.398599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:21:14.414757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:21:14.427218 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 00:21:14.431620 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Sep 9 00:21:14.436242 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:21:14.499177 disk-uuid[632]: Primary Header is updated. Sep 9 00:21:14.499177 disk-uuid[632]: Secondary Entries is updated. Sep 9 00:21:14.499177 disk-uuid[632]: Secondary Header is updated. Sep 9 00:21:14.522372 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:21:14.529614 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:21:14.663830 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 00:21:14.672673 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 00:21:14.672772 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 00:21:14.672791 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 00:21:14.678029 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 00:21:14.688996 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:21:14.689102 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 00:21:14.689123 kernel: ata3.00: applying bridge limits Sep 9 00:21:14.690331 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 00:21:14.695740 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:21:14.695887 kernel: ata3.00: configured for UDMA/100 Sep 9 00:21:14.701326 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 00:21:14.809477 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 00:21:14.809909 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:21:14.831178 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:21:15.314814 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 00:21:15.319418 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:21:15.330363 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:21:15.330712 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Sep 9 00:21:15.337679 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:21:15.367606 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:21:15.528346 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:21:15.529049 disk-uuid[633]: The operation has completed successfully. Sep 9 00:21:15.599973 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:21:15.600190 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:21:15.666206 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:21:15.706533 sh[663]: Success Sep 9 00:21:15.750487 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:21:15.750555 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:21:15.750571 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 00:21:15.800607 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 9 00:21:15.862553 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:21:15.873503 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:21:15.907903 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 00:21:15.928496 kernel: BTRFS: device fsid c483a4f4-f0a7-42f4-ac8d-111955dab3a7 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (675) Sep 9 00:21:15.928569 kernel: BTRFS info (device dm-0): first mount of filesystem c483a4f4-f0a7-42f4-ac8d-111955dab3a7 Sep 9 00:21:15.928587 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:21:15.963528 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:21:15.963627 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 00:21:15.966753 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
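verity-setup.service above checks the read-only /usr partition against the verity.usrhash root hash passed on the kernel command line. dm-verity does this with a hash tree over fixed-size blocks, so any modified block changes the root hash. A toy, single-level sketch of the idea — real dm-verity salts every hash and builds a multi-level tree with an on-disk superblock, so this is an illustration of the principle, not the Flatcar implementation:

```python
import hashlib

BLOCK = 4096  # dm-verity's default data/hash block size

def verity_root(data: bytes) -> str:
    """Toy single-level verity-style root hash.

    Hash each 4 KiB block, then hash the concatenation of the block
    digests. Real dm-verity salts each hash and uses multiple tree
    levels so every node stays one block wide; this flat version just
    shows that flipping any bit in any block changes the root.
    """
    digests = b"".join(
        hashlib.sha256(data[i:i + BLOCK]).digest()
        for i in range(0, len(data), BLOCK)
    )
    return hashlib.sha256(digests).hexdigest()

image = bytes(3 * BLOCK)  # stand-in /usr image: three zeroed blocks
good = verity_root(image)
tampered = verity_root(image[:5000] + b"\x01" + image[5001:])
print(good != tampered)   # True -- one flipped byte changes the root
```

Because only the small root hash needs to be trusted (here it rides on the signed kernel command line), the whole /usr filesystem can be verified lazily, block by block, as it is read.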
Sep 9 00:21:15.974793 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:21:15.986621 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 00:21:15.991508 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:21:16.018646 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:21:16.383303 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (712) Sep 9 00:21:16.390491 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:21:16.390591 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:21:16.400009 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:21:16.400202 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:21:16.420331 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:21:16.442405 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:21:16.467622 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 00:21:16.501634 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:21:16.517469 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:21:16.843457 systemd-networkd[845]: lo: Link UP Sep 9 00:21:16.843478 systemd-networkd[845]: lo: Gained carrier Sep 9 00:21:16.845701 systemd-networkd[845]: Enumeration completed Sep 9 00:21:16.845884 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:21:16.847007 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 00:21:16.847014 systemd-networkd[845]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:21:16.889356 systemd-networkd[845]: eth0: Link UP Sep 9 00:21:16.889716 systemd[1]: Reached target network.target - Network. Sep 9 00:21:16.893742 systemd-networkd[845]: eth0: Gained carrier Sep 9 00:21:16.893769 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:21:17.107283 systemd-networkd[845]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:21:17.206314 ignition[842]: Ignition 2.21.0 Sep 9 00:21:17.206336 ignition[842]: Stage: fetch-offline Sep 9 00:21:17.206387 ignition[842]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:21:17.206400 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:21:17.206522 ignition[842]: parsed url from cmdline: "" Sep 9 00:21:17.206529 ignition[842]: no config URL provided Sep 9 00:21:17.206538 ignition[842]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:21:17.206552 ignition[842]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:21:17.206592 ignition[842]: op(1): [started] loading QEMU firmware config module Sep 9 00:21:17.206600 ignition[842]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:21:17.256299 ignition[842]: op(1): [finished] loading QEMU firmware config module Sep 9 00:21:17.321624 systemd-resolved[262]: Detected conflict on linux IN A 10.0.0.81 Sep 9 00:21:17.321651 systemd-resolved[262]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. 
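The zz-default.network unit that eth0 matched above is Flatcar's catch-all fallback: the "zz-" prefix sorts it last, so any more specific unit an administrator drops into /etc/systemd/network wins first. A sketch of what such a catch-all DHCP unit looks like — the contents below illustrate the pattern and are not a verbatim copy of the shipped file:

```
# zz-default.network — fallback matched only when nothing earlier claims
# the interface; sorted last by the "zz-" prefix
[Match]
Name=*

[Network]
DHCP=yes
```

This is why the log flags the match as "based on potentially unpredictable interface name": a glob match on Name=* applies regardless of what the interface ends up being called.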
Sep 9 00:21:17.330245 ignition[842]: parsing config with SHA512: d40df8f81aecd430e4f71975e487cfe2610bf788bab8905cb17c6d30f6fa179e9de67cb4f074aa6b5fb11e099c981b77a82e4cb3d6df549074e2f8f3615a8e9c Sep 9 00:21:17.543669 unknown[842]: fetched base config from "system" Sep 9 00:21:17.543700 unknown[842]: fetched user config from "qemu" Sep 9 00:21:17.545164 ignition[842]: fetch-offline: fetch-offline passed Sep 9 00:21:17.545808 ignition[842]: Ignition finished successfully Sep 9 00:21:17.554509 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:21:17.562603 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:21:17.573411 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:21:17.797970 ignition[859]: Ignition 2.21.0 Sep 9 00:21:17.799053 ignition[859]: Stage: kargs Sep 9 00:21:17.799316 ignition[859]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:21:17.799331 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:21:17.874540 ignition[859]: kargs: kargs passed Sep 9 00:21:17.874708 ignition[859]: Ignition finished successfully Sep 9 00:21:17.899801 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:21:17.903518 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 00:21:18.007688 ignition[867]: Ignition 2.21.0 Sep 9 00:21:18.008230 ignition[867]: Stage: disks Sep 9 00:21:18.020921 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:21:18.010424 ignition[867]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:21:18.010447 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:21:18.025896 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
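In the fetch-offline stage above, Ignition found no config URL on the command line, loaded the user config from QEMU's fw_cfg (the qemu_fw_cfg modprobe), and logged the SHA512 of the raw bytes it fetched. A minimal Python sketch of a config of the kind the later "files" stage applies, fingerprinted the same way — the spec version string and the ssh key are placeholders, not values from this boot:

```python
import hashlib
import json

# Minimal Ignition-style config; version and key below are placeholders.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {
                "name": "core",
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@example"],
            }
        ]
    },
}

raw = json.dumps(config).encode()
# Ignition logs the digest of the raw bytes it fetched, in the style of
# the "parsing config with SHA512: <hex>" line above.
print("parsing config with SHA512:", hashlib.sha512(raw).hexdigest())
```

The digest lets a boot's applied configuration be matched byte-for-byte against the config that was provisioned, which is useful when auditing first-boot behavior.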
Sep 9 00:21:18.011698 ignition[867]: disks: disks passed Sep 9 00:21:18.032756 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:21:18.011772 ignition[867]: Ignition finished successfully Sep 9 00:21:18.037470 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:21:18.041324 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:21:18.042875 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:21:18.049765 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:21:18.140520 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 00:21:18.161341 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:21:18.166445 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:21:18.573538 systemd-networkd[845]: eth0: Gained IPv6LL Sep 9 00:21:18.724108 kernel: EXT4-fs (vda9): mounted filesystem 4b59fff7-9272-4156-91f8-37989d927dc6 r/w with ordered data mode. Quota mode: none. Sep 9 00:21:18.729625 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:21:18.730948 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:21:18.737706 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:21:18.743478 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:21:18.751712 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:21:18.751799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:21:18.751845 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Sep 9 00:21:18.788320 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Sep 9 00:21:18.790448 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:21:18.828973 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:21:18.829015 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:21:18.828602 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:21:18.880844 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:21:18.880942 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:21:18.888129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:21:19.356209 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:21:19.390955 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:21:19.406532 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:21:19.416700 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:21:19.820807 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:21:19.831479 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:21:19.839038 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:21:19.991211 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:21:20.002210 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:21:20.059422 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 9 00:21:20.225632 ignition[999]: INFO : Ignition 2.21.0 Sep 9 00:21:20.225632 ignition[999]: INFO : Stage: mount Sep 9 00:21:20.229408 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:21:20.229408 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:21:20.229408 ignition[999]: INFO : mount: mount passed Sep 9 00:21:20.229408 ignition[999]: INFO : Ignition finished successfully Sep 9 00:21:20.243581 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:21:20.257497 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:21:20.302510 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:21:20.342725 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012) Sep 9 00:21:20.346988 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:21:20.347057 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:21:20.362587 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:21:20.362692 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:21:20.365819 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 00:21:20.438060 ignition[1029]: INFO : Ignition 2.21.0 Sep 9 00:21:20.438060 ignition[1029]: INFO : Stage: files Sep 9 00:21:20.441608 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:21:20.441608 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:21:20.441608 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:21:20.448153 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:21:20.448153 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:21:20.454715 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:21:20.456723 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:21:20.459771 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:21:20.457802 unknown[1029]: wrote ssh authorized keys file for user: core Sep 9 00:21:20.466357 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:21:20.469105 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 9 00:21:20.604442 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:21:20.761485 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:21:20.761485 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:21:20.766422 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 
00:21:20.766422 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:21:20.766422 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:21:20.766422 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:21:20.766422 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:21:20.766422 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:21:20.766422 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:21:20.834632 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:21:20.834632 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:21:20.834632 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:21:20.879899 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:21:20.879899 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:21:20.898433 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 9 00:21:21.391701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 00:21:23.897476 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:21:23.897476 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 00:21:23.915397 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:21:23.935641 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:21:23.935641 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 00:21:23.935641 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 9 00:21:23.953832 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:21:23.953832 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:21:23.953832 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 9 00:21:23.953832 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:21:24.075448 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:21:24.091749 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:21:24.091749 ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled 
for "coreos-metadata.service" Sep 9 00:21:24.091749 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:21:24.091749 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:21:24.091749 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:21:24.091749 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:21:24.091749 ignition[1029]: INFO : files: files passed Sep 9 00:21:24.091749 ignition[1029]: INFO : Ignition finished successfully Sep 9 00:21:24.114925 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:21:24.125310 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:21:24.138226 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:21:24.162587 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:21:24.162788 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:21:24.170088 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 00:21:24.179086 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:21:24.179086 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:21:24.186985 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:21:24.190793 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:21:24.197403 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Sep 9 00:21:24.202954 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:21:24.309948 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:21:24.310136 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:21:24.316961 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:21:24.319772 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:21:24.322464 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:21:24.326700 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:21:24.390840 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:21:24.405529 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:21:24.462856 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:21:24.467366 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:21:24.469136 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:21:24.471780 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:21:24.472021 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:21:24.478742 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:21:24.481186 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:21:24.484450 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:21:24.485134 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:21:24.485726 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:21:24.488630 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Sep 9 00:21:24.496168 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:21:24.498899 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:21:24.501835 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:21:24.505486 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:21:24.509073 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:21:24.511515 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:21:24.511758 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:21:24.516812 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:21:24.518372 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:21:24.518920 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:21:24.520658 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:21:24.521160 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:21:24.521413 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:21:24.533102 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:21:24.533384 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:21:24.533884 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:21:24.539777 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:21:24.543994 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:21:24.554675 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:21:24.556464 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:21:24.558397 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 9 00:21:24.558520 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:21:24.561367 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:21:24.561512 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:21:24.563414 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:21:24.563592 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:21:24.565664 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:21:24.565822 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:21:24.576120 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:21:24.582695 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:21:24.583948 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:21:24.584146 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:21:24.586146 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:21:24.586373 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:21:24.689111 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:21:24.714095 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:21:24.761731 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:21:24.769058 ignition[1084]: INFO : Ignition 2.21.0 Sep 9 00:21:24.769058 ignition[1084]: INFO : Stage: umount Sep 9 00:21:24.775089 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:21:24.775089 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:21:24.800921 ignition[1084]: INFO : umount: umount passed Sep 9 00:21:24.800921 ignition[1084]: INFO : Ignition finished successfully Sep 9 00:21:24.782696 systemd[1]: ignition-mount.service: Deactivated successfully. 
Sep 9 00:21:24.802388 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:21:24.820192 systemd[1]: Stopped target network.target - Network. Sep 9 00:21:24.820701 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:21:24.820848 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:21:24.823204 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:21:24.823318 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:21:24.834046 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:21:24.834174 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:21:24.837149 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:21:24.837241 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:21:24.838161 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:21:24.840416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:21:24.865932 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:21:24.867539 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:21:24.879432 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 00:21:24.881499 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:21:24.882462 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:21:24.955579 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 00:21:24.962967 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 00:21:24.966046 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:21:24.966156 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:21:24.970176 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Sep 9 00:21:24.977671 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:21:24.977785 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:21:24.984167 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:21:24.984261 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:21:24.988367 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:21:24.988488 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:21:24.990027 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:21:24.990089 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:21:24.999004 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:21:25.005591 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:21:25.005709 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:21:25.028092 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:21:25.028371 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:21:25.032382 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:21:25.032565 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:21:25.045794 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:21:25.048687 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:21:25.062161 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:21:25.062297 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:21:25.068350 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 9 00:21:25.068444 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:21:25.069957 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:21:25.070047 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:21:25.101241 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:21:25.101443 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:21:25.106645 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:21:25.106764 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:21:25.111914 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:21:25.112048 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:21:25.118067 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:21:25.122818 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 00:21:25.122960 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:21:25.131657 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:21:25.131838 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:21:25.150575 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 00:21:25.150678 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:21:25.153191 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:21:25.153307 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:21:25.155317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:21:25.155396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 00:21:25.204254 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 00:21:25.204382 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 9 00:21:25.204463 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 00:21:25.204541 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:21:25.205363 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:21:25.205523 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:21:25.223215 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:21:25.231305 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:21:25.289697 systemd[1]: Switching root. Sep 9 00:21:25.336027 systemd-journald[220]: Journal stopped Sep 9 00:21:28.308936 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Sep 9 00:21:28.309029 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:21:28.309053 kernel: SELinux: policy capability open_perms=1 Sep 9 00:21:28.309072 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:21:28.309106 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:21:28.309133 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:21:28.309159 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:21:28.309177 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:21:28.309213 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:21:28.309232 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 00:21:28.309250 kernel: audit: type=1403 audit(1757377286.182:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:21:28.309295 systemd[1]: Successfully loaded SELinux policy in 99.192ms. 
Sep 9 00:21:28.309331 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.486ms. Sep 9 00:21:28.309351 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:21:28.309368 systemd[1]: Detected virtualization kvm. Sep 9 00:21:28.309385 systemd[1]: Detected architecture x86-64. Sep 9 00:21:28.309402 systemd[1]: Detected first boot. Sep 9 00:21:28.309431 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:21:28.309452 zram_generator::config[1133]: No configuration found. Sep 9 00:21:28.309472 kernel: Guest personality initialized and is inactive Sep 9 00:21:28.309490 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 00:21:28.309507 kernel: Initialized host personality Sep 9 00:21:28.309524 kernel: NET: Registered PF_VSOCK protocol family Sep 9 00:21:28.309542 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:21:28.309562 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 00:21:28.309603 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:21:28.309624 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:21:28.309643 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:21:28.309662 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:21:28.309681 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:21:28.309699 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:21:28.309718 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Sep 9 00:21:28.309739 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:21:28.309759 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:21:28.309789 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:21:28.309808 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:21:28.309828 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:21:28.309847 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:21:28.309866 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:21:28.309891 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:21:28.309911 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:21:28.309949 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:21:28.309970 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:21:28.309989 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:21:28.310007 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:21:28.310027 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:21:28.310062 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:21:28.310081 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:21:28.310100 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:21:28.310119 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 9 00:21:28.310149 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:21:28.310170 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:21:28.310189 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:21:28.310208 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:21:28.310238 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:21:28.310258 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 00:21:28.310299 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:21:28.310319 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:21:28.310339 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:21:28.310357 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:21:28.310380 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:21:28.310399 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:21:28.310416 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:21:28.310435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:21:28.310454 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:21:28.310472 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:21:28.310491 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:21:28.310514 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:21:28.310545 systemd[1]: Reached target machines.target - Containers. 
Sep 9 00:21:28.310564 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:21:28.310582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:21:28.310600 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:21:28.310619 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:21:28.310638 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:21:28.310656 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:21:28.310675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:21:28.310693 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:21:28.310722 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:21:28.310742 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:21:28.310762 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:21:28.310781 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:21:28.310800 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:21:28.310818 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:21:28.310836 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:21:28.310854 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:21:28.310885 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 9 00:21:28.310906 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:21:28.310935 kernel: fuse: init (API version 7.41) Sep 9 00:21:28.310955 kernel: loop: module loaded Sep 9 00:21:28.310974 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:21:28.310993 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 00:21:28.311012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:21:28.311037 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:21:28.311056 systemd[1]: Stopped verity-setup.service. Sep 9 00:21:28.311076 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:21:28.311148 systemd-journald[1211]: Collecting audit messages is disabled. Sep 9 00:21:28.311193 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:21:28.311214 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:21:28.311233 systemd-journald[1211]: Journal started Sep 9 00:21:28.311286 systemd-journald[1211]: Runtime Journal (/run/log/journal/aaa115af37764952be15137433f96ec1) is 6M, max 48.6M, 42.5M free. Sep 9 00:21:28.321633 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:21:28.321728 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:21:28.321774 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:21:28.321802 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:21:27.608198 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:21:27.637981 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:21:27.639708 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 9 00:21:27.641056 systemd[1]: systemd-journald.service: Consumed 1.022s CPU time. Sep 9 00:21:28.326588 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:21:28.328527 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:21:28.331369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:21:28.333191 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:21:28.333653 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:21:28.335497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:21:28.335954 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:21:28.338914 kernel: ACPI: bus type drm_connector registered Sep 9 00:21:28.338645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:21:28.338974 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:21:28.341371 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:21:28.341733 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:21:28.344355 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:21:28.344661 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:21:28.348021 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:21:28.348482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:21:28.350470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:21:28.355037 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:21:28.357111 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:21:28.377763 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Sep 9 00:21:28.391280 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:21:28.397095 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:21:28.405121 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:21:28.416497 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:21:28.416571 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:21:28.420638 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 00:21:28.440221 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:21:28.452355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:21:28.464949 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:21:28.470800 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:21:28.475645 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:21:28.490154 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:21:28.497594 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:21:28.501820 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:21:28.511180 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:21:28.520097 systemd-journald[1211]: Time spent on flushing to /var/log/journal/aaa115af37764952be15137433f96ec1 is 25.723ms for 988 entries. 
Sep 9 00:21:28.520097 systemd-journald[1211]: System Journal (/var/log/journal/aaa115af37764952be15137433f96ec1) is 8M, max 195.6M, 187.6M free. Sep 9 00:21:28.571454 systemd-journald[1211]: Received client request to flush runtime journal. Sep 9 00:21:28.527096 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:21:28.546563 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:21:28.550874 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:21:28.552624 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:21:28.558662 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:21:28.583094 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:21:28.598297 kernel: loop0: detected capacity change from 0 to 113872 Sep 9 00:21:28.599796 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:21:28.615493 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 00:21:28.621148 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:21:28.660304 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:21:28.672525 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:21:28.676942 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 00:21:28.684341 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Sep 9 00:21:28.684365 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Sep 9 00:21:28.702983 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 9 00:21:28.705810 kernel: loop1: detected capacity change from 0 to 146240 Sep 9 00:21:28.711026 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:21:28.766460 kernel: loop2: detected capacity change from 0 to 229808 Sep 9 00:21:28.814979 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:21:28.820574 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:21:28.850318 kernel: loop3: detected capacity change from 0 to 113872 Sep 9 00:21:28.884096 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Sep 9 00:21:28.884680 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Sep 9 00:21:28.894382 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:21:28.931308 kernel: loop4: detected capacity change from 0 to 146240 Sep 9 00:21:28.954492 kernel: loop5: detected capacity change from 0 to 229808 Sep 9 00:21:28.994785 (sd-merge)[1276]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:21:28.995814 (sd-merge)[1276]: Merged extensions into '/usr'. Sep 9 00:21:29.065512 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:21:29.065538 systemd[1]: Reloading... Sep 9 00:21:29.178301 zram_generator::config[1300]: No configuration found. Sep 9 00:21:29.433017 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:21:29.616195 systemd[1]: Reloading finished in 549 ms. Sep 9 00:21:29.682410 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:21:29.700067 systemd[1]: Starting ensure-sysext.service... 
Sep 9 00:21:29.704785 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:21:29.735561 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:21:29.735590 systemd[1]: Reloading... Sep 9 00:21:29.863669 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 00:21:29.863725 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 00:21:29.864148 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:21:29.864521 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:21:29.865717 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:21:29.866100 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Sep 9 00:21:29.866201 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Sep 9 00:21:29.874825 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:21:29.874847 systemd-tmpfiles[1341]: Skipping /boot Sep 9 00:21:29.973316 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:21:29.973335 systemd-tmpfiles[1341]: Skipping /boot Sep 9 00:21:30.015397 zram_generator::config[1371]: No configuration found. Sep 9 00:21:30.296075 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:21:30.368672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:21:30.520942 systemd[1]: Reloading finished in 784 ms. 
Sep 9 00:21:30.549576 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 00:21:30.554002 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 00:21:30.569327 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:21:30.585250 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 00:21:30.601210 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 00:21:30.612112 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 00:21:30.619429 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:21:30.626869 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:21:30.635176 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 00:21:30.648161 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:21:30.648448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:21:30.657296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:21:30.885635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:21:30.895559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:21:30.897298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:21:30.897466 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:21:30.920404 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 00:21:30.924045 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:21:30.930084 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 00:21:30.936599 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 00:21:30.941046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:21:30.956242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:21:30.963620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:21:30.964177 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:21:30.967863 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:21:30.968445 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:21:30.995838 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:21:30.996335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:21:30.999703 augenrules[1442]: No rules
Sep 9 00:21:31.001718 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:21:31.016627 systemd-udevd[1413]: Using default interface naming scheme 'v255'.
Sep 9 00:21:31.025118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:21:31.031816 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:21:31.035370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:21:31.035729 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:21:31.038695 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 00:21:31.043389 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:21:31.043725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:21:31.051067 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:21:31.053359 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 00:21:31.061056 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:21:31.062685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:21:31.065559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:21:31.066967 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:21:31.071949 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:21:31.072327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:21:31.079639 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 00:21:31.088606 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 00:21:31.107857 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 00:21:31.113835 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:21:31.117381 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 00:21:31.118900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:21:31.158668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:21:31.165919 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:21:31.183901 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:21:31.190601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:21:31.192670 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:21:31.195633 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:21:31.195866 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:21:31.196040 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:21:31.220528 augenrules[1461]: /sbin/augenrules: No change
Sep 9 00:21:31.223330 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:21:31.227314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:21:31.229173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:21:31.246427 systemd[1]: Finished ensure-sysext.service.
Sep 9 00:21:31.262836 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:21:31.264299 augenrules[1512]: No rules
Sep 9 00:21:31.267259 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:21:31.270018 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:21:31.270505 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 00:21:31.274697 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:21:31.275078 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:21:31.278286 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:21:31.279528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:21:31.301534 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:21:31.303938 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:21:31.304048 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:21:31.310225 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 00:21:31.481198 systemd-resolved[1411]: Positive Trust Anchors:
Sep 9 00:21:31.481690 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:21:31.481804 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:21:31.489544 systemd-resolved[1411]: Defaulting to hostname 'linux'.
Sep 9 00:21:31.493012 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:21:31.496188 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:21:31.755529 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 00:21:31.810005 systemd-networkd[1522]: lo: Link UP
Sep 9 00:21:31.810499 systemd-networkd[1522]: lo: Gained carrier
Sep 9 00:21:31.814329 systemd-networkd[1522]: Enumeration completed
Sep 9 00:21:31.814895 systemd-networkd[1522]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:21:31.814909 systemd-networkd[1522]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:21:31.816459 systemd-networkd[1522]: eth0: Link UP
Sep 9 00:21:31.819306 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 9 00:21:31.819371 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 00:21:31.816667 systemd-networkd[1522]: eth0: Gained carrier
Sep 9 00:21:31.816694 systemd-networkd[1522]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:21:31.816925 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 00:21:31.823121 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:21:31.838413 kernel: ACPI: button: Power Button [PWRF]
Sep 9 00:21:31.837721 systemd[1]: Reached target network.target - Network.
Sep 9 00:21:31.840390 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:21:31.845405 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 00:21:31.849249 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 00:21:31.855320 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 9 00:21:31.860169 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 00:21:31.864863 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 00:21:31.864925 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:21:31.866212 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 00:21:31.870001 systemd-networkd[1522]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:21:31.870990 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection.
Sep 9 00:21:31.872980 systemd-timesyncd[1523]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 00:21:31.873054 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 00:21:31.873063 systemd-timesyncd[1523]: Initial clock synchronization to Tue 2025-09-09 00:21:31.848556 UTC.
Sep 9 00:21:31.874696 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 00:21:31.876300 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:21:31.882246 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 00:21:31.895663 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 00:21:31.902187 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 00:21:31.906508 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 00:21:31.908688 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 00:21:31.918473 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 9 00:21:31.918893 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 9 00:21:31.918094 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 00:21:31.920870 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 00:21:31.927633 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 00:21:31.936853 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 00:21:31.941441 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 00:21:31.963952 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:21:31.965996 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:21:31.967240 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:21:31.968542 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:21:31.968587 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:21:31.976389 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 00:21:32.158358 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 00:21:32.164678 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 00:21:32.176543 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 00:21:32.235247 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 00:21:32.236791 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 00:21:32.244617 jq[1558]: false
Sep 9 00:21:32.275828 extend-filesystems[1559]: Found /dev/vda6
Sep 9 00:21:32.288104 extend-filesystems[1559]: Found /dev/vda9
Sep 9 00:21:32.289698 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 9 00:21:32.294705 extend-filesystems[1559]: Checking size of /dev/vda9
Sep 9 00:21:32.297545 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 00:21:32.335442 extend-filesystems[1559]: Resized partition /dev/vda9
Sep 9 00:21:32.337630 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 00:21:32.344066 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 00:21:32.352009 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing passwd entry cache
Sep 9 00:21:32.352077 oslogin_cache_refresh[1560]: Refreshing passwd entry cache
Sep 9 00:21:32.353667 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 00:21:32.357784 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 00:21:32.367042 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting users, quitting
Sep 9 00:21:32.367042 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 00:21:32.367042 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing group entry cache
Sep 9 00:21:32.366670 oslogin_cache_refresh[1560]: Failure getting users, quitting
Sep 9 00:21:32.366702 oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 00:21:32.366783 oslogin_cache_refresh[1560]: Refreshing group entry cache
Sep 9 00:21:32.369009 extend-filesystems[1575]: resize2fs 1.47.2 (1-Jan-2025)
Sep 9 00:21:32.381499 oslogin_cache_refresh[1560]: Failure getting groups, quitting
Sep 9 00:21:32.383836 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting groups, quitting
Sep 9 00:21:32.383836 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 00:21:32.381524 oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 00:21:32.405397 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 00:21:32.386847 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 00:21:32.393038 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 00:21:32.393907 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 00:21:32.395071 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 00:21:32.400540 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 00:21:32.409742 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 00:21:32.421757 kernel: kvm_amd: TSC scaling supported
Sep 9 00:21:32.421905 kernel: kvm_amd: Nested Virtualization enabled
Sep 9 00:21:32.421933 kernel: kvm_amd: Nested Paging enabled
Sep 9 00:21:32.421994 kernel: kvm_amd: LBR virtualization supported
Sep 9 00:21:32.422019 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 9 00:21:32.421715 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 00:21:32.427437 kernel: kvm_amd: Virtual GIF supported
Sep 9 00:21:32.429769 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 00:21:32.430695 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 00:21:32.431218 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 9 00:21:32.431651 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 9 00:21:32.439150 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 00:21:32.439597 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 00:21:32.444872 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 00:21:32.453098 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 00:21:32.453758 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 00:21:32.479349 update_engine[1578]: I20250909 00:21:32.477242 1578 main.cc:92] Flatcar Update Engine starting
Sep 9 00:21:32.519935 jq[1580]: true
Sep 9 00:21:32.546029 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 00:21:32.546162 jq[1603]: true
Sep 9 00:21:32.560178 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 00:21:32.569713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:21:32.581157 tar[1589]: linux-amd64/LICENSE
Sep 9 00:21:32.581157 tar[1589]: linux-amd64/helm
Sep 9 00:21:32.582868 extend-filesystems[1575]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 00:21:32.582868 extend-filesystems[1575]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 00:21:32.582868 extend-filesystems[1575]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 00:21:32.595490 extend-filesystems[1559]: Resized filesystem in /dev/vda9
Sep 9 00:21:32.587636 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 00:21:32.588932 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 00:21:32.601878 systemd-logind[1576]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 9 00:21:32.601921 systemd-logind[1576]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 00:21:32.605242 systemd-logind[1576]: New seat seat0.
Sep 9 00:21:32.609786 dbus-daemon[1553]: [system] SELinux support is enabled
Sep 9 00:21:32.612837 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 00:21:32.616198 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 00:21:32.622550 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 00:21:32.622603 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 00:21:32.624373 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 00:21:32.624403 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 00:21:32.805420 update_engine[1578]: I20250909 00:21:32.653610 1578 update_check_scheduler.cc:74] Next update check in 4m8s
Sep 9 00:21:32.817356 kernel: EDAC MC: Ver: 3.0.0
Sep 9 00:21:32.827910 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 00:21:32.833435 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 00:21:32.871329 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 00:21:32.948474 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 00:21:33.116681 locksmithd[1629]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 00:21:33.149731 bash[1626]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 00:21:33.150847 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 00:21:33.175106 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 00:21:33.189985 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:35988.service - OpenSSH per-connection server daemon (10.0.0.1:35988).
Sep 9 00:21:33.201215 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 00:21:33.212522 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 00:21:33.311014 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 00:21:33.311508 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 00:21:33.318601 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 00:21:33.398991 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 00:21:33.450094 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 00:21:33.479490 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 00:21:33.480053 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 00:21:33.567894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:21:33.640838 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 35988 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:21:33.660021 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:33.680382 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 00:21:33.691416 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 00:21:33.718629 systemd-logind[1576]: New session 1 of user core.
Sep 9 00:21:33.785023 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 00:21:33.797066 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 00:21:33.820965 (systemd)[1665]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:21:33.837587 systemd-logind[1576]: New session c1 of user core.
Sep 9 00:21:33.841987 containerd[1591]: time="2025-09-09T00:21:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 00:21:33.842982 containerd[1591]: time="2025-09-09T00:21:33.842939252Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 9 00:21:33.869478 systemd-networkd[1522]: eth0: Gained IPv6LL
Sep 9 00:21:33.886868 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 00:21:33.890032 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.896610702Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.276µs"
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.896661405Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.896682574Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.896949515Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.896965362Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.897005930Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.897091679Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.897105856Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.897773998Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.897789045Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.897800571Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:21:33.898305 containerd[1591]: time="2025-09-09T00:21:33.897810015Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 00:21:33.899159 containerd[1591]: time="2025-09-09T00:21:33.897923666Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 00:21:33.899344 containerd[1591]: time="2025-09-09T00:21:33.899248486Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:21:33.899463 containerd[1591]: time="2025-09-09T00:21:33.899437181Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:21:33.899715 containerd[1591]: time="2025-09-09T00:21:33.899690026Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 00:21:33.899851 containerd[1591]: time="2025-09-09T00:21:33.899828079Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 00:21:33.900281 containerd[1591]: time="2025-09-09T00:21:33.900235213Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 00:21:33.900455 containerd[1591]: time="2025-09-09T00:21:33.900434203Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:21:33.906687 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 00:21:33.912813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:21:33.925673 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 00:21:33.957428 containerd[1591]: time="2025-09-09T00:21:33.957353103Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 00:21:33.958230 containerd[1591]: time="2025-09-09T00:21:33.958162489Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 00:21:33.958409 containerd[1591]: time="2025-09-09T00:21:33.958338719Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 00:21:33.958409 containerd[1591]: time="2025-09-09T00:21:33.958372515Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 00:21:33.958526 containerd[1591]: time="2025-09-09T00:21:33.958504324Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 00:21:33.958675 containerd[1591]: time="2025-09-09T00:21:33.958593885Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 00:21:33.958795 containerd[1591]: time="2025-09-09T00:21:33.958638695Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 00:21:33.958962 containerd[1591]: time="2025-09-09T00:21:33.958888227Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 00:21:33.959027 containerd[1591]: time="2025-09-09T00:21:33.958915461Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 00:21:33.959123 containerd[1591]: time="2025-09-09T00:21:33.959101094Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 00:21:33.959213 containerd[1591]: time="2025-09-09T00:21:33.959192566Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 00:21:33.959388 containerd[1591]: time="2025-09-09T00:21:33.959330898Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 00:21:33.959884 containerd[1591]: time="2025-09-09T00:21:33.959772878Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 00:21:33.959884 containerd[1591]: time="2025-09-09T00:21:33.959835577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 00:21:33.960017 containerd[1591]: time="2025-09-09T00:21:33.959995509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 00:21:33.960172 containerd[1591]: time="2025-09-09T00:21:33.960083589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 00:21:33.960172 containerd[1591]: time="2025-09-09T00:21:33.960102597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 00:21:33.960172 containerd[1591]: time="2025-09-09T00:21:33.960116774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 00:21:33.960338 containerd[1591]: time="2025-09-09T00:21:33.960313933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 00:21:33.960495 containerd[1591]: time="2025-09-09T00:21:33.960431737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 00:21:33.960495 containerd[1591]: time="2025-09-09T00:21:33.960465942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 00:21:33.960667 containerd[1591]: time="2025-09-09T00:21:33.960600023Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 00:21:33.960667 containerd[1591]: time="2025-09-09T00:21:33.960628206Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 00:21:33.964316 containerd[1591]: time="2025-09-09T00:21:33.960855729Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 00:21:33.970878 containerd[1591]: time="2025-09-09T00:21:33.970776200Z" level=info msg="Start snapshots syncer"
Sep 9 00:21:33.971148 containerd[1591]: time="2025-09-09T00:21:33.971114343Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 00:21:33.971690 containerd[1591]: time="2025-09-09T00:21:33.971628836Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 00:21:33.972197 containerd[1591]: time="2025-09-09T00:21:33.971985348Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 00:21:33.972506 containerd[1591]: time="2025-09-09T00:21:33.972470497Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 00:21:33.973397 containerd[1591]: time="2025-09-09T00:21:33.973235814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 00:21:33.973507 containerd[1591]: time="2025-09-09T00:21:33.973373587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 00:21:33.973747 containerd[1591]: time="2025-09-09T00:21:33.973680756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.973713330Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.979117226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.979186678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.979223054Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.979334354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.979365559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.979396333Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.981013464Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.981083486Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.981101675Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.981117321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.981145165Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.981197378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 00:21:33.984945 containerd[1591]: time="2025-09-09T00:21:33.981229772Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 00:21:33.985512 containerd[1591]: time="2025-09-09T00:21:33.981285497Z" level=info msg="runtime interface created"
Sep 9 00:21:33.985512 containerd[1591]: time="2025-09-09T00:21:33.981296453Z" level=info msg="created NRI interface"
Sep 9 00:21:33.985512 containerd[1591]: time="2025-09-09T00:21:33.981312790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 00:21:33.985512 containerd[1591]: time="2025-09-09T00:21:33.981345335Z" level=info msg="Connect containerd service"
Sep 9 00:21:33.985512 containerd[1591]: time="2025-09-09T00:21:33.981481356Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 00:21:33.988654 containerd[1591]: time="2025-09-09T00:21:33.988575208Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:21:34.121508 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 00:21:34.130841 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 00:21:34.131208 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 00:21:34.142960 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 00:21:34.348936 systemd[1665]: Queued start job for default target default.target.
Sep 9 00:21:34.374590 systemd[1665]: Created slice app.slice - User Application Slice.
Sep 9 00:21:34.374841 systemd[1665]: Reached target paths.target - Paths.
Sep 9 00:21:34.374910 systemd[1665]: Reached target timers.target - Timers.
Sep 9 00:21:34.381533 systemd[1665]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 00:21:34.464559 systemd[1665]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 00:21:34.464821 systemd[1665]: Reached target sockets.target - Sockets.
Sep 9 00:21:34.465124 systemd[1665]: Reached target basic.target - Basic System.
Sep 9 00:21:34.465306 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 00:21:34.479070 systemd[1665]: Reached target default.target - Main User Target.
Sep 9 00:21:34.479128 systemd[1665]: Startup finished in 616ms.
Sep 9 00:21:34.572783 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 00:21:34.746816 containerd[1591]: time="2025-09-09T00:21:34.746325155Z" level=info msg="Start subscribing containerd event"
Sep 9 00:21:34.746816 containerd[1591]: time="2025-09-09T00:21:34.746500880Z" level=info msg="Start recovering state"
Sep 9 00:21:34.746816 containerd[1591]: time="2025-09-09T00:21:34.746793959Z" level=info msg="Start event monitor"
Sep 9 00:21:34.747056 containerd[1591]: time="2025-09-09T00:21:34.746884468Z" level=info msg="Start cni network conf syncer for default"
Sep 9 00:21:34.747056 containerd[1591]: time="2025-09-09T00:21:34.746912663Z" level=info msg="Start streaming server"
Sep 9 00:21:34.747056 containerd[1591]: time="2025-09-09T00:21:34.746980260Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 00:21:34.747056 containerd[1591]: time="2025-09-09T00:21:34.747004023Z" level=info msg="runtime interface starting up..."
Sep 9 00:21:34.747172 containerd[1591]: time="2025-09-09T00:21:34.747155695Z" level=info msg="starting plugins..."
Sep 9 00:21:34.747202 containerd[1591]: time="2025-09-09T00:21:34.747190023Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 00:21:34.747948 containerd[1591]: time="2025-09-09T00:21:34.747701271Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 00:21:34.747948 containerd[1591]: time="2025-09-09T00:21:34.747813381Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 00:21:34.747948 containerd[1591]: time="2025-09-09T00:21:34.747902520Z" level=info msg="containerd successfully booted in 0.909233s"
Sep 9 00:21:34.775146 tar[1589]: linux-amd64/README.md
Sep 9 00:21:35.036696 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 00:21:35.062201 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:35998.service - OpenSSH per-connection server daemon (10.0.0.1:35998).
Sep 9 00:21:35.162564 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 00:21:35.200897 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 35998 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:21:35.203437 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:35.212086 systemd-logind[1576]: New session 2 of user core.
Sep 9 00:21:35.229652 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 00:21:35.325704 sshd[1716]: Connection closed by 10.0.0.1 port 35998
Sep 9 00:21:35.326204 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
Sep 9 00:21:35.352523 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:35998.service: Deactivated successfully.
Sep 9 00:21:35.367183 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 00:21:35.370889 systemd-logind[1576]: Session 2 logged out. Waiting for processes to exit.
Sep 9 00:21:35.380321 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:36008.service - OpenSSH per-connection server daemon (10.0.0.1:36008).
Sep 9 00:21:35.388918 systemd-logind[1576]: Removed session 2.
Sep 9 00:21:35.488892 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 36008 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:21:35.493432 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:35.514063 systemd-logind[1576]: New session 3 of user core.
Sep 9 00:21:35.530683 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 00:21:35.647812 sshd[1724]: Connection closed by 10.0.0.1 port 36008
Sep 9 00:21:35.641837 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
Sep 9 00:21:35.673661 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:36008.service: Deactivated successfully.
Sep 9 00:21:35.684260 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 00:21:35.693893 systemd-logind[1576]: Session 3 logged out. Waiting for processes to exit.
Sep 9 00:21:35.699492 systemd-logind[1576]: Removed session 3.
Sep 9 00:21:37.914986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:21:37.931716 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 00:21:37.938023 systemd[1]: Startup finished in 4.870s (kernel) + 14.431s (initrd) + 11.845s (userspace) = 31.147s.
Sep 9 00:21:37.939168 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:21:40.407126 kubelet[1734]: E0909 00:21:40.407017 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:21:40.419230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:21:40.419571 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:21:40.424373 systemd[1]: kubelet.service: Consumed 3.774s CPU time, 269.3M memory peak.
Sep 9 00:21:45.700715 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:53912.service - OpenSSH per-connection server daemon (10.0.0.1:53912).
Sep 9 00:21:45.877609 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 53912 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:21:45.882905 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:45.899064 systemd-logind[1576]: New session 4 of user core.
Sep 9 00:21:45.909641 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 00:21:46.005877 sshd[1749]: Connection closed by 10.0.0.1 port 53912
Sep 9 00:21:46.003632 sshd-session[1747]: pam_unix(sshd:session): session closed for user core
Sep 9 00:21:46.033309 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:53912.service: Deactivated successfully.
Sep 9 00:21:46.035915 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 00:21:46.040963 systemd-logind[1576]: Session 4 logged out. Waiting for processes to exit.
Sep 9 00:21:46.053298 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:53924.service - OpenSSH per-connection server daemon (10.0.0.1:53924).
Sep 9 00:21:46.054478 systemd-logind[1576]: Removed session 4.
Sep 9 00:21:46.155900 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 53924 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:21:46.157408 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:46.181029 systemd-logind[1576]: New session 5 of user core.
Sep 9 00:21:46.202071 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 00:21:46.275588 sshd[1757]: Connection closed by 10.0.0.1 port 53924
Sep 9 00:21:46.277127 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Sep 9 00:21:46.306153 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:53924.service: Deactivated successfully.
Sep 9 00:21:46.314244 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 00:21:46.326775 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:53938.service - OpenSSH per-connection server daemon (10.0.0.1:53938).
Sep 9 00:21:46.328912 systemd-logind[1576]: Session 5 logged out. Waiting for processes to exit.
Sep 9 00:21:46.330728 systemd-logind[1576]: Removed session 5.
Sep 9 00:21:46.471330 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 53938 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:21:46.483915 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:46.515779 systemd-logind[1576]: New session 6 of user core.
Sep 9 00:21:46.543016 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 00:21:46.655879 sshd[1765]: Connection closed by 10.0.0.1 port 53938
Sep 9 00:21:46.655582 sshd-session[1763]: pam_unix(sshd:session): session closed for user core
Sep 9 00:21:46.691204 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:53938.service: Deactivated successfully.
Sep 9 00:21:46.698844 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 00:21:46.702679 systemd-logind[1576]: Session 6 logged out. Waiting for processes to exit.
Sep 9 00:21:46.719411 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:53948.service - OpenSSH per-connection server daemon (10.0.0.1:53948).
Sep 9 00:21:46.721331 systemd-logind[1576]: Removed session 6.
Sep 9 00:21:46.823246 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 53948 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:21:46.839163 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:46.889369 systemd-logind[1576]: New session 7 of user core.
Sep 9 00:21:46.900666 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 00:21:47.026981 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 00:21:47.030958 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:21:47.071971 sudo[1774]: pam_unix(sudo:session): session closed for user root
Sep 9 00:21:47.081540 sshd[1773]: Connection closed by 10.0.0.1 port 53948
Sep 9 00:21:47.076674 sshd-session[1771]: pam_unix(sshd:session): session closed for user core
Sep 9 00:21:47.106117 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:53948.service: Deactivated successfully.
Sep 9 00:21:47.113844 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 00:21:47.117575 systemd-logind[1576]: Session 7 logged out. Waiting for processes to exit.
Sep 9 00:21:47.132501 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:53964.service - OpenSSH per-connection server daemon (10.0.0.1:53964).
Sep 9 00:21:47.145180 systemd-logind[1576]: Removed session 7.
Sep 9 00:21:47.237367 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 53964 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:21:47.240488 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:47.265985 systemd-logind[1576]: New session 8 of user core.
Sep 9 00:21:47.279196 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 00:21:47.360060 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 00:21:47.362679 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:21:47.398138 sudo[1784]: pam_unix(sudo:session): session closed for user root
Sep 9 00:21:47.413582 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 00:21:47.414817 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:21:47.466619 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 00:21:47.633570 augenrules[1806]: No rules
Sep 9 00:21:47.651131 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:21:47.655981 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 00:21:47.659214 sudo[1783]: pam_unix(sudo:session): session closed for user root
Sep 9 00:21:47.667615 sshd[1782]: Connection closed by 10.0.0.1 port 53964
Sep 9 00:21:47.666413 sshd-session[1780]: pam_unix(sshd:session): session closed for user core
Sep 9 00:21:47.698306 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:53964.service: Deactivated successfully.
Sep 9 00:21:47.708081 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 00:21:47.716144 systemd-logind[1576]: Session 8 logged out. Waiting for processes to exit.
Sep 9 00:21:47.724180 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:53980.service - OpenSSH per-connection server daemon (10.0.0.1:53980).
Sep 9 00:21:47.725288 systemd-logind[1576]: Removed session 8.
Sep 9 00:21:47.826406 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 53980 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:21:47.827884 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:47.866441 systemd-logind[1576]: New session 9 of user core.
Sep 9 00:21:47.887937 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 00:21:47.965006 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 00:21:47.968313 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:21:50.349790 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 00:21:50.377449 (dockerd)[1840]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 00:21:50.571535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:21:50.580497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:21:51.423699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:21:51.438758 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:21:51.749052 kubelet[1853]: E0909 00:21:51.745690 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:21:51.846433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:21:51.846710 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:21:51.849323 systemd[1]: kubelet.service: Consumed 657ms CPU time, 111.5M memory peak.
Sep 9 00:21:52.025869 dockerd[1840]: time="2025-09-09T00:21:52.023685298Z" level=info msg="Starting up"
Sep 9 00:21:52.028659 dockerd[1840]: time="2025-09-09T00:21:52.027883061Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 00:21:52.603807 systemd[1]: var-lib-docker-metacopy\x2dcheck2803264152-merged.mount: Deactivated successfully.
Sep 9 00:21:52.689167 dockerd[1840]: time="2025-09-09T00:21:52.688538021Z" level=info msg="Loading containers: start."
Sep 9 00:21:52.732032 kernel: Initializing XFRM netlink socket
Sep 9 00:21:53.631970 systemd-networkd[1522]: docker0: Link UP
Sep 9 00:21:53.647437 dockerd[1840]: time="2025-09-09T00:21:53.647311624Z" level=info msg="Loading containers: done."
Sep 9 00:21:53.679578 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck937995111-merged.mount: Deactivated successfully.
Sep 9 00:21:53.695453 dockerd[1840]: time="2025-09-09T00:21:53.692925739Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 00:21:53.695453 dockerd[1840]: time="2025-09-09T00:21:53.693982187Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 9 00:21:53.697929 dockerd[1840]: time="2025-09-09T00:21:53.696749736Z" level=info msg="Initializing buildkit"
Sep 9 00:21:53.793733 dockerd[1840]: time="2025-09-09T00:21:53.793654443Z" level=info msg="Completed buildkit initialization"
Sep 9 00:21:53.811301 dockerd[1840]: time="2025-09-09T00:21:53.811118832Z" level=info msg="Daemon has completed initialization"
Sep 9 00:21:53.815081 dockerd[1840]: time="2025-09-09T00:21:53.811566567Z" level=info msg="API listen on /run/docker.sock"
Sep 9 00:21:53.814038 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 00:21:56.846974 containerd[1591]: time="2025-09-09T00:21:56.846876219Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 9 00:21:58.126761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850381564.mount: Deactivated successfully.
Sep 9 00:22:02.071713 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 00:22:02.074939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:22:02.543702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:22:02.548244 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:22:02.773368 kubelet[2132]: E0909 00:22:02.773282 2132 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:22:02.779593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:22:02.779836 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:22:02.780420 systemd[1]: kubelet.service: Consumed 457ms CPU time, 108.9M memory peak.
Sep 9 00:22:03.804123 containerd[1591]: time="2025-09-09T00:22:03.803965527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:03.834473 containerd[1591]: time="2025-09-09T00:22:03.834369072Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664"
Sep 9 00:22:03.867605 containerd[1591]: time="2025-09-09T00:22:03.867489676Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:03.911296 containerd[1591]: time="2025-09-09T00:22:03.911206423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:03.912516 containerd[1591]: time="2025-09-09T00:22:03.912383209Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 7.065422317s"
Sep 9 00:22:03.912516 containerd[1591]: time="2025-09-09T00:22:03.912433744Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\""
Sep 9 00:22:03.913865 containerd[1591]: time="2025-09-09T00:22:03.913829656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 9 00:22:08.811200 containerd[1591]: time="2025-09-09T00:22:08.811097913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:08.884777 containerd[1591]: time="2025-09-09T00:22:08.884641020Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066"
Sep 9 00:22:08.927434 containerd[1591]: time="2025-09-09T00:22:08.927359575Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:08.959528 containerd[1591]: time="2025-09-09T00:22:08.959401421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:08.960707 containerd[1591]: time="2025-09-09T00:22:08.960641904Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 5.046777189s"
Sep 9 00:22:08.960707 containerd[1591]: time="2025-09-09T00:22:08.960683956Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\""
Sep 9 00:22:08.961329 containerd[1591]: time="2025-09-09T00:22:08.961304718Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 9 00:22:12.821749 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 9 00:22:12.824320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:22:13.152971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:22:13.159203 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:22:13.255169 kubelet[2156]: E0909 00:22:13.255050 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:22:13.261176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:22:13.261465 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:22:13.261981 systemd[1]: kubelet.service: Consumed 370ms CPU time, 108.9M memory peak.
Sep 9 00:22:15.451339 containerd[1591]: time="2025-09-09T00:22:15.451228060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:15.453849 containerd[1591]: time="2025-09-09T00:22:15.453688162Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911"
Sep 9 00:22:15.458018 containerd[1591]: time="2025-09-09T00:22:15.457916704Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:15.463418 containerd[1591]: time="2025-09-09T00:22:15.463345925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:15.464711 containerd[1591]: time="2025-09-09T00:22:15.464652201Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 6.503309887s"
Sep 9 00:22:15.464711 containerd[1591]: time="2025-09-09T00:22:15.464702662Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\""
Sep 9 00:22:15.465583 containerd[1591]: time="2025-09-09T00:22:15.465465059Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 9 00:22:17.815699 update_engine[1578]: I20250909 00:22:17.815507 1578 update_attempter.cc:509] Updating boot flags...
Sep 9 00:22:20.039724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4270426240.mount: Deactivated successfully.
Sep 9 00:22:21.113970 containerd[1591]: time="2025-09-09T00:22:21.113871097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:21.173778 containerd[1591]: time="2025-09-09T00:22:21.173672261Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626"
Sep 9 00:22:21.299913 containerd[1591]: time="2025-09-09T00:22:21.299828779Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:21.424569 containerd[1591]: time="2025-09-09T00:22:21.424354957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:22:21.425278 containerd[1591]: time="2025-09-09T00:22:21.425215537Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 5.959653413s"
Sep 9 00:22:21.425349 containerd[1591]: time="2025-09-09T00:22:21.425279903Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\""
Sep 9 00:22:21.426110 containerd[1591]: time="2025-09-09T00:22:21.425847641Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 9 00:22:23.320940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 9 00:22:23.322723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:22:23.516033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:22:23.520808 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:22:24.071706 kubelet[2199]: E0909 00:22:24.071640 2199 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:22:24.076528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:22:24.076725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:22:24.077131 systemd[1]: kubelet.service: Consumed 222ms CPU time, 110.5M memory peak.
Sep 9 00:22:25.415689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113706092.mount: Deactivated successfully.
Sep 9 00:22:28.247970 containerd[1591]: time="2025-09-09T00:22:28.247887796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:28.248717 containerd[1591]: time="2025-09-09T00:22:28.248655225Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 9 00:22:28.249844 containerd[1591]: time="2025-09-09T00:22:28.249809081Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:28.256274 containerd[1591]: time="2025-09-09T00:22:28.256216338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:28.257046 containerd[1591]: time="2025-09-09T00:22:28.257006448Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 6.831120777s" Sep 9 00:22:28.257046 containerd[1591]: time="2025-09-09T00:22:28.257040070Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 9 00:22:28.257579 containerd[1591]: time="2025-09-09T00:22:28.257545337Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:22:29.013288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045531285.mount: Deactivated successfully. 
Sep 9 00:22:29.020256 containerd[1591]: time="2025-09-09T00:22:29.020197871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:22:29.020994 containerd[1591]: time="2025-09-09T00:22:29.020945024Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:22:29.022251 containerd[1591]: time="2025-09-09T00:22:29.022209477Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:22:29.024525 containerd[1591]: time="2025-09-09T00:22:29.024496209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:22:29.025214 containerd[1591]: time="2025-09-09T00:22:29.025159107Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 767.589677ms" Sep 9 00:22:29.025214 containerd[1591]: time="2025-09-09T00:22:29.025209500Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:22:29.026150 containerd[1591]: time="2025-09-09T00:22:29.025821985Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 00:22:29.516009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289436520.mount: Deactivated 
successfully. Sep 9 00:22:32.253563 containerd[1591]: time="2025-09-09T00:22:32.253457538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:32.256020 containerd[1591]: time="2025-09-09T00:22:32.255961445Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 9 00:22:32.257351 containerd[1591]: time="2025-09-09T00:22:32.257294524Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:32.260532 containerd[1591]: time="2025-09-09T00:22:32.260458489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:32.261580 containerd[1591]: time="2025-09-09T00:22:32.261513474Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.235638331s" Sep 9 00:22:32.261580 containerd[1591]: time="2025-09-09T00:22:32.261562395Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 9 00:22:34.321155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 9 00:22:34.324046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:22:34.563770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 00:22:34.577601 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:22:34.619565 kubelet[2351]: E0909 00:22:34.619467 2351 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:22:34.624142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:22:34.624416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:22:34.624872 systemd[1]: kubelet.service: Consumed 236ms CPU time, 108.4M memory peak. Sep 9 00:22:35.338397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:22:35.338617 systemd[1]: kubelet.service: Consumed 236ms CPU time, 108.4M memory peak. Sep 9 00:22:35.341485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:22:35.370811 systemd[1]: Reload requested from client PID 2366 ('systemctl') (unit session-9.scope)... Sep 9 00:22:35.370843 systemd[1]: Reloading... Sep 9 00:22:35.496329 zram_generator::config[2412]: No configuration found. Sep 9 00:22:36.216906 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:22:36.349438 systemd[1]: Reloading finished in 978 ms. Sep 9 00:22:36.424469 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:22:36.424663 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:22:36.425145 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 00:22:36.425206 systemd[1]: kubelet.service: Consumed 166ms CPU time, 98.3M memory peak. Sep 9 00:22:36.427752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:22:36.637427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:22:36.652702 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:22:36.695087 kubelet[2457]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:22:36.695087 kubelet[2457]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:22:36.695087 kubelet[2457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 00:22:36.695616 kubelet[2457]: I0909 00:22:36.695111 2457 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:22:39.090085 kubelet[2457]: I0909 00:22:39.090016 2457 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:22:39.090085 kubelet[2457]: I0909 00:22:39.090057 2457 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:22:39.090735 kubelet[2457]: I0909 00:22:39.090353 2457 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:22:39.297465 kubelet[2457]: I0909 00:22:39.297392 2457 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:22:39.304899 kubelet[2457]: E0909 00:22:39.304743 2457 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:22:39.342196 kubelet[2457]: I0909 00:22:39.342048 2457 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:22:39.349172 kubelet[2457]: I0909 00:22:39.349102 2457 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:22:39.349512 kubelet[2457]: I0909 00:22:39.349460 2457 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:22:39.349723 kubelet[2457]: I0909 00:22:39.349494 2457 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:22:39.349925 kubelet[2457]: I0909 00:22:39.349730 2457 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:22:39.349925 
kubelet[2457]: I0909 00:22:39.349745 2457 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:22:39.357288 kubelet[2457]: I0909 00:22:39.357219 2457 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:22:39.386472 kubelet[2457]: I0909 00:22:39.386410 2457 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:22:39.386472 kubelet[2457]: I0909 00:22:39.386449 2457 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:22:39.389246 kubelet[2457]: I0909 00:22:39.389216 2457 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:22:39.389295 kubelet[2457]: I0909 00:22:39.389251 2457 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:22:39.401734 kubelet[2457]: E0909 00:22:39.401685 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:22:39.403049 kubelet[2457]: E0909 00:22:39.403018 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:22:39.438105 kubelet[2457]: I0909 00:22:39.438046 2457 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 00:22:39.438736 kubelet[2457]: I0909 00:22:39.438697 2457 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:22:39.439624 kubelet[2457]: W0909 00:22:39.439566 2457 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:22:39.442564 kubelet[2457]: I0909 00:22:39.442506 2457 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:22:39.442564 kubelet[2457]: I0909 00:22:39.442566 2457 server.go:1289] "Started kubelet" Sep 9 00:22:39.444542 kubelet[2457]: I0909 00:22:39.444481 2457 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:22:39.455757 kubelet[2457]: I0909 00:22:39.455704 2457 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:22:39.455757 kubelet[2457]: I0909 00:22:39.455705 2457 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:22:39.455928 kubelet[2457]: I0909 00:22:39.455707 2457 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:22:39.455928 kubelet[2457]: I0909 00:22:39.455819 2457 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:22:39.456809 kubelet[2457]: I0909 00:22:39.456765 2457 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:22:39.487989 kubelet[2457]: E0909 00:22:39.487938 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:22:39.488154 kubelet[2457]: I0909 00:22:39.488070 2457 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:22:39.488514 kubelet[2457]: I0909 00:22:39.488494 2457 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:22:39.488594 kubelet[2457]: I0909 00:22:39.488581 2457 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:22:39.488717 kubelet[2457]: E0909 00:22:39.488683 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms" Sep 9 00:22:39.488905 kubelet[2457]: I0909 00:22:39.488882 2457 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:22:39.488994 kubelet[2457]: I0909 00:22:39.488968 2457 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:22:39.489108 kubelet[2457]: E0909 00:22:39.489081 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:22:39.491292 kubelet[2457]: I0909 00:22:39.490207 2457 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:22:39.491998 kubelet[2457]: E0909 00:22:39.491971 2457 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:22:39.502509 kubelet[2457]: E0909 00:22:39.501034 2457 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18637565bde2ccb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:22:39.442529465 +0000 UTC m=+2.785652129,LastTimestamp:2025-09-09 00:22:39.442529465 +0000 UTC m=+2.785652129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:22:39.511455 kubelet[2457]: I0909 00:22:39.511423 2457 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:22:39.511455 kubelet[2457]: I0909 00:22:39.511443 2457 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:22:39.511455 kubelet[2457]: I0909 00:22:39.511463 2457 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:22:39.546060 kubelet[2457]: I0909 00:22:39.546028 2457 policy_none.go:49] "None policy: Start" Sep 9 00:22:39.546201 kubelet[2457]: I0909 00:22:39.546075 2457 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:22:39.546201 kubelet[2457]: I0909 00:22:39.546103 2457 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:22:39.546439 kubelet[2457]: I0909 00:22:39.546394 2457 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:22:39.548147 kubelet[2457]: I0909 00:22:39.548110 2457 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:22:39.548212 kubelet[2457]: I0909 00:22:39.548162 2457 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:22:39.548212 kubelet[2457]: I0909 00:22:39.548193 2457 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:22:39.548212 kubelet[2457]: I0909 00:22:39.548203 2457 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:22:39.548329 kubelet[2457]: E0909 00:22:39.548251 2457 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:22:39.548953 kubelet[2457]: E0909 00:22:39.548905 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:22:39.556618 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:22:39.571518 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:22:39.575413 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 9 00:22:39.588412 kubelet[2457]: E0909 00:22:39.588373 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:22:39.589323 kubelet[2457]: E0909 00:22:39.589216 2457 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:22:39.589455 kubelet[2457]: I0909 00:22:39.589442 2457 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:22:39.589516 kubelet[2457]: I0909 00:22:39.589454 2457 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:22:39.589747 kubelet[2457]: I0909 00:22:39.589687 2457 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:22:39.590675 kubelet[2457]: E0909 00:22:39.590652 2457 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:22:39.590753 kubelet[2457]: E0909 00:22:39.590698 2457 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:22:39.661611 systemd[1]: Created slice kubepods-burstable-pod0432c95726d47637e527e4de0e2f7109.slice - libcontainer container kubepods-burstable-pod0432c95726d47637e527e4de0e2f7109.slice. Sep 9 00:22:39.682422 kubelet[2457]: E0909 00:22:39.682365 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:22:39.686819 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. 
Sep 9 00:22:39.689091 kubelet[2457]: E0909 00:22:39.689052 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:22:39.689235 kubelet[2457]: I0909 00:22:39.689213 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0432c95726d47637e527e4de0e2f7109-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0432c95726d47637e527e4de0e2f7109\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:22:39.689332 kubelet[2457]: I0909 00:22:39.689238 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:39.689332 kubelet[2457]: I0909 00:22:39.689284 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:39.689332 kubelet[2457]: I0909 00:22:39.689300 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:39.689332 kubelet[2457]: I0909 00:22:39.689313 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:39.689332 kubelet[2457]: I0909 00:22:39.689327 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0432c95726d47637e527e4de0e2f7109-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0432c95726d47637e527e4de0e2f7109\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:22:39.689596 kubelet[2457]: I0909 00:22:39.689339 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0432c95726d47637e527e4de0e2f7109-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0432c95726d47637e527e4de0e2f7109\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:22:39.689596 kubelet[2457]: I0909 00:22:39.689352 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:39.689596 kubelet[2457]: I0909 00:22:39.689395 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:22:39.689596 kubelet[2457]: E0909 00:22:39.689471 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms"
Sep 9 00:22:39.690834 kubelet[2457]: I0909 00:22:39.690802 2457 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:22:39.691194 kubelet[2457]: E0909 00:22:39.691168 2457 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost"
Sep 9 00:22:39.691348 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice.
Sep 9 00:22:39.693410 kubelet[2457]: E0909 00:22:39.693375 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:22:39.893244 kubelet[2457]: I0909 00:22:39.893118 2457 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:22:39.894042 kubelet[2457]: E0909 00:22:39.893977 2457 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost"
Sep 9 00:22:39.984075 kubelet[2457]: E0909 00:22:39.983926 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:39.984854 containerd[1591]: time="2025-09-09T00:22:39.984809619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0432c95726d47637e527e4de0e2f7109,Namespace:kube-system,Attempt:0,}"
Sep 9 00:22:39.990360 kubelet[2457]: E0909 00:22:39.990318 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:39.990790 containerd[1591]: time="2025-09-09T00:22:39.990759817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}"
Sep 9 00:22:39.994426 kubelet[2457]: E0909 00:22:39.994394 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:39.994808 containerd[1591]: time="2025-09-09T00:22:39.994779693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}"
Sep 9 00:22:40.037459 containerd[1591]: time="2025-09-09T00:22:40.037412074Z" level=info msg="connecting to shim 8e0e83400c6019876d0642b9ee6ecb2882986ff6c59a8c4a670d209ff189d205" address="unix:///run/containerd/s/91fdcee88f0469bdbdfcf66142b5af946c381b34937678123997e719f919043c" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:22:40.051613 containerd[1591]: time="2025-09-09T00:22:40.051543991Z" level=info msg="connecting to shim 5cdd8257af040a12ddf8af4c8781421feba33bfc4d3417da047c5c60c343e62a" address="unix:///run/containerd/s/caeb57ae65e5360c27351ebe10db2951fe3eab91acc0688c5d9b19c45e43c04b" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:22:40.078301 containerd[1591]: time="2025-09-09T00:22:40.073521840Z" level=info msg="connecting to shim d1e100f5df2925dd4d5b2117012e9586b76de8e2dfb712c1a978e23b156a1e4a" address="unix:///run/containerd/s/73e14f76657598396b3e158be8e643e7607d490c0a7f0ae94df6714fed07b39e" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:22:40.091015 kubelet[2457]: E0909 00:22:40.090961 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms"
Sep 9 00:22:40.101560 systemd[1]: Started cri-containerd-8e0e83400c6019876d0642b9ee6ecb2882986ff6c59a8c4a670d209ff189d205.scope - libcontainer container 8e0e83400c6019876d0642b9ee6ecb2882986ff6c59a8c4a670d209ff189d205.
Sep 9 00:22:40.116786 systemd[1]: Started cri-containerd-5cdd8257af040a12ddf8af4c8781421feba33bfc4d3417da047c5c60c343e62a.scope - libcontainer container 5cdd8257af040a12ddf8af4c8781421feba33bfc4d3417da047c5c60c343e62a.
Sep 9 00:22:40.147507 systemd[1]: Started cri-containerd-d1e100f5df2925dd4d5b2117012e9586b76de8e2dfb712c1a978e23b156a1e4a.scope - libcontainer container d1e100f5df2925dd4d5b2117012e9586b76de8e2dfb712c1a978e23b156a1e4a.
Sep 9 00:22:40.182897 containerd[1591]: time="2025-09-09T00:22:40.182801191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0432c95726d47637e527e4de0e2f7109,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e0e83400c6019876d0642b9ee6ecb2882986ff6c59a8c4a670d209ff189d205\""
Sep 9 00:22:40.188346 kubelet[2457]: E0909 00:22:40.188308 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:40.196337 containerd[1591]: time="2025-09-09T00:22:40.195920959Z" level=info msg="CreateContainer within sandbox \"8e0e83400c6019876d0642b9ee6ecb2882986ff6c59a8c4a670d209ff189d205\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 00:22:40.200168 containerd[1591]: time="2025-09-09T00:22:40.200119975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cdd8257af040a12ddf8af4c8781421feba33bfc4d3417da047c5c60c343e62a\""
Sep 9 00:22:40.200930 kubelet[2457]: E0909 00:22:40.200874 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:40.207055 containerd[1591]: time="2025-09-09T00:22:40.207015206Z" level=info msg="CreateContainer within sandbox \"5cdd8257af040a12ddf8af4c8781421feba33bfc4d3417da047c5c60c343e62a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 00:22:40.209675 containerd[1591]: time="2025-09-09T00:22:40.209539743Z" level=info msg="Container 8a704dc2eb0cb953412505902641e83a585db3a0242008a673741f533fa1877d: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:22:40.216456 containerd[1591]: time="2025-09-09T00:22:40.216415807Z" level=info msg="Container 6c9f2fa0f1f1edc38fa736abc0aee99d542626cce2c60f0321f15065741a5198: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:22:40.224939 containerd[1591]: time="2025-09-09T00:22:40.224880896Z" level=info msg="CreateContainer within sandbox \"8e0e83400c6019876d0642b9ee6ecb2882986ff6c59a8c4a670d209ff189d205\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8a704dc2eb0cb953412505902641e83a585db3a0242008a673741f533fa1877d\""
Sep 9 00:22:40.225786 containerd[1591]: time="2025-09-09T00:22:40.225761263Z" level=info msg="StartContainer for \"8a704dc2eb0cb953412505902641e83a585db3a0242008a673741f533fa1877d\""
Sep 9 00:22:40.228559 containerd[1591]: time="2025-09-09T00:22:40.228536090Z" level=info msg="connecting to shim 8a704dc2eb0cb953412505902641e83a585db3a0242008a673741f533fa1877d" address="unix:///run/containerd/s/91fdcee88f0469bdbdfcf66142b5af946c381b34937678123997e719f919043c" protocol=ttrpc version=3
Sep 9 00:22:40.230587 containerd[1591]: time="2025-09-09T00:22:40.230175892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1e100f5df2925dd4d5b2117012e9586b76de8e2dfb712c1a978e23b156a1e4a\""
Sep 9 00:22:40.230897 kubelet[2457]: E0909 00:22:40.230869 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:40.231760 containerd[1591]: time="2025-09-09T00:22:40.231728296Z" level=info msg="CreateContainer within sandbox \"5cdd8257af040a12ddf8af4c8781421feba33bfc4d3417da047c5c60c343e62a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6c9f2fa0f1f1edc38fa736abc0aee99d542626cce2c60f0321f15065741a5198\""
Sep 9 00:22:40.232478 containerd[1591]: time="2025-09-09T00:22:40.232453095Z" level=info msg="StartContainer for \"6c9f2fa0f1f1edc38fa736abc0aee99d542626cce2c60f0321f15065741a5198\""
Sep 9 00:22:40.234034 containerd[1591]: time="2025-09-09T00:22:40.233669405Z" level=info msg="connecting to shim 6c9f2fa0f1f1edc38fa736abc0aee99d542626cce2c60f0321f15065741a5198" address="unix:///run/containerd/s/caeb57ae65e5360c27351ebe10db2951fe3eab91acc0688c5d9b19c45e43c04b" protocol=ttrpc version=3
Sep 9 00:22:40.237982 containerd[1591]: time="2025-09-09T00:22:40.237877688Z" level=info msg="CreateContainer within sandbox \"d1e100f5df2925dd4d5b2117012e9586b76de8e2dfb712c1a978e23b156a1e4a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 00:22:40.251246 containerd[1591]: time="2025-09-09T00:22:40.250660741Z" level=info msg="Container 5e4146706d405c6143bd31dfebc6af2cd5f5c860f319448659dff936ab63a878: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:22:40.252513 systemd[1]: Started cri-containerd-8a704dc2eb0cb953412505902641e83a585db3a0242008a673741f533fa1877d.scope - libcontainer container 8a704dc2eb0cb953412505902641e83a585db3a0242008a673741f533fa1877d.
Sep 9 00:22:40.258435 systemd[1]: Started cri-containerd-6c9f2fa0f1f1edc38fa736abc0aee99d542626cce2c60f0321f15065741a5198.scope - libcontainer container 6c9f2fa0f1f1edc38fa736abc0aee99d542626cce2c60f0321f15065741a5198.
Sep 9 00:22:40.260218 containerd[1591]: time="2025-09-09T00:22:40.260168478Z" level=info msg="CreateContainer within sandbox \"d1e100f5df2925dd4d5b2117012e9586b76de8e2dfb712c1a978e23b156a1e4a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5e4146706d405c6143bd31dfebc6af2cd5f5c860f319448659dff936ab63a878\""
Sep 9 00:22:40.260842 containerd[1591]: time="2025-09-09T00:22:40.260765241Z" level=info msg="StartContainer for \"5e4146706d405c6143bd31dfebc6af2cd5f5c860f319448659dff936ab63a878\""
Sep 9 00:22:40.262549 containerd[1591]: time="2025-09-09T00:22:40.262509413Z" level=info msg="connecting to shim 5e4146706d405c6143bd31dfebc6af2cd5f5c860f319448659dff936ab63a878" address="unix:///run/containerd/s/73e14f76657598396b3e158be8e643e7607d490c0a7f0ae94df6714fed07b39e" protocol=ttrpc version=3
Sep 9 00:22:40.288501 systemd[1]: Started cri-containerd-5e4146706d405c6143bd31dfebc6af2cd5f5c860f319448659dff936ab63a878.scope - libcontainer container 5e4146706d405c6143bd31dfebc6af2cd5f5c860f319448659dff936ab63a878.
Sep 9 00:22:40.296559 kubelet[2457]: I0909 00:22:40.296526 2457 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:22:40.297412 kubelet[2457]: E0909 00:22:40.297381 2457 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost"
Sep 9 00:22:40.448833 containerd[1591]: time="2025-09-09T00:22:40.448779629Z" level=info msg="StartContainer for \"6c9f2fa0f1f1edc38fa736abc0aee99d542626cce2c60f0321f15065741a5198\" returns successfully"
Sep 9 00:22:40.449905 containerd[1591]: time="2025-09-09T00:22:40.448987086Z" level=info msg="StartContainer for \"5e4146706d405c6143bd31dfebc6af2cd5f5c860f319448659dff936ab63a878\" returns successfully"
Sep 9 00:22:40.450112 containerd[1591]: time="2025-09-09T00:22:40.450055253Z" level=info msg="StartContainer for \"8a704dc2eb0cb953412505902641e83a585db3a0242008a673741f533fa1877d\" returns successfully"
Sep 9 00:22:40.560770 kubelet[2457]: E0909 00:22:40.560324 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:22:40.560770 kubelet[2457]: E0909 00:22:40.560474 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:40.560770 kubelet[2457]: E0909 00:22:40.560727 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:22:40.560924 kubelet[2457]: E0909 00:22:40.560820 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:40.564824 kubelet[2457]: E0909 00:22:40.564797 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:22:40.564975 kubelet[2457]: E0909 00:22:40.564956 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:41.099908 kubelet[2457]: I0909 00:22:41.099860 2457 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:22:41.569191 kubelet[2457]: E0909 00:22:41.568802 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:22:41.569191 kubelet[2457]: E0909 00:22:41.568972 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:41.570699 kubelet[2457]: E0909 00:22:41.570657 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:22:41.570825 kubelet[2457]: E0909 00:22:41.570808 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:41.957239 kubelet[2457]: E0909 00:22:41.957196 2457 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 9 00:22:42.103713 kubelet[2457]: E0909 00:22:42.103668 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:22:42.104127 kubelet[2457]: E0909 00:22:42.103815 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:42.124765 kubelet[2457]: I0909 00:22:42.124693 2457 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 00:22:42.189220 kubelet[2457]: I0909 00:22:42.189111 2457 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:22:42.196321 kubelet[2457]: E0909 00:22:42.196242 2457 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:22:42.196321 kubelet[2457]: I0909 00:22:42.196293 2457 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:22:42.197819 kubelet[2457]: E0909 00:22:42.197791 2457 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:22:42.197819 kubelet[2457]: I0909 00:22:42.197810 2457 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:22:42.199071 kubelet[2457]: E0909 00:22:42.199051 2457 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:22:42.402066 kubelet[2457]: I0909 00:22:42.402032 2457 apiserver.go:52] "Watching apiserver"
Sep 9 00:22:42.489143 kubelet[2457]: I0909 00:22:42.489103 2457 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 00:22:42.961408 kubelet[2457]: I0909 00:22:42.961372 2457 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:22:42.966868 kubelet[2457]: E0909 00:22:42.966820 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:43.569524 kubelet[2457]: E0909 00:22:43.569488 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:44.132435 systemd[1]: Reload requested from client PID 2743 ('systemctl') (unit session-9.scope)...
Sep 9 00:22:44.132452 systemd[1]: Reloading...
Sep 9 00:22:44.216367 zram_generator::config[2786]: No configuration found.
Sep 9 00:22:44.347690 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:22:44.481813 systemd[1]: Reloading finished in 348 ms.
Sep 9 00:22:44.521545 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:22:44.530015 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 00:22:44.530450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:22:44.530526 systemd[1]: kubelet.service: Consumed 1.217s CPU time, 131.5M memory peak.
Sep 9 00:22:44.534925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:22:44.769312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:22:44.781841 (kubelet)[2831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:22:44.831162 kubelet[2831]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:22:44.831162 kubelet[2831]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:22:44.831162 kubelet[2831]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:22:44.831639 kubelet[2831]: I0909 00:22:44.831235 2831 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:22:44.839458 kubelet[2831]: I0909 00:22:44.839423 2831 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 9 00:22:44.839458 kubelet[2831]: I0909 00:22:44.839445 2831 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:22:44.839659 kubelet[2831]: I0909 00:22:44.839621 2831 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 9 00:22:44.840784 kubelet[2831]: I0909 00:22:44.840757 2831 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 9 00:22:44.842813 kubelet[2831]: I0909 00:22:44.842759 2831 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:22:44.847155 kubelet[2831]: I0909 00:22:44.847118 2831 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 00:22:44.855171 kubelet[2831]: I0909 00:22:44.855124 2831 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:22:44.855492 kubelet[2831]: I0909 00:22:44.855448 2831 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:22:44.855719 kubelet[2831]: I0909 00:22:44.855477 2831 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:22:44.855840 kubelet[2831]: I0909 00:22:44.855722 2831 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:22:44.855840 kubelet[2831]: I0909 00:22:44.855735 2831 container_manager_linux.go:303] "Creating device plugin manager"
Sep 9 00:22:44.855840 kubelet[2831]: I0909 00:22:44.855797 2831 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:22:44.856006 kubelet[2831]: I0909 00:22:44.855992 2831 kubelet.go:480] "Attempting to sync node with API server"
Sep 9 00:22:44.856067 kubelet[2831]: I0909 00:22:44.856013 2831 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:22:44.856067 kubelet[2831]: I0909 00:22:44.856040 2831 kubelet.go:386] "Adding apiserver pod source"
Sep 9 00:22:44.856067 kubelet[2831]: I0909 00:22:44.856060 2831 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:22:44.859140 kubelet[2831]: I0909 00:22:44.859066 2831 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 9 00:22:44.860206 kubelet[2831]: I0909 00:22:44.860156 2831 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 9 00:22:44.864473 kubelet[2831]: I0909 00:22:44.863787 2831 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 00:22:44.864473 kubelet[2831]: I0909 00:22:44.863846 2831 server.go:1289] "Started kubelet"
Sep 9 00:22:44.866115 kubelet[2831]: I0909 00:22:44.865423 2831 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:22:44.866115 kubelet[2831]: I0909 00:22:44.865562 2831 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:22:44.866115 kubelet[2831]: I0909 00:22:44.865916 2831 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:22:44.869924 kubelet[2831]: I0909 00:22:44.869831 2831 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:22:44.873880 kubelet[2831]: I0909 00:22:44.871727 2831 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:22:44.873880 kubelet[2831]: I0909 00:22:44.871806 2831 server.go:317] "Adding debug handlers to kubelet server"
Sep 9 00:22:44.873880 kubelet[2831]: I0909 00:22:44.873384 2831 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 00:22:44.873880 kubelet[2831]: E0909 00:22:44.873488 2831 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:22:44.874608 kubelet[2831]: I0909 00:22:44.874593 2831 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 00:22:44.874811 kubelet[2831]: I0909 00:22:44.874797 2831 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:22:44.877664 kubelet[2831]: I0909 00:22:44.877627 2831 factory.go:223] Registration of the systemd container factory successfully
Sep 9 00:22:44.877762 kubelet[2831]: I0909 00:22:44.877729 2831 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:22:44.879831 kubelet[2831]: I0909 00:22:44.879465 2831 factory.go:223] Registration of the containerd container factory successfully
Sep 9 00:22:44.882220 kubelet[2831]: E0909 00:22:44.882106 2831 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:22:44.885902 kubelet[2831]: I0909 00:22:44.885855 2831 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:22:44.887617 kubelet[2831]: I0909 00:22:44.887591 2831 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:22:44.887617 kubelet[2831]: I0909 00:22:44.887610 2831 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 9 00:22:44.887693 kubelet[2831]: I0909 00:22:44.887635 2831 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 00:22:44.887693 kubelet[2831]: I0909 00:22:44.887645 2831 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 9 00:22:44.887736 kubelet[2831]: E0909 00:22:44.887707 2831 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:22:44.923929 kubelet[2831]: I0909 00:22:44.923890 2831 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 00:22:44.923929 kubelet[2831]: I0909 00:22:44.923908 2831 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 00:22:44.923929 kubelet[2831]: I0909 00:22:44.923927 2831 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:22:44.924145 kubelet[2831]: I0909 00:22:44.924068 2831 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 00:22:44.924145 kubelet[2831]: I0909 00:22:44.924080 2831 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 00:22:44.924145 kubelet[2831]: I0909 00:22:44.924096 2831 policy_none.go:49] "None policy: Start"
Sep 9 00:22:44.924145 kubelet[2831]: I0909 00:22:44.924106 2831 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 00:22:44.924145 kubelet[2831]: I0909 00:22:44.924116 2831 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:22:44.924313 kubelet[2831]: I0909 00:22:44.924207 2831 state_mem.go:75] "Updated machine memory state"
Sep 9 00:22:44.929004 kubelet[2831]: E0909 00:22:44.928891 2831 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 9 00:22:44.929151 kubelet[2831]: I0909 00:22:44.929125 2831 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:22:44.929210 kubelet[2831]: I0909 00:22:44.929151 2831 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:22:44.929811 kubelet[2831]: I0909 00:22:44.929778 2831 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:22:44.931817 kubelet[2831]: E0909 00:22:44.930872 2831 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 00:22:44.989352 kubelet[2831]: I0909 00:22:44.989136 2831 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:22:44.989352 kubelet[2831]: I0909 00:22:44.989159 2831 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:22:44.989497 kubelet[2831]: I0909 00:22:44.989419 2831 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:22:44.995646 kubelet[2831]: E0909 00:22:44.995605 2831 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:22:45.036796 kubelet[2831]: I0909 00:22:45.036509 2831 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:22:45.045510 kubelet[2831]: I0909 00:22:45.045459 2831 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 9 00:22:45.045703 kubelet[2831]: I0909 00:22:45.045578 2831 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 00:22:45.075635 kubelet[2831]: I0909 00:22:45.075565 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:22:45.075635 kubelet[2831]: I0909 00:22:45.075604 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:22:45.075635 kubelet[2831]: I0909 00:22:45.075625 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:22:45.075635 kubelet[2831]: I0909 00:22:45.075641 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:22:45.075907 kubelet[2831]: I0909 00:22:45.075699 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:22:45.075907 kubelet[2831]: I0909 00:22:45.075737 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0432c95726d47637e527e4de0e2f7109-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0432c95726d47637e527e4de0e2f7109\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:22:45.075907 kubelet[2831]: I0909 00:22:45.075770 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0432c95726d47637e527e4de0e2f7109-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0432c95726d47637e527e4de0e2f7109\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:22:45.075907 kubelet[2831]: I0909 00:22:45.075790 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:22:45.075907 kubelet[2831]: I0909 00:22:45.075803 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0432c95726d47637e527e4de0e2f7109-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0432c95726d47637e527e4de0e2f7109\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:22:45.296496 kubelet[2831]: E0909 00:22:45.296375 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:45.296496 kubelet[2831]: E0909 00:22:45.296420 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:45.296632 kubelet[2831]: E0909 00:22:45.296573 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:45.857093 kubelet[2831]: I0909 00:22:45.857026 2831 apiserver.go:52] "Watching apiserver"
Sep 9 00:22:45.875505 kubelet[2831]: I0909 00:22:45.875456 2831 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 00:22:45.906767 kubelet[2831]: I0909 00:22:45.906731 2831 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:22:45.907552 kubelet[2831]: E0909 00:22:45.907529 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:45.908767 kubelet[2831]: I0909 00:22:45.908567 2831 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:22:45.923685 kubelet[2831]: E0909 00:22:45.923632 2831 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:22:45.923884 kubelet[2831]: E0909 00:22:45.923854 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:45.925432 kubelet[2831]: E0909 00:22:45.925405 2831 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:22:45.925609 kubelet[2831]: E0909 00:22:45.925574 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:45.997813 kubelet[2831]: I0909 00:22:45.997750 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.9977314489999998 podStartE2EDuration="3.997731449s" podCreationTimestamp="2025-09-09 00:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:22:45.99439811 +0000 UTC m=+1.207296679" watchObservedRunningTime="2025-09-09 00:22:45.997731449 +0000 UTC m=+1.210630018"
Sep 9 00:22:46.036594 kubelet[2831]: I0909 00:22:46.036515 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.036494395 podStartE2EDuration="2.036494395s" podCreationTimestamp="2025-09-09 00:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:22:46.03616697 +0000 UTC m=+1.249065539" watchObservedRunningTime="2025-09-09 00:22:46.036494395 +0000 UTC m=+1.249392964"
Sep 9 00:22:46.076890 kubelet[2831]: I0909 00:22:46.076827 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.076807347 podStartE2EDuration="2.076807347s" podCreationTimestamp="2025-09-09 00:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:22:46.076687037 +0000 UTC m=+1.289585607" watchObservedRunningTime="2025-09-09 00:22:46.076807347 +0000 UTC m=+1.289705926"
Sep 9 00:22:46.908326 kubelet[2831]: E0909 00:22:46.908169 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:22:46.909151 kubelet[2831]: E0909 00:22:46.908449 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9
00:22:46.909151 kubelet[2831]: E0909 00:22:46.908818 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:22:47.909839 kubelet[2831]: E0909 00:22:47.909791 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:22:47.909839 kubelet[2831]: E0909 00:22:47.909824 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:22:49.770131 kubelet[2831]: E0909 00:22:49.770033 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:22:49.984579 kubelet[2831]: I0909 00:22:49.984538 2831 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:22:49.984982 containerd[1591]: time="2025-09-09T00:22:49.984943334Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:22:49.985418 kubelet[2831]: I0909 00:22:49.985119 2831 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:22:50.075529 kubelet[2831]: E0909 00:22:50.075453 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:22:50.863672 systemd[1]: Created slice kubepods-besteffort-pod1bfefdbe_e958_417a_8f86_f90f9d2a61f2.slice - libcontainer container kubepods-besteffort-pod1bfefdbe_e958_417a_8f86_f90f9d2a61f2.slice. 
Sep 9 00:22:50.915322 kubelet[2831]: E0909 00:22:50.915251 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:22:51.010666 kubelet[2831]: I0909 00:22:51.010602 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1bfefdbe-e958-417a-8f86-f90f9d2a61f2-kube-proxy\") pod \"kube-proxy-rck64\" (UID: \"1bfefdbe-e958-417a-8f86-f90f9d2a61f2\") " pod="kube-system/kube-proxy-rck64" Sep 9 00:22:51.010666 kubelet[2831]: I0909 00:22:51.010663 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bfefdbe-e958-417a-8f86-f90f9d2a61f2-xtables-lock\") pod \"kube-proxy-rck64\" (UID: \"1bfefdbe-e958-417a-8f86-f90f9d2a61f2\") " pod="kube-system/kube-proxy-rck64" Sep 9 00:22:51.010913 kubelet[2831]: I0909 00:22:51.010684 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtf79\" (UniqueName: \"kubernetes.io/projected/1bfefdbe-e958-417a-8f86-f90f9d2a61f2-kube-api-access-qtf79\") pod \"kube-proxy-rck64\" (UID: \"1bfefdbe-e958-417a-8f86-f90f9d2a61f2\") " pod="kube-system/kube-proxy-rck64" Sep 9 00:22:51.010913 kubelet[2831]: I0909 00:22:51.010703 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bfefdbe-e958-417a-8f86-f90f9d2a61f2-lib-modules\") pod \"kube-proxy-rck64\" (UID: \"1bfefdbe-e958-417a-8f86-f90f9d2a61f2\") " pod="kube-system/kube-proxy-rck64" Sep 9 00:22:51.028290 systemd[1]: Created slice kubepods-besteffort-podce26197a_01e0_4436_93ba_f50fb6821ed9.slice - libcontainer container kubepods-besteffort-podce26197a_01e0_4436_93ba_f50fb6821ed9.slice. 
Sep 9 00:22:51.175010 kubelet[2831]: E0909 00:22:51.174828 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:22:51.175750 containerd[1591]: time="2025-09-09T00:22:51.175664627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rck64,Uid:1bfefdbe-e958-417a-8f86-f90f9d2a61f2,Namespace:kube-system,Attempt:0,}" Sep 9 00:22:51.195185 containerd[1591]: time="2025-09-09T00:22:51.195114280Z" level=info msg="connecting to shim 92debfdb20fcade495eaea7620fbd27fe7e8b04b72d4e8e11593efe2a45021ab" address="unix:///run/containerd/s/50d224e5afab82c7d7b27384b09107bcd60b24a37a07b7580bd3ab46653596d0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:22:51.211394 kubelet[2831]: I0909 00:22:51.211326 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce26197a-01e0-4436-93ba-f50fb6821ed9-var-lib-calico\") pod \"tigera-operator-755d956888-j465x\" (UID: \"ce26197a-01e0-4436-93ba-f50fb6821ed9\") " pod="tigera-operator/tigera-operator-755d956888-j465x" Sep 9 00:22:51.211394 kubelet[2831]: I0909 00:22:51.211397 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l89d\" (UniqueName: \"kubernetes.io/projected/ce26197a-01e0-4436-93ba-f50fb6821ed9-kube-api-access-7l89d\") pod \"tigera-operator-755d956888-j465x\" (UID: \"ce26197a-01e0-4436-93ba-f50fb6821ed9\") " pod="tigera-operator/tigera-operator-755d956888-j465x" Sep 9 00:22:51.248488 systemd[1]: Started cri-containerd-92debfdb20fcade495eaea7620fbd27fe7e8b04b72d4e8e11593efe2a45021ab.scope - libcontainer container 92debfdb20fcade495eaea7620fbd27fe7e8b04b72d4e8e11593efe2a45021ab. 
Sep 9 00:22:51.282408 containerd[1591]: time="2025-09-09T00:22:51.282326759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rck64,Uid:1bfefdbe-e958-417a-8f86-f90f9d2a61f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"92debfdb20fcade495eaea7620fbd27fe7e8b04b72d4e8e11593efe2a45021ab\"" Sep 9 00:22:51.285309 kubelet[2831]: E0909 00:22:51.283410 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:22:51.292083 containerd[1591]: time="2025-09-09T00:22:51.292036071Z" level=info msg="CreateContainer within sandbox \"92debfdb20fcade495eaea7620fbd27fe7e8b04b72d4e8e11593efe2a45021ab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:22:51.305287 containerd[1591]: time="2025-09-09T00:22:51.303126214Z" level=info msg="Container 387c01d2ddd36800f85f5eee7da01b5652c9c025fced1c45da04ce2722e42d10: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:22:51.313668 containerd[1591]: time="2025-09-09T00:22:51.313612206Z" level=info msg="CreateContainer within sandbox \"92debfdb20fcade495eaea7620fbd27fe7e8b04b72d4e8e11593efe2a45021ab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"387c01d2ddd36800f85f5eee7da01b5652c9c025fced1c45da04ce2722e42d10\"" Sep 9 00:22:51.314716 containerd[1591]: time="2025-09-09T00:22:51.314653260Z" level=info msg="StartContainer for \"387c01d2ddd36800f85f5eee7da01b5652c9c025fced1c45da04ce2722e42d10\"" Sep 9 00:22:51.316501 containerd[1591]: time="2025-09-09T00:22:51.316465964Z" level=info msg="connecting to shim 387c01d2ddd36800f85f5eee7da01b5652c9c025fced1c45da04ce2722e42d10" address="unix:///run/containerd/s/50d224e5afab82c7d7b27384b09107bcd60b24a37a07b7580bd3ab46653596d0" protocol=ttrpc version=3 Sep 9 00:22:51.332880 containerd[1591]: time="2025-09-09T00:22:51.332635726Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-755d956888-j465x,Uid:ce26197a-01e0-4436-93ba-f50fb6821ed9,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:22:51.346438 systemd[1]: Started cri-containerd-387c01d2ddd36800f85f5eee7da01b5652c9c025fced1c45da04ce2722e42d10.scope - libcontainer container 387c01d2ddd36800f85f5eee7da01b5652c9c025fced1c45da04ce2722e42d10. Sep 9 00:22:51.357292 containerd[1591]: time="2025-09-09T00:22:51.356283479Z" level=info msg="connecting to shim 6b3938db6a2df0486ea5914234e4adb313948f444f6971119a5f661025a55f0d" address="unix:///run/containerd/s/d1a35bf3e0f0dabd189447c52616ae14ad47d3b63376458eff4b1f2c9500e2b5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:22:51.387586 systemd[1]: Started cri-containerd-6b3938db6a2df0486ea5914234e4adb313948f444f6971119a5f661025a55f0d.scope - libcontainer container 6b3938db6a2df0486ea5914234e4adb313948f444f6971119a5f661025a55f0d. Sep 9 00:22:51.397303 containerd[1591]: time="2025-09-09T00:22:51.397241888Z" level=info msg="StartContainer for \"387c01d2ddd36800f85f5eee7da01b5652c9c025fced1c45da04ce2722e42d10\" returns successfully" Sep 9 00:22:51.451604 containerd[1591]: time="2025-09-09T00:22:51.451051065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-j465x,Uid:ce26197a-01e0-4436-93ba-f50fb6821ed9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6b3938db6a2df0486ea5914234e4adb313948f444f6971119a5f661025a55f0d\"" Sep 9 00:22:51.453005 containerd[1591]: time="2025-09-09T00:22:51.452928613Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:22:51.920762 kubelet[2831]: E0909 00:22:51.920715 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:22:51.934395 kubelet[2831]: I0909 00:22:51.934159 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rck64" 
podStartSLOduration=1.934121288 podStartE2EDuration="1.934121288s" podCreationTimestamp="2025-09-09 00:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:22:51.933920666 +0000 UTC m=+7.146819235" watchObservedRunningTime="2025-09-09 00:22:51.934121288 +0000 UTC m=+7.147019857" Sep 9 00:22:52.563826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3036111316.mount: Deactivated successfully. Sep 9 00:22:53.087990 containerd[1591]: time="2025-09-09T00:22:53.087922683Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:53.088779 containerd[1591]: time="2025-09-09T00:22:53.088733547Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 00:22:53.089745 containerd[1591]: time="2025-09-09T00:22:53.089705928Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:53.091685 containerd[1591]: time="2025-09-09T00:22:53.091650150Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:53.092660 containerd[1591]: time="2025-09-09T00:22:53.092594297Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.639619997s" Sep 9 00:22:53.092660 containerd[1591]: time="2025-09-09T00:22:53.092653771Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 00:22:53.098027 containerd[1591]: time="2025-09-09T00:22:53.097973389Z" level=info msg="CreateContainer within sandbox \"6b3938db6a2df0486ea5914234e4adb313948f444f6971119a5f661025a55f0d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:22:53.108038 containerd[1591]: time="2025-09-09T00:22:53.107984190Z" level=info msg="Container 2e01fed8a7dae43cb8f428e1bdb323aafe7bb6e0ace8997ef49052317694caf5: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:22:53.113700 containerd[1591]: time="2025-09-09T00:22:53.113655278Z" level=info msg="CreateContainer within sandbox \"6b3938db6a2df0486ea5914234e4adb313948f444f6971119a5f661025a55f0d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2e01fed8a7dae43cb8f428e1bdb323aafe7bb6e0ace8997ef49052317694caf5\"" Sep 9 00:22:53.114213 containerd[1591]: time="2025-09-09T00:22:53.114184376Z" level=info msg="StartContainer for \"2e01fed8a7dae43cb8f428e1bdb323aafe7bb6e0ace8997ef49052317694caf5\"" Sep 9 00:22:53.114940 containerd[1591]: time="2025-09-09T00:22:53.114915248Z" level=info msg="connecting to shim 2e01fed8a7dae43cb8f428e1bdb323aafe7bb6e0ace8997ef49052317694caf5" address="unix:///run/containerd/s/d1a35bf3e0f0dabd189447c52616ae14ad47d3b63376458eff4b1f2c9500e2b5" protocol=ttrpc version=3 Sep 9 00:22:53.170437 systemd[1]: Started cri-containerd-2e01fed8a7dae43cb8f428e1bdb323aafe7bb6e0ace8997ef49052317694caf5.scope - libcontainer container 2e01fed8a7dae43cb8f428e1bdb323aafe7bb6e0ace8997ef49052317694caf5. 
Sep 9 00:22:53.206853 containerd[1591]: time="2025-09-09T00:22:53.206791352Z" level=info msg="StartContainer for \"2e01fed8a7dae43cb8f428e1bdb323aafe7bb6e0ace8997ef49052317694caf5\" returns successfully" Sep 9 00:22:53.934780 kubelet[2831]: I0909 00:22:53.934695 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-j465x" podStartSLOduration=2.293621788 podStartE2EDuration="3.934664034s" podCreationTimestamp="2025-09-09 00:22:50 +0000 UTC" firstStartedPulling="2025-09-09 00:22:51.452473926 +0000 UTC m=+6.665372495" lastFinishedPulling="2025-09-09 00:22:53.093516172 +0000 UTC m=+8.306414741" observedRunningTime="2025-09-09 00:22:53.934490395 +0000 UTC m=+9.147388964" watchObservedRunningTime="2025-09-09 00:22:53.934664034 +0000 UTC m=+9.147562604" Sep 9 00:22:56.096308 kubelet[2831]: E0909 00:22:56.096113 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:22:56.932834 kubelet[2831]: E0909 00:22:56.932783 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:22:59.315144 sudo[1818]: pam_unix(sudo:session): session closed for user root Sep 9 00:22:59.317601 sshd[1817]: Connection closed by 10.0.0.1 port 53980 Sep 9 00:22:59.319027 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Sep 9 00:22:59.328742 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:53980.service: Deactivated successfully. Sep 9 00:22:59.332630 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:22:59.333082 systemd[1]: session-9.scope: Consumed 8.207s CPU time, 226.5M memory peak. Sep 9 00:22:59.336035 systemd-logind[1576]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:22:59.339456 systemd-logind[1576]: Removed session 9. 
Sep 9 00:22:59.824857 kubelet[2831]: E0909 00:22:59.824781 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:00.043877 kubelet[2831]: E0909 00:23:00.043788 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:03.516316 systemd[1]: Created slice kubepods-besteffort-poda084e754_f83a_48e7_b747_ba71b6ca687e.slice - libcontainer container kubepods-besteffort-poda084e754_f83a_48e7_b747_ba71b6ca687e.slice. Sep 9 00:23:03.590638 kubelet[2831]: I0909 00:23:03.590554 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a084e754-f83a-48e7-b747-ba71b6ca687e-tigera-ca-bundle\") pod \"calico-typha-77784768b-wkjwf\" (UID: \"a084e754-f83a-48e7-b747-ba71b6ca687e\") " pod="calico-system/calico-typha-77784768b-wkjwf" Sep 9 00:23:03.590638 kubelet[2831]: I0909 00:23:03.590623 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gft6c\" (UniqueName: \"kubernetes.io/projected/a084e754-f83a-48e7-b747-ba71b6ca687e-kube-api-access-gft6c\") pod \"calico-typha-77784768b-wkjwf\" (UID: \"a084e754-f83a-48e7-b747-ba71b6ca687e\") " pod="calico-system/calico-typha-77784768b-wkjwf" Sep 9 00:23:03.590638 kubelet[2831]: I0909 00:23:03.590650 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a084e754-f83a-48e7-b747-ba71b6ca687e-typha-certs\") pod \"calico-typha-77784768b-wkjwf\" (UID: \"a084e754-f83a-48e7-b747-ba71b6ca687e\") " pod="calico-system/calico-typha-77784768b-wkjwf" Sep 9 00:23:03.823409 kubelet[2831]: E0909 00:23:03.823356 2831 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:03.824062 containerd[1591]: time="2025-09-09T00:23:03.823998444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77784768b-wkjwf,Uid:a084e754-f83a-48e7-b747-ba71b6ca687e,Namespace:calico-system,Attempt:0,}" Sep 9 00:23:03.865814 containerd[1591]: time="2025-09-09T00:23:03.865750515Z" level=info msg="connecting to shim c9e87c44abea87382fa53a27844d9274dc7784d5def837d550c6e3e331182a3a" address="unix:///run/containerd/s/64da861d38960a1ea122a20561ef3d4f1289bed6455ce192533e080ede9498e2" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:23:03.904863 systemd[1]: Started cri-containerd-c9e87c44abea87382fa53a27844d9274dc7784d5def837d550c6e3e331182a3a.scope - libcontainer container c9e87c44abea87382fa53a27844d9274dc7784d5def837d550c6e3e331182a3a. Sep 9 00:23:03.913638 systemd[1]: Created slice kubepods-besteffort-pod0857da15_8af9_42c6_a421_11c161d9d287.slice - libcontainer container kubepods-besteffort-pod0857da15_8af9_42c6_a421_11c161d9d287.slice. 
Sep 9 00:23:03.965479 containerd[1591]: time="2025-09-09T00:23:03.965426966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77784768b-wkjwf,Uid:a084e754-f83a-48e7-b747-ba71b6ca687e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9e87c44abea87382fa53a27844d9274dc7784d5def837d550c6e3e331182a3a\"" Sep 9 00:23:03.966943 kubelet[2831]: E0909 00:23:03.966917 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:03.968601 containerd[1591]: time="2025-09-09T00:23:03.968566694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:23:03.996435 kubelet[2831]: I0909 00:23:03.996371 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0857da15-8af9-42c6-a421-11c161d9d287-cni-bin-dir\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:03.996435 kubelet[2831]: I0909 00:23:03.996424 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0857da15-8af9-42c6-a421-11c161d9d287-flexvol-driver-host\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:03.996435 kubelet[2831]: I0909 00:23:03.996452 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0857da15-8af9-42c6-a421-11c161d9d287-xtables-lock\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:03.996699 kubelet[2831]: I0909 00:23:03.996474 2831 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0857da15-8af9-42c6-a421-11c161d9d287-cni-net-dir\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:03.996699 kubelet[2831]: I0909 00:23:03.996522 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0857da15-8af9-42c6-a421-11c161d9d287-tigera-ca-bundle\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:03.996699 kubelet[2831]: I0909 00:23:03.996594 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0857da15-8af9-42c6-a421-11c161d9d287-policysync\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:03.996699 kubelet[2831]: I0909 00:23:03.996615 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0857da15-8af9-42c6-a421-11c161d9d287-cni-log-dir\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:03.996699 kubelet[2831]: I0909 00:23:03.996640 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0857da15-8af9-42c6-a421-11c161d9d287-node-certs\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:03.996875 kubelet[2831]: I0909 00:23:03.996662 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" 
(UniqueName: \"kubernetes.io/host-path/0857da15-8af9-42c6-a421-11c161d9d287-var-run-calico\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:03.996875 kubelet[2831]: I0909 00:23:03.996681 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkw45\" (UniqueName: \"kubernetes.io/projected/0857da15-8af9-42c6-a421-11c161d9d287-kube-api-access-kkw45\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:03.996875 kubelet[2831]: I0909 00:23:03.996704 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0857da15-8af9-42c6-a421-11c161d9d287-lib-modules\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:03.997328 kubelet[2831]: I0909 00:23:03.996739 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0857da15-8af9-42c6-a421-11c161d9d287-var-lib-calico\") pod \"calico-node-47tmm\" (UID: \"0857da15-8af9-42c6-a421-11c161d9d287\") " pod="calico-system/calico-node-47tmm" Sep 9 00:23:04.100193 kubelet[2831]: E0909 00:23:04.100063 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.100193 kubelet[2831]: W0909 00:23:04.100091 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.100193 kubelet[2831]: E0909 00:23:04.100161 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:04.104336 kubelet[2831]: E0909 00:23:04.104223 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.104336 kubelet[2831]: W0909 00:23:04.104288 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.104540 kubelet[2831]: E0909 00:23:04.104308 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:04.115855 kubelet[2831]: E0909 00:23:04.115768 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.115855 kubelet[2831]: W0909 00:23:04.115792 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.115855 kubelet[2831]: E0909 00:23:04.115815 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Sep 9 00:23:04.144054 kubelet[2831]: E0909 00:23:04.143987 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fl7r" podUID="0f5cb06c-9205-4333-be98-49c50e03a5ae" 
Sep 9 00:23:04.192902 kubelet[2831]: E0909 00:23:04.192858 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Sep 9 00:23:04.192902 kubelet[2831]: W0909 00:23:04.192886 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Sep 9 00:23:04.192902 kubelet[2831]: E0909 00:23:04.192911 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Sep 9 00:23:04.198995 kubelet[2831]: I0909 00:23:04.198897 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vhqm\" (UniqueName: \"kubernetes.io/projected/0f5cb06c-9205-4333-be98-49c50e03a5ae-kube-api-access-5vhqm\") pod \"csi-node-driver-6fl7r\" (UID: \"0f5cb06c-9205-4333-be98-49c50e03a5ae\") " pod="calico-system/csi-node-driver-6fl7r" 
Sep 9 00:23:04.199287 kubelet[2831]: I0909 00:23:04.199227 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0f5cb06c-9205-4333-be98-49c50e03a5ae-varrun\") pod \"csi-node-driver-6fl7r\" (UID: \"0f5cb06c-9205-4333-be98-49c50e03a5ae\") " pod="calico-system/csi-node-driver-6fl7r" 
Sep 9 00:23:04.200236 kubelet[2831]: I0909 00:23:04.200116 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f5cb06c-9205-4333-be98-49c50e03a5ae-kubelet-dir\") pod \"csi-node-driver-6fl7r\" (UID: \"0f5cb06c-9205-4333-be98-49c50e03a5ae\") " pod="calico-system/csi-node-driver-6fl7r" 
Sep 9 00:23:04.201192 kubelet[2831]: I0909 00:23:04.201095 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f5cb06c-9205-4333-be98-49c50e03a5ae-registration-dir\") pod \"csi-node-driver-6fl7r\" (UID: \"0f5cb06c-9205-4333-be98-49c50e03a5ae\") " pod="calico-system/csi-node-driver-6fl7r" 
Sep 9 00:23:04.202012 kubelet[2831]: I0909 00:23:04.201910 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f5cb06c-9205-4333-be98-49c50e03a5ae-socket-dir\") pod \"csi-node-driver-6fl7r\" (UID: \"0f5cb06c-9205-4333-be98-49c50e03a5ae\") " pod="calico-system/csi-node-driver-6fl7r" 
Sep 9 00:23:04.217302 containerd[1591]: time="2025-09-09T00:23:04.217228423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47tmm,Uid:0857da15-8af9-42c6-a421-11c161d9d287,Namespace:calico-system,Attempt:0,}" 
Sep 9 00:23:04.242181 containerd[1591]: time="2025-09-09T00:23:04.242107923Z" level=info msg="connecting to shim 051cb5f3238f678b647b9bcb329dc2852df68e3cc16507943365cd9c81c5a291" address="unix:///run/containerd/s/4633fafa50bf51156056ca9d4b0608bd45fe7c5f2867e83f6e646ac3a6ba8c9d" namespace=k8s.io protocol=ttrpc version=3 
Sep 9 00:23:04.274562 systemd[1]: Started cri-containerd-051cb5f3238f678b647b9bcb329dc2852df68e3cc16507943365cd9c81c5a291.scope - libcontainer container 051cb5f3238f678b647b9bcb329dc2852df68e3cc16507943365cd9c81c5a291. 
Error: unexpected end of JSON input" Sep 9 00:23:04.312873 kubelet[2831]: E0909 00:23:04.312837 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.312873 kubelet[2831]: W0909 00:23:04.312851 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.312873 kubelet[2831]: E0909 00:23:04.312871 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:04.315839 kubelet[2831]: E0909 00:23:04.315718 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.315839 kubelet[2831]: W0909 00:23:04.315747 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.315839 kubelet[2831]: E0909 00:23:04.315777 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:04.316239 kubelet[2831]: E0909 00:23:04.316135 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.316239 kubelet[2831]: W0909 00:23:04.316147 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.316239 kubelet[2831]: E0909 00:23:04.316157 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:04.316623 kubelet[2831]: E0909 00:23:04.316581 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.316623 kubelet[2831]: W0909 00:23:04.316596 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.316623 kubelet[2831]: E0909 00:23:04.316607 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:04.317081 kubelet[2831]: E0909 00:23:04.317061 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.317150 kubelet[2831]: W0909 00:23:04.317074 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.317150 kubelet[2831]: E0909 00:23:04.317106 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:04.317767 kubelet[2831]: E0909 00:23:04.317737 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.317767 kubelet[2831]: W0909 00:23:04.317750 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.317767 kubelet[2831]: E0909 00:23:04.317761 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:04.318083 kubelet[2831]: E0909 00:23:04.318070 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.318120 kubelet[2831]: W0909 00:23:04.318082 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.318120 kubelet[2831]: E0909 00:23:04.318104 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:04.318422 containerd[1591]: time="2025-09-09T00:23:04.318351506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47tmm,Uid:0857da15-8af9-42c6-a421-11c161d9d287,Namespace:calico-system,Attempt:0,} returns sandbox id \"051cb5f3238f678b647b9bcb329dc2852df68e3cc16507943365cd9c81c5a291\"" Sep 9 00:23:04.319046 kubelet[2831]: E0909 00:23:04.319008 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.319046 kubelet[2831]: W0909 00:23:04.319026 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.319046 kubelet[2831]: E0909 00:23:04.319037 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:04.327227 kubelet[2831]: E0909 00:23:04.327157 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:04.327227 kubelet[2831]: W0909 00:23:04.327190 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:04.327227 kubelet[2831]: E0909 00:23:04.327234 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:05.889059 kubelet[2831]: E0909 00:23:05.888976 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fl7r" podUID="0f5cb06c-9205-4333-be98-49c50e03a5ae" Sep 9 00:23:06.226820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount675097843.mount: Deactivated successfully. 
Sep 9 00:23:07.080788 containerd[1591]: time="2025-09-09T00:23:07.080703026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:07.081491 containerd[1591]: time="2025-09-09T00:23:07.081430725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 00:23:07.082763 containerd[1591]: time="2025-09-09T00:23:07.082687657Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:07.084685 containerd[1591]: time="2025-09-09T00:23:07.084647623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:07.085370 containerd[1591]: time="2025-09-09T00:23:07.085315750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.116701266s" Sep 9 00:23:07.085370 containerd[1591]: time="2025-09-09T00:23:07.085358771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 00:23:07.086701 containerd[1591]: time="2025-09-09T00:23:07.086667522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:23:07.100613 containerd[1591]: time="2025-09-09T00:23:07.100558765Z" level=info msg="CreateContainer within sandbox \"c9e87c44abea87382fa53a27844d9274dc7784d5def837d550c6e3e331182a3a\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:23:07.111684 containerd[1591]: time="2025-09-09T00:23:07.111595816Z" level=info msg="Container b5ee1d78b78a99cdef32c326eee076781060fc207519fa2840f91d83b2f16cdd: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:23:07.120964 containerd[1591]: time="2025-09-09T00:23:07.120907398Z" level=info msg="CreateContainer within sandbox \"c9e87c44abea87382fa53a27844d9274dc7784d5def837d550c6e3e331182a3a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b5ee1d78b78a99cdef32c326eee076781060fc207519fa2840f91d83b2f16cdd\"" Sep 9 00:23:07.121821 containerd[1591]: time="2025-09-09T00:23:07.121769091Z" level=info msg="StartContainer for \"b5ee1d78b78a99cdef32c326eee076781060fc207519fa2840f91d83b2f16cdd\"" Sep 9 00:23:07.123328 containerd[1591]: time="2025-09-09T00:23:07.123299001Z" level=info msg="connecting to shim b5ee1d78b78a99cdef32c326eee076781060fc207519fa2840f91d83b2f16cdd" address="unix:///run/containerd/s/64da861d38960a1ea122a20561ef3d4f1289bed6455ce192533e080ede9498e2" protocol=ttrpc version=3 Sep 9 00:23:07.155436 systemd[1]: Started cri-containerd-b5ee1d78b78a99cdef32c326eee076781060fc207519fa2840f91d83b2f16cdd.scope - libcontainer container b5ee1d78b78a99cdef32c326eee076781060fc207519fa2840f91d83b2f16cdd. 
Sep 9 00:23:07.223020 containerd[1591]: time="2025-09-09T00:23:07.222909024Z" level=info msg="StartContainer for \"b5ee1d78b78a99cdef32c326eee076781060fc207519fa2840f91d83b2f16cdd\" returns successfully" Sep 9 00:23:07.888053 kubelet[2831]: E0909 00:23:07.887999 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fl7r" podUID="0f5cb06c-9205-4333-be98-49c50e03a5ae" Sep 9 00:23:07.959583 kubelet[2831]: E0909 00:23:07.959533 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:08.021359 kubelet[2831]: E0909 00:23:08.021300 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.021359 kubelet[2831]: W0909 00:23:08.021326 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.021359 kubelet[2831]: E0909 00:23:08.021350 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:08.021650 kubelet[2831]: E0909 00:23:08.021626 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.021650 kubelet[2831]: W0909 00:23:08.021638 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.021712 kubelet[2831]: E0909 00:23:08.021663 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:08.021896 kubelet[2831]: E0909 00:23:08.021867 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.021896 kubelet[2831]: W0909 00:23:08.021879 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.021896 kubelet[2831]: E0909 00:23:08.021887 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:08.022127 kubelet[2831]: E0909 00:23:08.022102 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.022127 kubelet[2831]: W0909 00:23:08.022113 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.022127 kubelet[2831]: E0909 00:23:08.022121 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:08.022365 kubelet[2831]: E0909 00:23:08.022339 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.022365 kubelet[2831]: W0909 00:23:08.022350 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.022365 kubelet[2831]: E0909 00:23:08.022358 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:08.022580 kubelet[2831]: E0909 00:23:08.022554 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.022580 kubelet[2831]: W0909 00:23:08.022565 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.022638 kubelet[2831]: E0909 00:23:08.022579 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:08.022802 kubelet[2831]: E0909 00:23:08.022780 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.022802 kubelet[2831]: W0909 00:23:08.022790 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.022802 kubelet[2831]: E0909 00:23:08.022798 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:08.022972 kubelet[2831]: E0909 00:23:08.022954 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.022972 kubelet[2831]: W0909 00:23:08.022964 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.022972 kubelet[2831]: E0909 00:23:08.022972 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:08.023165 kubelet[2831]: E0909 00:23:08.023137 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.023165 kubelet[2831]: W0909 00:23:08.023158 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.023165 kubelet[2831]: E0909 00:23:08.023167 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:08.023386 kubelet[2831]: E0909 00:23:08.023368 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.023386 kubelet[2831]: W0909 00:23:08.023378 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.023386 kubelet[2831]: E0909 00:23:08.023386 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:08.023600 kubelet[2831]: E0909 00:23:08.023577 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.023600 kubelet[2831]: W0909 00:23:08.023589 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.023600 kubelet[2831]: E0909 00:23:08.023600 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:08.023947 kubelet[2831]: E0909 00:23:08.023893 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.023947 kubelet[2831]: W0909 00:23:08.023925 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.023947 kubelet[2831]: E0909 00:23:08.023957 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:08.024321 kubelet[2831]: E0909 00:23:08.024304 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.024321 kubelet[2831]: W0909 00:23:08.024317 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.024398 kubelet[2831]: E0909 00:23:08.024341 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:08.024569 kubelet[2831]: E0909 00:23:08.024552 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.024569 kubelet[2831]: W0909 00:23:08.024564 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.024627 kubelet[2831]: E0909 00:23:08.024581 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:08.024823 kubelet[2831]: E0909 00:23:08.024807 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.024823 kubelet[2831]: W0909 00:23:08.024819 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.024877 kubelet[2831]: E0909 00:23:08.024828 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:08.034479 kubelet[2831]: E0909 00:23:08.034438 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.034479 kubelet[2831]: W0909 00:23:08.034458 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.034479 kubelet[2831]: E0909 00:23:08.034474 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:08.034779 kubelet[2831]: E0909 00:23:08.034756 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.034779 kubelet[2831]: W0909 00:23:08.034769 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.034779 kubelet[2831]: E0909 00:23:08.034778 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:08.035192 kubelet[2831]: E0909 00:23:08.035136 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.035192 kubelet[2831]: W0909 00:23:08.035168 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.035298 kubelet[2831]: E0909 00:23:08.035202 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:08.035510 kubelet[2831]: E0909 00:23:08.035483 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.035510 kubelet[2831]: W0909 00:23:08.035498 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.035510 kubelet[2831]: E0909 00:23:08.035508 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:08.035715 kubelet[2831]: E0909 00:23:08.035693 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.035715 kubelet[2831]: W0909 00:23:08.035704 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.035715 kubelet[2831]: E0909 00:23:08.035712 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:08.035969 kubelet[2831]: E0909 00:23:08.035946 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.035969 kubelet[2831]: W0909 00:23:08.035958 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.035969 kubelet[2831]: E0909 00:23:08.035967 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:23:08.036425 kubelet[2831]: E0909 00:23:08.036388 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.036476 kubelet[2831]: W0909 00:23:08.036430 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.036476 kubelet[2831]: E0909 00:23:08.036458 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:23:08.036710 kubelet[2831]: E0909 00:23:08.036690 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:23:08.036710 kubelet[2831]: W0909 00:23:08.036702 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:23:08.036710 kubelet[2831]: E0909 00:23:08.036712 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 9 00:23:08.036911 kubelet[2831]: E0909 00:23:08.036897 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:23:08.036911 kubelet[2831]: W0909 00:23:08.036907 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:23:08.036960 kubelet[2831]: E0909 00:23:08.036915 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:23:08.037102 kubelet[2831]: E0909 00:23:08.037086 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:23:08.037102 kubelet[2831]: W0909 00:23:08.037097 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:23:08.037158 kubelet[2831]: E0909 00:23:08.037104 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:23:08.037346 kubelet[2831]: E0909 00:23:08.037331 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:23:08.037346 kubelet[2831]: W0909 00:23:08.037342 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:23:08.037400 kubelet[2831]: E0909 00:23:08.037351 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:23:08.037561 kubelet[2831]: E0909 00:23:08.037543 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:23:08.037561 kubelet[2831]: W0909 00:23:08.037555 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:23:08.037626 kubelet[2831]: E0909 00:23:08.037565 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:23:08.037772 kubelet[2831]: E0909 00:23:08.037756 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:23:08.037772 kubelet[2831]: W0909 00:23:08.037767 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:23:08.037818 kubelet[2831]: E0909 00:23:08.037775 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:23:08.038303 kubelet[2831]: E0909 00:23:08.038224 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:23:08.038303 kubelet[2831]: W0909 00:23:08.038249 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:23:08.038303 kubelet[2831]: E0909 00:23:08.038288 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:23:08.038603 kubelet[2831]: E0909 00:23:08.038566 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:23:08.038603 kubelet[2831]: W0909 00:23:08.038586 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:23:08.038603 kubelet[2831]: E0909 00:23:08.038599 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:23:08.039032 kubelet[2831]: E0909 00:23:08.039006 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:23:08.039032 kubelet[2831]: W0909 00:23:08.039027 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:23:08.039119 kubelet[2831]: E0909 00:23:08.039039 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:23:08.039448 kubelet[2831]: E0909 00:23:08.039406 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:23:08.039448 kubelet[2831]: W0909 00:23:08.039422 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:23:08.039448 kubelet[2831]: E0909 00:23:08.039436 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:23:08.039732 kubelet[2831]: E0909 00:23:08.039664 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:23:08.039732 kubelet[2831]: W0909 00:23:08.039676 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:23:08.039732 kubelet[2831]: E0909 00:23:08.039694 2831 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:23:08.587367 containerd[1591]: time="2025-09-09T00:23:08.587285735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:08.588085 containerd[1591]: time="2025-09-09T00:23:08.588035757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 9 00:23:08.589537 containerd[1591]: time="2025-09-09T00:23:08.589506555Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:08.591321 containerd[1591]: time="2025-09-09T00:23:08.591292308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:08.591862 containerd[1591]: time="2025-09-09T00:23:08.591818797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.505117632s"
Sep 9 00:23:08.591923 containerd[1591]: time="2025-09-09T00:23:08.591864824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 9 00:23:08.598793 containerd[1591]: time="2025-09-09T00:23:08.598738921Z" level=info msg="CreateContainer within sandbox \"051cb5f3238f678b647b9bcb329dc2852df68e3cc16507943365cd9c81c5a291\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 9 00:23:08.612288 containerd[1591]: time="2025-09-09T00:23:08.611918058Z" level=info msg="Container c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:23:08.622131 containerd[1591]: time="2025-09-09T00:23:08.622070528Z" level=info msg="CreateContainer within sandbox \"051cb5f3238f678b647b9bcb329dc2852df68e3cc16507943365cd9c81c5a291\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f\""
Sep 9 00:23:08.622808 containerd[1591]: time="2025-09-09T00:23:08.622775714Z" level=info msg="StartContainer for \"c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f\""
Sep 9 00:23:08.624158 containerd[1591]: time="2025-09-09T00:23:08.624125934Z" level=info msg="connecting to shim c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f" address="unix:///run/containerd/s/4633fafa50bf51156056ca9d4b0608bd45fe7c5f2867e83f6e646ac3a6ba8c9d" protocol=ttrpc version=3
Sep 9 00:23:08.652560 systemd[1]: Started cri-containerd-c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f.scope - libcontainer container c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f.
Sep 9 00:23:08.706951 containerd[1591]: time="2025-09-09T00:23:08.706790465Z" level=info msg="StartContainer for \"c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f\" returns successfully"
Sep 9 00:23:08.724973 systemd[1]: cri-containerd-c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f.scope: Deactivated successfully.
Sep 9 00:23:08.725759 systemd[1]: cri-containerd-c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f.scope: Consumed 49ms CPU time, 6.5M memory peak, 4.6M written to disk.
Sep 9 00:23:08.729396 containerd[1591]: time="2025-09-09T00:23:08.729331654Z" level=info msg="received exit event container_id:\"c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f\" id:\"c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f\" pid:3528 exited_at:{seconds:1757377388 nanos:728666653}"
Sep 9 00:23:08.729605 containerd[1591]: time="2025-09-09T00:23:08.729421434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f\" id:\"c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f\" pid:3528 exited_at:{seconds:1757377388 nanos:728666653}"
Sep 9 00:23:08.757144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1f323b0c2fd9e2286814ec85757792c771bc037e2998b388028de2f7413ed4f-rootfs.mount: Deactivated successfully.
Sep 9 00:23:08.963874 kubelet[2831]: I0909 00:23:08.963205 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 00:23:08.963874 kubelet[2831]: E0909 00:23:08.963759 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:23:09.153646 kubelet[2831]: I0909 00:23:09.153575 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77784768b-wkjwf" podStartSLOduration=3.035341739 podStartE2EDuration="6.153555772s" podCreationTimestamp="2025-09-09 00:23:03 +0000 UTC" firstStartedPulling="2025-09-09 00:23:03.968244794 +0000 UTC m=+19.181143373" lastFinishedPulling="2025-09-09 00:23:07.086458827 +0000 UTC m=+22.299357406" observedRunningTime="2025-09-09 00:23:07.973011943 +0000 UTC m=+23.185910512" watchObservedRunningTime="2025-09-09 00:23:09.153555772 +0000 UTC m=+24.366454342"
Sep 9 00:23:09.888786 kubelet[2831]: E0909 00:23:09.888716 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fl7r" podUID="0f5cb06c-9205-4333-be98-49c50e03a5ae"
Sep 9 00:23:09.966554 containerd[1591]: time="2025-09-09T00:23:09.966504200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 9 00:23:11.888237 kubelet[2831]: E0909 00:23:11.888162 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fl7r" podUID="0f5cb06c-9205-4333-be98-49c50e03a5ae"
Sep 9 00:23:13.889050 kubelet[2831]: E0909 00:23:13.888966 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fl7r" podUID="0f5cb06c-9205-4333-be98-49c50e03a5ae"
Sep 9 00:23:14.817057 containerd[1591]: time="2025-09-09T00:23:14.816962509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:14.817724 containerd[1591]: time="2025-09-09T00:23:14.817685096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 9 00:23:14.819053 containerd[1591]: time="2025-09-09T00:23:14.818990477Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:14.821337 containerd[1591]: time="2025-09-09T00:23:14.821302332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:14.821916 containerd[1591]: time="2025-09-09T00:23:14.821880036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.855330148s"
Sep 9 00:23:14.821959 containerd[1591]: time="2025-09-09T00:23:14.821914020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 9 00:23:14.828751 containerd[1591]: time="2025-09-09T00:23:14.828671589Z" level=info msg="CreateContainer within sandbox \"051cb5f3238f678b647b9bcb329dc2852df68e3cc16507943365cd9c81c5a291\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 9 00:23:14.841436 containerd[1591]: time="2025-09-09T00:23:14.841366296Z" level=info msg="Container 0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:23:14.855534 containerd[1591]: time="2025-09-09T00:23:14.855462374Z" level=info msg="CreateContainer within sandbox \"051cb5f3238f678b647b9bcb329dc2852df68e3cc16507943365cd9c81c5a291\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f\""
Sep 9 00:23:14.857463 containerd[1591]: time="2025-09-09T00:23:14.857417475Z" level=info msg="StartContainer for \"0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f\""
Sep 9 00:23:14.859721 containerd[1591]: time="2025-09-09T00:23:14.859672482Z" level=info msg="connecting to shim 0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f" address="unix:///run/containerd/s/4633fafa50bf51156056ca9d4b0608bd45fe7c5f2867e83f6e646ac3a6ba8c9d" protocol=ttrpc version=3
Sep 9 00:23:14.888493 systemd[1]: Started cri-containerd-0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f.scope - libcontainer container 0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f.
Sep 9 00:23:14.953594 containerd[1591]: time="2025-09-09T00:23:14.953555578Z" level=info msg="StartContainer for \"0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f\" returns successfully"
Sep 9 00:23:15.889087 kubelet[2831]: E0909 00:23:15.889001 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fl7r" podUID="0f5cb06c-9205-4333-be98-49c50e03a5ae"
Sep 9 00:23:16.320428 containerd[1591]: time="2025-09-09T00:23:16.320152176Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:23:16.324646 systemd[1]: cri-containerd-0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f.scope: Deactivated successfully.
Sep 9 00:23:16.325002 systemd[1]: cri-containerd-0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f.scope: Consumed 697ms CPU time, 175.7M memory peak, 3M read from disk, 171.3M written to disk.
Sep 9 00:23:16.325972 containerd[1591]: time="2025-09-09T00:23:16.325912364Z" level=info msg="received exit event container_id:\"0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f\" id:\"0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f\" pid:3588 exited_at:{seconds:1757377396 nanos:325630650}"
Sep 9 00:23:16.326051 containerd[1591]: time="2025-09-09T00:23:16.326004528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f\" id:\"0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f\" pid:3588 exited_at:{seconds:1757377396 nanos:325630650}"
Sep 9 00:23:16.354480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0db97371c6572fa822c172ff6b8e6aefa35088f7f326cf4c7044dadf7378d26f-rootfs.mount: Deactivated successfully.
Sep 9 00:23:16.388168 kubelet[2831]: I0909 00:23:16.388116 2831 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 9 00:23:16.847069 systemd[1]: Created slice kubepods-besteffort-pod077c59c5_131e_4bd3_82b6_f6eb0e4199cc.slice - libcontainer container kubepods-besteffort-pod077c59c5_131e_4bd3_82b6_f6eb0e4199cc.slice.
Sep 9 00:23:16.856819 systemd[1]: Created slice kubepods-burstable-pod9a54da31_62cf_4aee_ba5d_e0a857a34c2b.slice - libcontainer container kubepods-burstable-pod9a54da31_62cf_4aee_ba5d_e0a857a34c2b.slice.
Sep 9 00:23:16.867012 systemd[1]: Created slice kubepods-besteffort-pod44a26592_5a84_447c_a0d8_345d0ec82cf5.slice - libcontainer container kubepods-besteffort-pod44a26592_5a84_447c_a0d8_345d0ec82cf5.slice.
Sep 9 00:23:16.874420 systemd[1]: Created slice kubepods-besteffort-pod6f0c2c7b_bc01_4f02_9a04_4a39dfe60e5e.slice - libcontainer container kubepods-besteffort-pod6f0c2c7b_bc01_4f02_9a04_4a39dfe60e5e.slice.
Sep 9 00:23:16.880597 systemd[1]: Created slice kubepods-burstable-podbabafc45_a3f4_4482_aee4_f7dae7ac27df.slice - libcontainer container kubepods-burstable-podbabafc45_a3f4_4482_aee4_f7dae7ac27df.slice.
Sep 9 00:23:16.886837 systemd[1]: Created slice kubepods-besteffort-podd9104726_3d48_43bc_a3ba_3286b6c4cf8b.slice - libcontainer container kubepods-besteffort-podd9104726_3d48_43bc_a3ba_3286b6c4cf8b.slice.
Sep 9 00:23:16.891521 systemd[1]: Created slice kubepods-besteffort-pod9eaed2fe_b762_453d_88ca_e4192df1f01f.slice - libcontainer container kubepods-besteffort-pod9eaed2fe_b762_453d_88ca_e4192df1f01f.slice.
Sep 9 00:23:16.936072 kubelet[2831]: I0909 00:23:16.935997 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9b7d\" (UniqueName: \"kubernetes.io/projected/44a26592-5a84-447c-a0d8-345d0ec82cf5-kube-api-access-p9b7d\") pod \"calico-apiserver-76bd4cd4c9-jcx2b\" (UID: \"44a26592-5a84-447c-a0d8-345d0ec82cf5\") " pod="calico-apiserver/calico-apiserver-76bd4cd4c9-jcx2b"
Sep 9 00:23:16.936693 kubelet[2831]: I0909 00:23:16.936140 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/44a26592-5a84-447c-a0d8-345d0ec82cf5-calico-apiserver-certs\") pod \"calico-apiserver-76bd4cd4c9-jcx2b\" (UID: \"44a26592-5a84-447c-a0d8-345d0ec82cf5\") " pod="calico-apiserver/calico-apiserver-76bd4cd4c9-jcx2b"
Sep 9 00:23:16.996611 containerd[1591]: time="2025-09-09T00:23:16.995607608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 9 00:23:17.037079 kubelet[2831]: I0909 00:23:17.037019 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eaed2fe-b762-453d-88ca-e4192df1f01f-whisker-backend-key-pair\") pod \"whisker-7dd7cfbccc-xjmx2\" (UID: \"9eaed2fe-b762-453d-88ca-e4192df1f01f\") " pod="calico-system/whisker-7dd7cfbccc-xjmx2"
Sep 9 00:23:17.037079 kubelet[2831]: I0909 00:23:17.037110 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e-goldmane-key-pair\") pod \"goldmane-54d579b49d-rc24j\" (UID: \"6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e\") " pod="calico-system/goldmane-54d579b49d-rc24j"
Sep 9 00:23:17.037417 kubelet[2831]: I0909 00:23:17.037129 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/077c59c5-131e-4bd3-82b6-f6eb0e4199cc-tigera-ca-bundle\") pod \"calico-kube-controllers-7b5bc6bd7-p9249\" (UID: \"077c59c5-131e-4bd3-82b6-f6eb0e4199cc\") " pod="calico-system/calico-kube-controllers-7b5bc6bd7-p9249"
Sep 9 00:23:17.037417 kubelet[2831]: I0909 00:23:17.037188 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e-config\") pod \"goldmane-54d579b49d-rc24j\" (UID: \"6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e\") " pod="calico-system/goldmane-54d579b49d-rc24j"
Sep 9 00:23:17.037417 kubelet[2831]: I0909 00:23:17.037213 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkb4h\" (UniqueName: \"kubernetes.io/projected/9eaed2fe-b762-453d-88ca-e4192df1f01f-kube-api-access-xkb4h\") pod \"whisker-7dd7cfbccc-xjmx2\" (UID: \"9eaed2fe-b762-453d-88ca-e4192df1f01f\") " pod="calico-system/whisker-7dd7cfbccc-xjmx2"
Sep 9 00:23:17.037854 kubelet[2831]: I0909 00:23:17.037797 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-rc24j\" (UID: \"6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e\") " pod="calico-system/goldmane-54d579b49d-rc24j"
Sep 9 00:23:17.037914 kubelet[2831]: I0909 00:23:17.037861 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd29k\" (UniqueName: \"kubernetes.io/projected/babafc45-a3f4-4482-aee4-f7dae7ac27df-kube-api-access-rd29k\") pod \"coredns-674b8bbfcf-45t44\" (UID: \"babafc45-a3f4-4482-aee4-f7dae7ac27df\") " pod="kube-system/coredns-674b8bbfcf-45t44"
Sep 9 00:23:17.037914 kubelet[2831]: I0909 00:23:17.037887 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfwgj\" (UniqueName: \"kubernetes.io/projected/077c59c5-131e-4bd3-82b6-f6eb0e4199cc-kube-api-access-vfwgj\") pod \"calico-kube-controllers-7b5bc6bd7-p9249\" (UID: \"077c59c5-131e-4bd3-82b6-f6eb0e4199cc\") " pod="calico-system/calico-kube-controllers-7b5bc6bd7-p9249"
Sep 9 00:23:17.037994 kubelet[2831]: I0909 00:23:17.037923 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a54da31-62cf-4aee-ba5d-e0a857a34c2b-config-volume\") pod \"coredns-674b8bbfcf-8q9lh\" (UID: \"9a54da31-62cf-4aee-ba5d-e0a857a34c2b\") " pod="kube-system/coredns-674b8bbfcf-8q9lh"
Sep 9 00:23:17.037994 kubelet[2831]: I0909 00:23:17.037950 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcrsj\" (UniqueName: \"kubernetes.io/projected/9a54da31-62cf-4aee-ba5d-e0a857a34c2b-kube-api-access-fcrsj\") pod \"coredns-674b8bbfcf-8q9lh\" (UID: \"9a54da31-62cf-4aee-ba5d-e0a857a34c2b\") " pod="kube-system/coredns-674b8bbfcf-8q9lh"
Sep 9 00:23:17.037994 kubelet[2831]: I0909 00:23:17.037972 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9104726-3d48-43bc-a3ba-3286b6c4cf8b-calico-apiserver-certs\") pod \"calico-apiserver-76bd4cd4c9-lh8kj\" (UID: \"d9104726-3d48-43bc-a3ba-3286b6c4cf8b\") " pod="calico-apiserver/calico-apiserver-76bd4cd4c9-lh8kj"
Sep 9 00:23:17.039156 kubelet[2831]: I0909 00:23:17.039102 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lvs8\" (UniqueName: \"kubernetes.io/projected/d9104726-3d48-43bc-a3ba-3286b6c4cf8b-kube-api-access-7lvs8\") pod \"calico-apiserver-76bd4cd4c9-lh8kj\" (UID: \"d9104726-3d48-43bc-a3ba-3286b6c4cf8b\") " pod="calico-apiserver/calico-apiserver-76bd4cd4c9-lh8kj"
Sep 9 00:23:17.039156 kubelet[2831]: I0909 00:23:17.039175 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eaed2fe-b762-453d-88ca-e4192df1f01f-whisker-ca-bundle\") pod \"whisker-7dd7cfbccc-xjmx2\" (UID: \"9eaed2fe-b762-453d-88ca-e4192df1f01f\") " pod="calico-system/whisker-7dd7cfbccc-xjmx2"
Sep 9 00:23:17.039443 kubelet[2831]: I0909 00:23:17.039211 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bb6x\" (UniqueName: \"kubernetes.io/projected/6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e-kube-api-access-5bb6x\") pod \"goldmane-54d579b49d-rc24j\" (UID: \"6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e\") " pod="calico-system/goldmane-54d579b49d-rc24j"
Sep 9 00:23:17.039530 kubelet[2831]: I0909 00:23:17.039508 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/babafc45-a3f4-4482-aee4-f7dae7ac27df-config-volume\") pod \"coredns-674b8bbfcf-45t44\" (UID: \"babafc45-a3f4-4482-aee4-f7dae7ac27df\") " pod="kube-system/coredns-674b8bbfcf-45t44"
Sep 9 00:23:17.163123 kubelet[2831]: E0909 00:23:17.162941 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:23:17.163826 containerd[1591]: time="2025-09-09T00:23:17.163699591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8q9lh,Uid:9a54da31-62cf-4aee-ba5d-e0a857a34c2b,Namespace:kube-system,Attempt:0,}"
Sep 9 00:23:17.172661 containerd[1591]: time="2025-09-09T00:23:17.172530214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bd4cd4c9-jcx2b,Uid:44a26592-5a84-447c-a0d8-345d0ec82cf5,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 00:23:17.178559 containerd[1591]: time="2025-09-09T00:23:17.178513131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rc24j,Uid:6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e,Namespace:calico-system,Attempt:0,}"
Sep 9 00:23:17.184469 kubelet[2831]: E0909 00:23:17.184406 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:23:17.191195 containerd[1591]: time="2025-09-09T00:23:17.191066516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-45t44,Uid:babafc45-a3f4-4482-aee4-f7dae7ac27df,Namespace:kube-system,Attempt:0,}"
Sep 9 00:23:17.191461 containerd[1591]: time="2025-09-09T00:23:17.191371653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bd4cd4c9-lh8kj,Uid:d9104726-3d48-43bc-a3ba-3286b6c4cf8b,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 00:23:17.195231 containerd[1591]: time="2025-09-09T00:23:17.195187762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dd7cfbccc-xjmx2,Uid:9eaed2fe-b762-453d-88ca-e4192df1f01f,Namespace:calico-system,Attempt:0,}"
Sep 9 00:23:17.284163 containerd[1591]: time="2025-09-09T00:23:17.284073191Z" level=error msg="Failed to destroy network for sandbox \"770022dd56e483a7485ef555e68df2ee0a1d4d366cd03a7fab95f00b456cf991\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.285448 containerd[1591]: time="2025-09-09T00:23:17.285395312Z" level=error msg="Failed to destroy network for sandbox \"5006a5f7a769c639209c1078886c532cd2d3868bb7688a3223aa7d8e34d1f2be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.308157 containerd[1591]: time="2025-09-09T00:23:17.308058700Z" level=error msg="Failed to destroy network for sandbox \"6d8d9bd30481a7605eb1b2925fac1f8799728431272d51a81bcc2ae85b66597a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.311447 containerd[1591]: time="2025-09-09T00:23:17.311246410Z" level=error msg="Failed to destroy network for sandbox \"e1503e2662b2703f73e59f57d09e37273be439e870a92454919790559ba27d27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.323247 containerd[1591]: time="2025-09-09T00:23:17.323070535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8q9lh,Uid:9a54da31-62cf-4aee-ba5d-e0a857a34c2b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"770022dd56e483a7485ef555e68df2ee0a1d4d366cd03a7fab95f00b456cf991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.323247 containerd[1591]: time="2025-09-09T00:23:17.323153061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bd4cd4c9-lh8kj,Uid:d9104726-3d48-43bc-a3ba-3286b6c4cf8b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d8d9bd30481a7605eb1b2925fac1f8799728431272d51a81bcc2ae85b66597a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.323247 containerd[1591]: time="2025-09-09T00:23:17.323197796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bd4cd4c9-jcx2b,Uid:44a26592-5a84-447c-a0d8-345d0ec82cf5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5006a5f7a769c639209c1078886c532cd2d3868bb7688a3223aa7d8e34d1f2be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.323832 containerd[1591]: time="2025-09-09T00:23:17.323163281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-45t44,Uid:babafc45-a3f4-4482-aee4-f7dae7ac27df,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1503e2662b2703f73e59f57d09e37273be439e870a92454919790559ba27d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.323832 containerd[1591]: time="2025-09-09T00:23:17.323328603Z" level=error msg="Failed to destroy network for sandbox \"1808b3a26a817f42bc295a901e913d11158479055bee924d598fee453007e766\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.323832 containerd[1591]: time="2025-09-09T00:23:17.323329846Z" level=error msg="Failed to destroy network for sandbox \"cd16c444ee4e445d393cb0955846d18a6ac775482386edcdbbdf7424bf0022f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.324872 containerd[1591]: time="2025-09-09T00:23:17.324831556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rc24j,Uid:6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1808b3a26a817f42bc295a901e913d11158479055bee924d598fee453007e766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.325866 containerd[1591]: time="2025-09-09T00:23:17.325828843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dd7cfbccc-xjmx2,Uid:9eaed2fe-b762-453d-88ca-e4192df1f01f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd16c444ee4e445d393cb0955846d18a6ac775482386edcdbbdf7424bf0022f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.338317 kubelet[2831]: E0909 00:23:17.338182 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd16c444ee4e445d393cb0955846d18a6ac775482386edcdbbdf7424bf0022f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.338317 kubelet[2831]: E0909 00:23:17.338308 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1503e2662b2703f73e59f57d09e37273be439e870a92454919790559ba27d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.338426 kubelet[2831]: E0909 00:23:17.338367 2831 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd16c444ee4e445d393cb0955846d18a6ac775482386edcdbbdf7424bf0022f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dd7cfbccc-xjmx2"
Sep 9 00:23:17.338426 kubelet[2831]: E0909 00:23:17.338396 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1808b3a26a817f42bc295a901e913d11158479055bee924d598fee453007e766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:23:17.338426 kubelet[2831]: E0909 00:23:17.338416 2831 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1808b3a26a817f42bc295a901e913d11158479055bee924d598fee453007e766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-rc24j"
Sep 9 00:23:17.338502 kubelet[2831]: E0909 00:23:17.338373 2831 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1503e2662b2703f73e59f57d09e37273be439e870a92454919790559ba27d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-45t44"
Sep 9 00:23:17.338502 kubelet[2831]: E0909 00:23:17.338447 2831 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1808b3a26a817f42bc295a901e913d11158479055bee924d598fee453007e766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-rc24j"
Sep 9 00:23:17.338502 kubelet[2831]: E0909 00:23:17.338441 2831 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1503e2662b2703f73e59f57d09e37273be439e870a92454919790559ba27d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-45t44"
Sep 9 00:23:17.338580 kubelet[2831]: E0909 00:23:17.338524 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-rc24j_calico-system(6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-rc24j_calico-system(6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1808b3a26a817f42bc295a901e913d11158479055bee924d598fee453007e766\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-rc24j" podUID="6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e" Sep 9 00:23:17.338580 kubelet[2831]: E0909 00:23:17.338547 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-45t44_kube-system(babafc45-a3f4-4482-aee4-f7dae7ac27df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-45t44_kube-system(babafc45-a3f4-4482-aee4-f7dae7ac27df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1503e2662b2703f73e59f57d09e37273be439e870a92454919790559ba27d27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-45t44" podUID="babafc45-a3f4-4482-aee4-f7dae7ac27df" Sep 9 00:23:17.338659 kubelet[2831]: E0909 00:23:17.338238 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d8d9bd30481a7605eb1b2925fac1f8799728431272d51a81bcc2ae85b66597a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:23:17.338659 kubelet[2831]: E0909 00:23:17.338607 2831 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d8d9bd30481a7605eb1b2925fac1f8799728431272d51a81bcc2ae85b66597a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bd4cd4c9-lh8kj" Sep 9 00:23:17.338659 kubelet[2831]: E0909 00:23:17.338625 2831 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d8d9bd30481a7605eb1b2925fac1f8799728431272d51a81bcc2ae85b66597a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bd4cd4c9-lh8kj" Sep 9 00:23:17.338802 kubelet[2831]: E0909 00:23:17.338666 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bd4cd4c9-lh8kj_calico-apiserver(d9104726-3d48-43bc-a3ba-3286b6c4cf8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bd4cd4c9-lh8kj_calico-apiserver(d9104726-3d48-43bc-a3ba-3286b6c4cf8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d8d9bd30481a7605eb1b2925fac1f8799728431272d51a81bcc2ae85b66597a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76bd4cd4c9-lh8kj" podUID="d9104726-3d48-43bc-a3ba-3286b6c4cf8b" Sep 9 00:23:17.338802 kubelet[2831]: E0909 00:23:17.338413 2831 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd16c444ee4e445d393cb0955846d18a6ac775482386edcdbbdf7424bf0022f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dd7cfbccc-xjmx2" Sep 9 00:23:17.338802 kubelet[2831]: E0909 00:23:17.338720 2831 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7dd7cfbccc-xjmx2_calico-system(9eaed2fe-b762-453d-88ca-e4192df1f01f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7dd7cfbccc-xjmx2_calico-system(9eaed2fe-b762-453d-88ca-e4192df1f01f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd16c444ee4e445d393cb0955846d18a6ac775482386edcdbbdf7424bf0022f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7dd7cfbccc-xjmx2" podUID="9eaed2fe-b762-453d-88ca-e4192df1f01f" Sep 9 00:23:17.338901 kubelet[2831]: E0909 00:23:17.338195 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"770022dd56e483a7485ef555e68df2ee0a1d4d366cd03a7fab95f00b456cf991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:23:17.338931 kubelet[2831]: E0909 00:23:17.338185 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5006a5f7a769c639209c1078886c532cd2d3868bb7688a3223aa7d8e34d1f2be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:23:17.339050 kubelet[2831]: E0909 00:23:17.338918 2831 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"770022dd56e483a7485ef555e68df2ee0a1d4d366cd03a7fab95f00b456cf991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8q9lh" Sep 9 00:23:17.339094 kubelet[2831]: E0909 00:23:17.339063 2831 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"770022dd56e483a7485ef555e68df2ee0a1d4d366cd03a7fab95f00b456cf991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8q9lh" Sep 9 00:23:17.339198 kubelet[2831]: E0909 00:23:17.339163 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8q9lh_kube-system(9a54da31-62cf-4aee-ba5d-e0a857a34c2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8q9lh_kube-system(9a54da31-62cf-4aee-ba5d-e0a857a34c2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"770022dd56e483a7485ef555e68df2ee0a1d4d366cd03a7fab95f00b456cf991\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8q9lh" podUID="9a54da31-62cf-4aee-ba5d-e0a857a34c2b" Sep 9 00:23:17.339309 kubelet[2831]: E0909 00:23:17.339222 2831 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5006a5f7a769c639209c1078886c532cd2d3868bb7688a3223aa7d8e34d1f2be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bd4cd4c9-jcx2b" Sep 9 00:23:17.339309 kubelet[2831]: E0909 00:23:17.339239 2831 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5006a5f7a769c639209c1078886c532cd2d3868bb7688a3223aa7d8e34d1f2be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bd4cd4c9-jcx2b" Sep 9 00:23:17.339372 kubelet[2831]: E0909 00:23:17.339302 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bd4cd4c9-jcx2b_calico-apiserver(44a26592-5a84-447c-a0d8-345d0ec82cf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bd4cd4c9-jcx2b_calico-apiserver(44a26592-5a84-447c-a0d8-345d0ec82cf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5006a5f7a769c639209c1078886c532cd2d3868bb7688a3223aa7d8e34d1f2be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76bd4cd4c9-jcx2b" podUID="44a26592-5a84-447c-a0d8-345d0ec82cf5" Sep 9 00:23:17.455587 containerd[1591]: time="2025-09-09T00:23:17.455426170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5bc6bd7-p9249,Uid:077c59c5-131e-4bd3-82b6-f6eb0e4199cc,Namespace:calico-system,Attempt:0,}" Sep 9 00:23:17.516044 containerd[1591]: time="2025-09-09T00:23:17.515966032Z" level=error msg="Failed to destroy network for sandbox \"5755409f06a8d9611beeede737608c99235e4dfed11022679b7467b45e00c119\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:23:17.517566 containerd[1591]: time="2025-09-09T00:23:17.517517958Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5bc6bd7-p9249,Uid:077c59c5-131e-4bd3-82b6-f6eb0e4199cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5755409f06a8d9611beeede737608c99235e4dfed11022679b7467b45e00c119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:23:17.517954 kubelet[2831]: E0909 00:23:17.517888 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5755409f06a8d9611beeede737608c99235e4dfed11022679b7467b45e00c119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:23:17.518017 kubelet[2831]: E0909 00:23:17.517995 2831 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5755409f06a8d9611beeede737608c99235e4dfed11022679b7467b45e00c119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b5bc6bd7-p9249" Sep 9 00:23:17.518045 kubelet[2831]: E0909 00:23:17.518025 2831 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5755409f06a8d9611beeede737608c99235e4dfed11022679b7467b45e00c119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b5bc6bd7-p9249" Sep 9 00:23:17.518167 kubelet[2831]: 
E0909 00:23:17.518111 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b5bc6bd7-p9249_calico-system(077c59c5-131e-4bd3-82b6-f6eb0e4199cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b5bc6bd7-p9249_calico-system(077c59c5-131e-4bd3-82b6-f6eb0e4199cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5755409f06a8d9611beeede737608c99235e4dfed11022679b7467b45e00c119\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b5bc6bd7-p9249" podUID="077c59c5-131e-4bd3-82b6-f6eb0e4199cc" Sep 9 00:23:17.519233 systemd[1]: run-netns-cni\x2d9498e834\x2dbbb4\x2d3585\x2d454f\x2de9f9fcdfe01c.mount: Deactivated successfully. Sep 9 00:23:17.895413 systemd[1]: Created slice kubepods-besteffort-pod0f5cb06c_9205_4333_be98_49c50e03a5ae.slice - libcontainer container kubepods-besteffort-pod0f5cb06c_9205_4333_be98_49c50e03a5ae.slice. 
Sep 9 00:23:17.897988 containerd[1591]: time="2025-09-09T00:23:17.897937964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6fl7r,Uid:0f5cb06c-9205-4333-be98-49c50e03a5ae,Namespace:calico-system,Attempt:0,}" Sep 9 00:23:17.946740 containerd[1591]: time="2025-09-09T00:23:17.946674110Z" level=error msg="Failed to destroy network for sandbox \"68d46c51a154b28901d9771279c48dee4089e342a09fad9b4f5741d6deefd90f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:23:17.948093 containerd[1591]: time="2025-09-09T00:23:17.948041045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6fl7r,Uid:0f5cb06c-9205-4333-be98-49c50e03a5ae,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68d46c51a154b28901d9771279c48dee4089e342a09fad9b4f5741d6deefd90f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:23:17.948364 kubelet[2831]: E0909 00:23:17.948306 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68d46c51a154b28901d9771279c48dee4089e342a09fad9b4f5741d6deefd90f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:23:17.948738 kubelet[2831]: E0909 00:23:17.948375 2831 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68d46c51a154b28901d9771279c48dee4089e342a09fad9b4f5741d6deefd90f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6fl7r" Sep 9 00:23:17.948738 kubelet[2831]: E0909 00:23:17.948402 2831 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68d46c51a154b28901d9771279c48dee4089e342a09fad9b4f5741d6deefd90f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6fl7r" Sep 9 00:23:17.948738 kubelet[2831]: E0909 00:23:17.948459 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6fl7r_calico-system(0f5cb06c-9205-4333-be98-49c50e03a5ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6fl7r_calico-system(0f5cb06c-9205-4333-be98-49c50e03a5ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68d46c51a154b28901d9771279c48dee4089e342a09fad9b4f5741d6deefd90f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6fl7r" podUID="0f5cb06c-9205-4333-be98-49c50e03a5ae" Sep 9 00:23:17.949597 systemd[1]: run-netns-cni\x2dbfa17208\x2d9072\x2d434c\x2d6414\x2d85caac5e1e78.mount: Deactivated successfully. 
Sep 9 00:23:23.513590 kubelet[2831]: I0909 00:23:23.513492 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:23:23.514467 kubelet[2831]: E0909 00:23:23.514297 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:24.011435 kubelet[2831]: E0909 00:23:24.011381 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:24.927082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1750742180.mount: Deactivated successfully. Sep 9 00:23:26.463619 containerd[1591]: time="2025-09-09T00:23:26.463530564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:26.501529 containerd[1591]: time="2025-09-09T00:23:26.501431732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 00:23:26.549423 containerd[1591]: time="2025-09-09T00:23:26.549329371Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:26.552048 containerd[1591]: time="2025-09-09T00:23:26.552020465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:26.553111 containerd[1591]: time="2025-09-09T00:23:26.552862465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.555959827s" Sep 9 00:23:26.553111 containerd[1591]: time="2025-09-09T00:23:26.553051582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 00:23:26.577412 containerd[1591]: time="2025-09-09T00:23:26.577348073Z" level=info msg="CreateContainer within sandbox \"051cb5f3238f678b647b9bcb329dc2852df68e3cc16507943365cd9c81c5a291\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:23:26.589867 containerd[1591]: time="2025-09-09T00:23:26.589791296Z" level=info msg="Container 359412b840ffd04db4fadff63c4c8d9c66d17312fad0b0743d32ab1cf3711835: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:23:26.605297 containerd[1591]: time="2025-09-09T00:23:26.605214117Z" level=info msg="CreateContainer within sandbox \"051cb5f3238f678b647b9bcb329dc2852df68e3cc16507943365cd9c81c5a291\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"359412b840ffd04db4fadff63c4c8d9c66d17312fad0b0743d32ab1cf3711835\"" Sep 9 00:23:26.605924 containerd[1591]: time="2025-09-09T00:23:26.605854266Z" level=info msg="StartContainer for \"359412b840ffd04db4fadff63c4c8d9c66d17312fad0b0743d32ab1cf3711835\"" Sep 9 00:23:26.607919 containerd[1591]: time="2025-09-09T00:23:26.607875284Z" level=info msg="connecting to shim 359412b840ffd04db4fadff63c4c8d9c66d17312fad0b0743d32ab1cf3711835" address="unix:///run/containerd/s/4633fafa50bf51156056ca9d4b0608bd45fe7c5f2867e83f6e646ac3a6ba8c9d" protocol=ttrpc version=3 Sep 9 00:23:26.644503 systemd[1]: Started cri-containerd-359412b840ffd04db4fadff63c4c8d9c66d17312fad0b0743d32ab1cf3711835.scope - libcontainer container 359412b840ffd04db4fadff63c4c8d9c66d17312fad0b0743d32ab1cf3711835. 
Sep 9 00:23:26.702461 containerd[1591]: time="2025-09-09T00:23:26.702410402Z" level=info msg="StartContainer for \"359412b840ffd04db4fadff63c4c8d9c66d17312fad0b0743d32ab1cf3711835\" returns successfully" Sep 9 00:23:26.787839 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:23:26.788446 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 9 00:23:27.017768 kubelet[2831]: I0909 00:23:27.017705 2831 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eaed2fe-b762-453d-88ca-e4192df1f01f-whisker-backend-key-pair\") pod \"9eaed2fe-b762-453d-88ca-e4192df1f01f\" (UID: \"9eaed2fe-b762-453d-88ca-e4192df1f01f\") " Sep 9 00:23:27.017768 kubelet[2831]: I0909 00:23:27.017763 2831 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkb4h\" (UniqueName: \"kubernetes.io/projected/9eaed2fe-b762-453d-88ca-e4192df1f01f-kube-api-access-xkb4h\") pod \"9eaed2fe-b762-453d-88ca-e4192df1f01f\" (UID: \"9eaed2fe-b762-453d-88ca-e4192df1f01f\") " Sep 9 00:23:27.018352 kubelet[2831]: I0909 00:23:27.017788 2831 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eaed2fe-b762-453d-88ca-e4192df1f01f-whisker-ca-bundle\") pod \"9eaed2fe-b762-453d-88ca-e4192df1f01f\" (UID: \"9eaed2fe-b762-453d-88ca-e4192df1f01f\") " Sep 9 00:23:27.018494 kubelet[2831]: I0909 00:23:27.018465 2831 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eaed2fe-b762-453d-88ca-e4192df1f01f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9eaed2fe-b762-453d-88ca-e4192df1f01f" (UID: "9eaed2fe-b762-453d-88ca-e4192df1f01f"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:23:27.023972 kubelet[2831]: I0909 00:23:27.023895 2831 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eaed2fe-b762-453d-88ca-e4192df1f01f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9eaed2fe-b762-453d-88ca-e4192df1f01f" (UID: "9eaed2fe-b762-453d-88ca-e4192df1f01f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:23:27.025329 kubelet[2831]: I0909 00:23:27.025286 2831 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eaed2fe-b762-453d-88ca-e4192df1f01f-kube-api-access-xkb4h" (OuterVolumeSpecName: "kube-api-access-xkb4h") pod "9eaed2fe-b762-453d-88ca-e4192df1f01f" (UID: "9eaed2fe-b762-453d-88ca-e4192df1f01f"). InnerVolumeSpecName "kube-api-access-xkb4h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:23:27.046601 systemd[1]: Removed slice kubepods-besteffort-pod9eaed2fe_b762_453d_88ca_e4192df1f01f.slice - libcontainer container kubepods-besteffort-pod9eaed2fe_b762_453d_88ca_e4192df1f01f.slice. 
Sep 9 00:23:27.073385 kubelet[2831]: I0909 00:23:27.072342 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-47tmm" podStartSLOduration=1.838656901 podStartE2EDuration="24.072296047s" podCreationTimestamp="2025-09-09 00:23:03 +0000 UTC" firstStartedPulling="2025-09-09 00:23:04.320347973 +0000 UTC m=+19.533246542" lastFinishedPulling="2025-09-09 00:23:26.553987119 +0000 UTC m=+41.766885688" observedRunningTime="2025-09-09 00:23:27.068301744 +0000 UTC m=+42.281200323" watchObservedRunningTime="2025-09-09 00:23:27.072296047 +0000 UTC m=+42.285194616" Sep 9 00:23:27.119100 kubelet[2831]: I0909 00:23:27.119040 2831 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eaed2fe-b762-453d-88ca-e4192df1f01f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:27.119100 kubelet[2831]: I0909 00:23:27.119078 2831 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xkb4h\" (UniqueName: \"kubernetes.io/projected/9eaed2fe-b762-453d-88ca-e4192df1f01f-kube-api-access-xkb4h\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:27.119100 kubelet[2831]: I0909 00:23:27.119086 2831 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eaed2fe-b762-453d-88ca-e4192df1f01f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:27.148418 systemd[1]: Created slice kubepods-besteffort-podaf518883_9b7b_40a9_bc40_3c589f745526.slice - libcontainer container kubepods-besteffort-podaf518883_9b7b_40a9_bc40_3c589f745526.slice. 
Sep 9 00:23:27.177135 containerd[1591]: time="2025-09-09T00:23:27.177075578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"359412b840ffd04db4fadff63c4c8d9c66d17312fad0b0743d32ab1cf3711835\" id:\"c99994c771bf205a0eb0ee92178283cbcb627a8ab2c309161705e43807505182\" pid:3968 exit_status:1 exited_at:{seconds:1757377407 nanos:176566868}" Sep 9 00:23:27.219711 kubelet[2831]: I0909 00:23:27.219644 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgjrb\" (UniqueName: \"kubernetes.io/projected/af518883-9b7b-40a9-bc40-3c589f745526-kube-api-access-lgjrb\") pod \"whisker-74cb56d88-wnpgc\" (UID: \"af518883-9b7b-40a9-bc40-3c589f745526\") " pod="calico-system/whisker-74cb56d88-wnpgc" Sep 9 00:23:27.219711 kubelet[2831]: I0909 00:23:27.219692 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af518883-9b7b-40a9-bc40-3c589f745526-whisker-ca-bundle\") pod \"whisker-74cb56d88-wnpgc\" (UID: \"af518883-9b7b-40a9-bc40-3c589f745526\") " pod="calico-system/whisker-74cb56d88-wnpgc" Sep 9 00:23:27.219711 kubelet[2831]: I0909 00:23:27.219720 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/af518883-9b7b-40a9-bc40-3c589f745526-whisker-backend-key-pair\") pod \"whisker-74cb56d88-wnpgc\" (UID: \"af518883-9b7b-40a9-bc40-3c589f745526\") " pod="calico-system/whisker-74cb56d88-wnpgc" Sep 9 00:23:27.451938 containerd[1591]: time="2025-09-09T00:23:27.451857638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74cb56d88-wnpgc,Uid:af518883-9b7b-40a9-bc40-3c589f745526,Namespace:calico-system,Attempt:0,}" Sep 9 00:23:27.565421 systemd[1]: 
var-lib-kubelet-pods-9eaed2fe\x2db762\x2d453d\x2d88ca\x2de4192df1f01f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxkb4h.mount: Deactivated successfully. Sep 9 00:23:27.565818 systemd[1]: var-lib-kubelet-pods-9eaed2fe\x2db762\x2d453d\x2d88ca\x2de4192df1f01f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 00:23:27.986520 systemd-networkd[1522]: cali396f0bb2e57: Link UP Sep 9 00:23:27.987140 systemd-networkd[1522]: cali396f0bb2e57: Gained carrier Sep 9 00:23:28.119339 containerd[1591]: 2025-09-09 00:23:27.481 [INFO][3987] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:23:28.119339 containerd[1591]: 2025-09-09 00:23:27.503 [INFO][3987] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--74cb56d88--wnpgc-eth0 whisker-74cb56d88- calico-system af518883-9b7b-40a9-bc40-3c589f745526 972 0 2025-09-09 00:23:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74cb56d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-74cb56d88-wnpgc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali396f0bb2e57 [] [] }} ContainerID="0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" Namespace="calico-system" Pod="whisker-74cb56d88-wnpgc" WorkloadEndpoint="localhost-k8s-whisker--74cb56d88--wnpgc-" Sep 9 00:23:28.119339 containerd[1591]: 2025-09-09 00:23:27.503 [INFO][3987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" Namespace="calico-system" Pod="whisker-74cb56d88-wnpgc" WorkloadEndpoint="localhost-k8s-whisker--74cb56d88--wnpgc-eth0" Sep 9 00:23:28.119339 containerd[1591]: 2025-09-09 00:23:27.588 [INFO][4001] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" HandleID="k8s-pod-network.0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" Workload="localhost-k8s-whisker--74cb56d88--wnpgc-eth0" Sep 9 00:23:28.119974 containerd[1591]: 2025-09-09 00:23:27.590 [INFO][4001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" HandleID="k8s-pod-network.0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" Workload="localhost-k8s-whisker--74cb56d88--wnpgc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00068e120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-74cb56d88-wnpgc", "timestamp":"2025-09-09 00:23:27.588622576 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:23:28.119974 containerd[1591]: 2025-09-09 00:23:27.590 [INFO][4001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:23:28.119974 containerd[1591]: 2025-09-09 00:23:27.590 [INFO][4001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:23:28.119974 containerd[1591]: 2025-09-09 00:23:27.590 [INFO][4001] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:23:28.119974 containerd[1591]: 2025-09-09 00:23:27.611 [INFO][4001] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" host="localhost" Sep 9 00:23:28.119974 containerd[1591]: 2025-09-09 00:23:27.618 [INFO][4001] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:23:28.119974 containerd[1591]: 2025-09-09 00:23:27.623 [INFO][4001] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:23:28.119974 containerd[1591]: 2025-09-09 00:23:27.625 [INFO][4001] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:23:28.119974 containerd[1591]: 2025-09-09 00:23:27.627 [INFO][4001] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:23:28.119974 containerd[1591]: 2025-09-09 00:23:27.627 [INFO][4001] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" host="localhost" Sep 9 00:23:28.120217 containerd[1591]: 2025-09-09 00:23:27.628 [INFO][4001] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b Sep 9 00:23:28.120217 containerd[1591]: 2025-09-09 00:23:27.816 [INFO][4001] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" host="localhost" Sep 9 00:23:28.120217 containerd[1591]: 2025-09-09 00:23:27.974 [INFO][4001] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" host="localhost" Sep 9 00:23:28.120217 containerd[1591]: 2025-09-09 00:23:27.974 [INFO][4001] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" host="localhost" Sep 9 00:23:28.120217 containerd[1591]: 2025-09-09 00:23:27.974 [INFO][4001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:23:28.120217 containerd[1591]: 2025-09-09 00:23:27.974 [INFO][4001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" HandleID="k8s-pod-network.0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" Workload="localhost-k8s-whisker--74cb56d88--wnpgc-eth0" Sep 9 00:23:28.120384 containerd[1591]: 2025-09-09 00:23:27.978 [INFO][3987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" Namespace="calico-system" Pod="whisker-74cb56d88-wnpgc" WorkloadEndpoint="localhost-k8s-whisker--74cb56d88--wnpgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74cb56d88--wnpgc-eth0", GenerateName:"whisker-74cb56d88-", Namespace:"calico-system", SelfLink:"", UID:"af518883-9b7b-40a9-bc40-3c589f745526", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74cb56d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-74cb56d88-wnpgc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali396f0bb2e57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:23:28.120384 containerd[1591]: 2025-09-09 00:23:27.978 [INFO][3987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" Namespace="calico-system" Pod="whisker-74cb56d88-wnpgc" WorkloadEndpoint="localhost-k8s-whisker--74cb56d88--wnpgc-eth0" Sep 9 00:23:28.120483 containerd[1591]: 2025-09-09 00:23:27.978 [INFO][3987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali396f0bb2e57 ContainerID="0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" Namespace="calico-system" Pod="whisker-74cb56d88-wnpgc" WorkloadEndpoint="localhost-k8s-whisker--74cb56d88--wnpgc-eth0" Sep 9 00:23:28.120483 containerd[1591]: 2025-09-09 00:23:27.986 [INFO][3987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" Namespace="calico-system" Pod="whisker-74cb56d88-wnpgc" WorkloadEndpoint="localhost-k8s-whisker--74cb56d88--wnpgc-eth0" Sep 9 00:23:28.120528 containerd[1591]: 2025-09-09 00:23:27.988 [INFO][3987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" Namespace="calico-system" Pod="whisker-74cb56d88-wnpgc" 
WorkloadEndpoint="localhost-k8s-whisker--74cb56d88--wnpgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74cb56d88--wnpgc-eth0", GenerateName:"whisker-74cb56d88-", Namespace:"calico-system", SelfLink:"", UID:"af518883-9b7b-40a9-bc40-3c589f745526", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74cb56d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b", Pod:"whisker-74cb56d88-wnpgc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali396f0bb2e57", MAC:"aa:10:39:e2:de:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:23:28.120593 containerd[1591]: 2025-09-09 00:23:28.105 [INFO][3987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" Namespace="calico-system" Pod="whisker-74cb56d88-wnpgc" WorkloadEndpoint="localhost-k8s-whisker--74cb56d88--wnpgc-eth0" Sep 9 00:23:28.153740 containerd[1591]: time="2025-09-09T00:23:28.153483587Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"359412b840ffd04db4fadff63c4c8d9c66d17312fad0b0743d32ab1cf3711835\" id:\"b8acb89f4883e38e07a5d109802e3b83aad44779ef08547802e3d2455e2757fb\" pid:4023 exit_status:1 exited_at:{seconds:1757377408 nanos:151512756}" Sep 9 00:23:28.616099 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:33766.service - OpenSSH per-connection server daemon (10.0.0.1:33766). Sep 9 00:23:28.651584 containerd[1591]: time="2025-09-09T00:23:28.651528888Z" level=info msg="connecting to shim 0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b" address="unix:///run/containerd/s/56c02bf157132de7043a8fb2840e5471a3fb2114d3f3c34af5ae63685f08f7cc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:23:28.688288 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 33766 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:23:28.689573 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:28.704439 systemd[1]: Started cri-containerd-0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b.scope - libcontainer container 0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b. Sep 9 00:23:28.707993 systemd-logind[1576]: New session 10 of user core. Sep 9 00:23:28.709987 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 9 00:23:28.721460 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:23:28.757986 containerd[1591]: time="2025-09-09T00:23:28.757913785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74cb56d88-wnpgc,Uid:af518883-9b7b-40a9-bc40-3c589f745526,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b\"" Sep 9 00:23:28.759643 containerd[1591]: time="2025-09-09T00:23:28.759609276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:23:28.851443 systemd-networkd[1522]: vxlan.calico: Link UP Sep 9 00:23:28.851457 systemd-networkd[1522]: vxlan.calico: Gained carrier Sep 9 00:23:28.893477 containerd[1591]: time="2025-09-09T00:23:28.891946268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6fl7r,Uid:0f5cb06c-9205-4333-be98-49c50e03a5ae,Namespace:calico-system,Attempt:0,}" Sep 9 00:23:28.898155 kubelet[2831]: I0909 00:23:28.898038 2831 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eaed2fe-b762-453d-88ca-e4192df1f01f" path="/var/lib/kubelet/pods/9eaed2fe-b762-453d-88ca-e4192df1f01f/volumes" Sep 9 00:23:28.956203 sshd[4215]: Connection closed by 10.0.0.1 port 33766 Sep 9 00:23:28.958130 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:28.963540 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:33766.service: Deactivated successfully. Sep 9 00:23:28.966356 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:23:28.967752 systemd-logind[1576]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:23:28.970206 systemd-logind[1576]: Removed session 10. 
Sep 9 00:23:29.138656 systemd-networkd[1522]: cali18b0d20f5fa: Link UP Sep 9 00:23:29.142157 systemd-networkd[1522]: cali18b0d20f5fa: Gained carrier Sep 9 00:23:29.169493 containerd[1591]: 2025-09-09 00:23:29.001 [INFO][4272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6fl7r-eth0 csi-node-driver- calico-system 0f5cb06c-9205-4333-be98-49c50e03a5ae 771 0 2025-09-09 00:23:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6fl7r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali18b0d20f5fa [] [] }} ContainerID="cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" Namespace="calico-system" Pod="csi-node-driver-6fl7r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fl7r-" Sep 9 00:23:29.169493 containerd[1591]: 2025-09-09 00:23:29.002 [INFO][4272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" Namespace="calico-system" Pod="csi-node-driver-6fl7r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fl7r-eth0" Sep 9 00:23:29.169493 containerd[1591]: 2025-09-09 00:23:29.034 [INFO][4288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" HandleID="k8s-pod-network.cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" Workload="localhost-k8s-csi--node--driver--6fl7r-eth0" Sep 9 00:23:29.170422 containerd[1591]: 2025-09-09 00:23:29.034 [INFO][4288] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" HandleID="k8s-pod-network.cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" Workload="localhost-k8s-csi--node--driver--6fl7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001393f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6fl7r", "timestamp":"2025-09-09 00:23:29.034214906 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:23:29.170422 containerd[1591]: 2025-09-09 00:23:29.034 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:23:29.170422 containerd[1591]: 2025-09-09 00:23:29.034 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:23:29.170422 containerd[1591]: 2025-09-09 00:23:29.034 [INFO][4288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:23:29.170422 containerd[1591]: 2025-09-09 00:23:29.042 [INFO][4288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" host="localhost" Sep 9 00:23:29.170422 containerd[1591]: 2025-09-09 00:23:29.049 [INFO][4288] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:23:29.170422 containerd[1591]: 2025-09-09 00:23:29.053 [INFO][4288] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:23:29.170422 containerd[1591]: 2025-09-09 00:23:29.055 [INFO][4288] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:23:29.170422 containerd[1591]: 2025-09-09 00:23:29.057 [INFO][4288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Sep 9 00:23:29.170422 containerd[1591]: 2025-09-09 00:23:29.057 [INFO][4288] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" host="localhost" Sep 9 00:23:29.171146 containerd[1591]: 2025-09-09 00:23:29.058 [INFO][4288] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b Sep 9 00:23:29.171146 containerd[1591]: 2025-09-09 00:23:29.092 [INFO][4288] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" host="localhost" Sep 9 00:23:29.171146 containerd[1591]: 2025-09-09 00:23:29.130 [INFO][4288] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" host="localhost" Sep 9 00:23:29.171146 containerd[1591]: 2025-09-09 00:23:29.130 [INFO][4288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" host="localhost" Sep 9 00:23:29.171146 containerd[1591]: 2025-09-09 00:23:29.131 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:23:29.171146 containerd[1591]: 2025-09-09 00:23:29.131 [INFO][4288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" HandleID="k8s-pod-network.cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" Workload="localhost-k8s-csi--node--driver--6fl7r-eth0" Sep 9 00:23:29.171389 containerd[1591]: 2025-09-09 00:23:29.135 [INFO][4272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" Namespace="calico-system" Pod="csi-node-driver-6fl7r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fl7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6fl7r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f5cb06c-9205-4333-be98-49c50e03a5ae", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6fl7r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18b0d20f5fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:23:29.171462 containerd[1591]: 2025-09-09 00:23:29.135 [INFO][4272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" Namespace="calico-system" Pod="csi-node-driver-6fl7r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fl7r-eth0" Sep 9 00:23:29.171462 containerd[1591]: 2025-09-09 00:23:29.135 [INFO][4272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18b0d20f5fa ContainerID="cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" Namespace="calico-system" Pod="csi-node-driver-6fl7r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fl7r-eth0" Sep 9 00:23:29.171462 containerd[1591]: 2025-09-09 00:23:29.144 [INFO][4272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" Namespace="calico-system" Pod="csi-node-driver-6fl7r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fl7r-eth0" Sep 9 00:23:29.171554 containerd[1591]: 2025-09-09 00:23:29.146 [INFO][4272] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" Namespace="calico-system" Pod="csi-node-driver-6fl7r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fl7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6fl7r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f5cb06c-9205-4333-be98-49c50e03a5ae", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 23, 4, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b", Pod:"csi-node-driver-6fl7r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18b0d20f5fa", MAC:"ba:9e:6a:ef:db:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:23:29.171622 containerd[1591]: 2025-09-09 00:23:29.164 [INFO][4272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" Namespace="calico-system" Pod="csi-node-driver-6fl7r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fl7r-eth0" Sep 9 00:23:29.206236 containerd[1591]: time="2025-09-09T00:23:29.205813205Z" level=info msg="connecting to shim cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b" address="unix:///run/containerd/s/efe9ff8f067c0b269ef1aae0cf077c10521de5a8512524b45a767f96c3427e5e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:23:29.239616 systemd[1]: Started cri-containerd-cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b.scope - libcontainer container 
cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b. Sep 9 00:23:29.259684 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:23:29.261404 systemd-networkd[1522]: cali396f0bb2e57: Gained IPv6LL Sep 9 00:23:29.283370 containerd[1591]: time="2025-09-09T00:23:29.283311817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6fl7r,Uid:0f5cb06c-9205-4333-be98-49c50e03a5ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b\"" Sep 9 00:23:29.889439 kubelet[2831]: E0909 00:23:29.889363 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:29.889892 containerd[1591]: time="2025-09-09T00:23:29.889807069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bd4cd4c9-jcx2b,Uid:44a26592-5a84-447c-a0d8-345d0ec82cf5,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:23:29.889892 containerd[1591]: time="2025-09-09T00:23:29.889879184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-45t44,Uid:babafc45-a3f4-4482-aee4-f7dae7ac27df,Namespace:kube-system,Attempt:0,}" Sep 9 00:23:29.890240 containerd[1591]: time="2025-09-09T00:23:29.890202524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bd4cd4c9-lh8kj,Uid:d9104726-3d48-43bc-a3ba-3286b6c4cf8b,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:23:30.040560 systemd-networkd[1522]: calic6b4f91950c: Link UP Sep 9 00:23:30.041545 systemd-networkd[1522]: calic6b4f91950c: Gained carrier Sep 9 00:23:30.057050 containerd[1591]: 2025-09-09 00:23:29.946 [INFO][4388] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--45t44-eth0 coredns-674b8bbfcf- kube-system 
babafc45-a3f4-4482-aee4-f7dae7ac27df 890 0 2025-09-09 00:22:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-45t44 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic6b4f91950c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" Namespace="kube-system" Pod="coredns-674b8bbfcf-45t44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--45t44-" Sep 9 00:23:30.057050 containerd[1591]: 2025-09-09 00:23:29.946 [INFO][4388] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" Namespace="kube-system" Pod="coredns-674b8bbfcf-45t44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--45t44-eth0" Sep 9 00:23:30.057050 containerd[1591]: 2025-09-09 00:23:29.992 [INFO][4432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" HandleID="k8s-pod-network.544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" Workload="localhost-k8s-coredns--674b8bbfcf--45t44-eth0" Sep 9 00:23:30.057368 containerd[1591]: 2025-09-09 00:23:29.992 [INFO][4432] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" HandleID="k8s-pod-network.544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" Workload="localhost-k8s-coredns--674b8bbfcf--45t44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a47d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-45t44", "timestamp":"2025-09-09 00:23:29.992082681 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:23:30.057368 containerd[1591]: 2025-09-09 00:23:29.992 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:23:30.057368 containerd[1591]: 2025-09-09 00:23:29.992 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:23:30.057368 containerd[1591]: 2025-09-09 00:23:29.992 [INFO][4432] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:23:30.057368 containerd[1591]: 2025-09-09 00:23:30.003 [INFO][4432] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" host="localhost" Sep 9 00:23:30.057368 containerd[1591]: 2025-09-09 00:23:30.008 [INFO][4432] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:23:30.057368 containerd[1591]: 2025-09-09 00:23:30.014 [INFO][4432] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:23:30.057368 containerd[1591]: 2025-09-09 00:23:30.016 [INFO][4432] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:23:30.057368 containerd[1591]: 2025-09-09 00:23:30.018 [INFO][4432] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:23:30.057368 containerd[1591]: 2025-09-09 00:23:30.018 [INFO][4432] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" host="localhost" Sep 9 00:23:30.057746 containerd[1591]: 2025-09-09 00:23:30.020 [INFO][4432] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd Sep 9 00:23:30.057746 containerd[1591]: 
2025-09-09 00:23:30.025 [INFO][4432] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" host="localhost" Sep 9 00:23:30.057746 containerd[1591]: 2025-09-09 00:23:30.033 [INFO][4432] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" host="localhost" Sep 9 00:23:30.057746 containerd[1591]: 2025-09-09 00:23:30.033 [INFO][4432] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" host="localhost" Sep 9 00:23:30.057746 containerd[1591]: 2025-09-09 00:23:30.033 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:23:30.057746 containerd[1591]: 2025-09-09 00:23:30.033 [INFO][4432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" HandleID="k8s-pod-network.544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" Workload="localhost-k8s-coredns--674b8bbfcf--45t44-eth0" Sep 9 00:23:30.057978 containerd[1591]: 2025-09-09 00:23:30.037 [INFO][4388] cni-plugin/k8s.go 418: Populated endpoint ContainerID="544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" Namespace="kube-system" Pod="coredns-674b8bbfcf-45t44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--45t44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--45t44-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"babafc45-a3f4-4482-aee4-f7dae7ac27df", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 
22, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-45t44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6b4f91950c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:23:30.058082 containerd[1591]: 2025-09-09 00:23:30.037 [INFO][4388] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" Namespace="kube-system" Pod="coredns-674b8bbfcf-45t44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--45t44-eth0" Sep 9 00:23:30.058082 containerd[1591]: 2025-09-09 00:23:30.037 [INFO][4388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6b4f91950c ContainerID="544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" Namespace="kube-system" Pod="coredns-674b8bbfcf-45t44" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--45t44-eth0" Sep 9 00:23:30.058082 containerd[1591]: 2025-09-09 00:23:30.041 [INFO][4388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" Namespace="kube-system" Pod="coredns-674b8bbfcf-45t44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--45t44-eth0" Sep 9 00:23:30.058180 containerd[1591]: 2025-09-09 00:23:30.042 [INFO][4388] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" Namespace="kube-system" Pod="coredns-674b8bbfcf-45t44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--45t44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--45t44-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"babafc45-a3f4-4482-aee4-f7dae7ac27df", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 22, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd", Pod:"coredns-674b8bbfcf-45t44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calic6b4f91950c", MAC:"a6:66:33:66:96:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:23:30.058180 containerd[1591]: 2025-09-09 00:23:30.053 [INFO][4388] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" Namespace="kube-system" Pod="coredns-674b8bbfcf-45t44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--45t44-eth0" Sep 9 00:23:30.096461 containerd[1591]: time="2025-09-09T00:23:30.096390589Z" level=info msg="connecting to shim 544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd" address="unix:///run/containerd/s/61e4196ed25dcd07dfecff4b67d4e5494685920e2ca0c27c994c0a6cf6a26696" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:23:30.130564 systemd[1]: Started cri-containerd-544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd.scope - libcontainer container 544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd. 
Sep 9 00:23:30.151629 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:23:30.158910 systemd-networkd[1522]: calibe582d1cc82: Link UP Sep 9 00:23:30.162457 systemd-networkd[1522]: calibe582d1cc82: Gained carrier Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:29.947 [INFO][4399] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0 calico-apiserver-76bd4cd4c9- calico-apiserver 44a26592-5a84-447c-a0d8-345d0ec82cf5 889 0 2025-09-09 00:22:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bd4cd4c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76bd4cd4c9-jcx2b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe582d1cc82 [] [] }} ContainerID="111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-jcx2b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:29.948 [INFO][4399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-jcx2b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.001 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" HandleID="k8s-pod-network.111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" 
Workload="localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.001 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" HandleID="k8s-pod-network.111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" Workload="localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76bd4cd4c9-jcx2b", "timestamp":"2025-09-09 00:23:30.00178007 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.001 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.033 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.034 [INFO][4434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.107 [INFO][4434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" host="localhost" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.117 [INFO][4434] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.124 [INFO][4434] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.126 [INFO][4434] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.129 [INFO][4434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.129 [INFO][4434] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" host="localhost" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.131 [INFO][4434] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9 Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.138 [INFO][4434] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" host="localhost" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.145 [INFO][4434] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" host="localhost" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.145 [INFO][4434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" host="localhost" Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.145 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:23:30.182743 containerd[1591]: 2025-09-09 00:23:30.145 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" HandleID="k8s-pod-network.111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" Workload="localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0" Sep 9 00:23:30.183621 containerd[1591]: 2025-09-09 00:23:30.149 [INFO][4399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-jcx2b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0", GenerateName:"calico-apiserver-76bd4cd4c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"44a26592-5a84-447c-a0d8-345d0ec82cf5", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 22, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bd4cd4c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76bd4cd4c9-jcx2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe582d1cc82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:23:30.183621 containerd[1591]: 2025-09-09 00:23:30.149 [INFO][4399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-jcx2b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0" Sep 9 00:23:30.183621 containerd[1591]: 2025-09-09 00:23:30.149 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe582d1cc82 ContainerID="111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-jcx2b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0" Sep 9 00:23:30.183621 containerd[1591]: 2025-09-09 00:23:30.161 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-jcx2b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0" Sep 9 00:23:30.183621 containerd[1591]: 2025-09-09 00:23:30.161 [INFO][4399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-jcx2b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0", GenerateName:"calico-apiserver-76bd4cd4c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"44a26592-5a84-447c-a0d8-345d0ec82cf5", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 22, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bd4cd4c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9", Pod:"calico-apiserver-76bd4cd4c9-jcx2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe582d1cc82", MAC:"16:dc:3b:d1:8f:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:23:30.183621 containerd[1591]: 2025-09-09 00:23:30.176 [INFO][4399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-jcx2b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--jcx2b-eth0" Sep 9 00:23:30.203655 containerd[1591]: time="2025-09-09T00:23:30.203590864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-45t44,Uid:babafc45-a3f4-4482-aee4-f7dae7ac27df,Namespace:kube-system,Attempt:0,} returns sandbox id \"544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd\"" Sep 9 00:23:30.204779 kubelet[2831]: E0909 00:23:30.204753 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:30.213188 containerd[1591]: time="2025-09-09T00:23:30.213126015Z" level=info msg="CreateContainer within sandbox \"544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:23:30.218475 containerd[1591]: time="2025-09-09T00:23:30.218013562Z" level=info msg="connecting to shim 111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9" address="unix:///run/containerd/s/af35dd350a10ea7bc5e7e931e0b768e831a25cc53dc62f2dc9232a18868fe15f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:23:30.222966 systemd-networkd[1522]: vxlan.calico: Gained IPv6LL Sep 9 00:23:30.231697 containerd[1591]: time="2025-09-09T00:23:30.231633544Z" level=info msg="Container 305aae6447b6de699c272ee129310c8289c788063c2e833502a7f8821dc1bd6f: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:23:30.243096 containerd[1591]: time="2025-09-09T00:23:30.242581181Z" level=info msg="CreateContainer within sandbox \"544bef820d4db69ba545670c4383ffe4f6eb4b09895f2bae933dbc1a292acdcd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"305aae6447b6de699c272ee129310c8289c788063c2e833502a7f8821dc1bd6f\"" Sep 9 
00:23:30.244688 containerd[1591]: time="2025-09-09T00:23:30.243642735Z" level=info msg="StartContainer for \"305aae6447b6de699c272ee129310c8289c788063c2e833502a7f8821dc1bd6f\"" Sep 9 00:23:30.245654 containerd[1591]: time="2025-09-09T00:23:30.245624296Z" level=info msg="connecting to shim 305aae6447b6de699c272ee129310c8289c788063c2e833502a7f8821dc1bd6f" address="unix:///run/containerd/s/61e4196ed25dcd07dfecff4b67d4e5494685920e2ca0c27c994c0a6cf6a26696" protocol=ttrpc version=3 Sep 9 00:23:30.250469 systemd[1]: Started cri-containerd-111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9.scope - libcontainer container 111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9. Sep 9 00:23:30.281593 systemd[1]: Started cri-containerd-305aae6447b6de699c272ee129310c8289c788063c2e833502a7f8821dc1bd6f.scope - libcontainer container 305aae6447b6de699c272ee129310c8289c788063c2e833502a7f8821dc1bd6f. Sep 9 00:23:30.290426 systemd-networkd[1522]: calidfafc85b3f8: Link UP Sep 9 00:23:30.292054 systemd-networkd[1522]: calidfafc85b3f8: Gained carrier Sep 9 00:23:30.297322 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:29.960 [INFO][4400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0 calico-apiserver-76bd4cd4c9- calico-apiserver d9104726-3d48-43bc-a3ba-3286b6c4cf8b 894 0 2025-09-09 00:22:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bd4cd4c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76bd4cd4c9-lh8kj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidfafc85b3f8 [] [] }} 
ContainerID="8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-lh8kj" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:29.960 [INFO][4400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-lh8kj" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.015 [INFO][4446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" HandleID="k8s-pod-network.8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" Workload="localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.016 [INFO][4446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" HandleID="k8s-pod-network.8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" Workload="localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76bd4cd4c9-lh8kj", "timestamp":"2025-09-09 00:23:30.015820376 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.016 [INFO][4446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.146 [INFO][4446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.146 [INFO][4446] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.207 [INFO][4446] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" host="localhost" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.217 [INFO][4446] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.236 [INFO][4446] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.244 [INFO][4446] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.251 [INFO][4446] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.251 [INFO][4446] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" host="localhost" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.254 [INFO][4446] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6 Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.267 [INFO][4446] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" host="localhost" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.276 [INFO][4446] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" host="localhost" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.277 [INFO][4446] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" host="localhost" Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.277 [INFO][4446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:23:30.320759 containerd[1591]: 2025-09-09 00:23:30.277 [INFO][4446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" HandleID="k8s-pod-network.8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" Workload="localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0" Sep 9 00:23:30.321597 containerd[1591]: 2025-09-09 00:23:30.283 [INFO][4400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-lh8kj" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0", GenerateName:"calico-apiserver-76bd4cd4c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9104726-3d48-43bc-a3ba-3286b6c4cf8b", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 22, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bd4cd4c9", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76bd4cd4c9-lh8kj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidfafc85b3f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:23:30.321597 containerd[1591]: 2025-09-09 00:23:30.283 [INFO][4400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-lh8kj" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0" Sep 9 00:23:30.321597 containerd[1591]: 2025-09-09 00:23:30.283 [INFO][4400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfafc85b3f8 ContainerID="8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-lh8kj" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0" Sep 9 00:23:30.321597 containerd[1591]: 2025-09-09 00:23:30.288 [INFO][4400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-lh8kj" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0" Sep 9 00:23:30.321597 containerd[1591]: 2025-09-09 
00:23:30.294 [INFO][4400] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-lh8kj" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0", GenerateName:"calico-apiserver-76bd4cd4c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9104726-3d48-43bc-a3ba-3286b6c4cf8b", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 22, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bd4cd4c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6", Pod:"calico-apiserver-76bd4cd4c9-lh8kj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidfafc85b3f8", MAC:"1a:9d:e0:e1:5c:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:23:30.321597 containerd[1591]: 2025-09-09 00:23:30.311 [INFO][4400] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" Namespace="calico-apiserver" Pod="calico-apiserver-76bd4cd4c9-lh8kj" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bd4cd4c9--lh8kj-eth0" Sep 9 00:23:30.354086 containerd[1591]: time="2025-09-09T00:23:30.354003758Z" level=info msg="StartContainer for \"305aae6447b6de699c272ee129310c8289c788063c2e833502a7f8821dc1bd6f\" returns successfully" Sep 9 00:23:30.366281 containerd[1591]: time="2025-09-09T00:23:30.366181627Z" level=info msg="connecting to shim 8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6" address="unix:///run/containerd/s/4bf32ca80793e5c957edf29cdcd2d51407f201967380b53c326072f029d64ccf" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:23:30.372110 containerd[1591]: time="2025-09-09T00:23:30.372047732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bd4cd4c9-jcx2b,Uid:44a26592-5a84-447c-a0d8-345d0ec82cf5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9\"" Sep 9 00:23:30.414601 systemd[1]: Started cri-containerd-8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6.scope - libcontainer container 8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6. 
Sep 9 00:23:30.432933 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:23:30.479460 containerd[1591]: time="2025-09-09T00:23:30.479405494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bd4cd4c9-lh8kj,Uid:d9104726-3d48-43bc-a3ba-3286b6c4cf8b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6\"" Sep 9 00:23:30.779235 containerd[1591]: time="2025-09-09T00:23:30.779066804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:30.779959 containerd[1591]: time="2025-09-09T00:23:30.779917210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 00:23:30.781232 containerd[1591]: time="2025-09-09T00:23:30.781199249Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:30.783584 containerd[1591]: time="2025-09-09T00:23:30.783549958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:30.785958 containerd[1591]: time="2025-09-09T00:23:30.785922547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.026278395s" Sep 9 00:23:30.785958 containerd[1591]: time="2025-09-09T00:23:30.785955749Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 00:23:30.787472 containerd[1591]: time="2025-09-09T00:23:30.787441544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:23:30.791744 containerd[1591]: time="2025-09-09T00:23:30.791700474Z" level=info msg="CreateContainer within sandbox \"0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:23:30.799645 containerd[1591]: time="2025-09-09T00:23:30.799572144Z" level=info msg="Container 5948b772329bcd1a4fd8474f244af9de2afa96bb4f61f2c08f1ba9956393b3bc: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:23:30.807889 containerd[1591]: time="2025-09-09T00:23:30.807826938Z" level=info msg="CreateContainer within sandbox \"0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5948b772329bcd1a4fd8474f244af9de2afa96bb4f61f2c08f1ba9956393b3bc\"" Sep 9 00:23:30.808567 containerd[1591]: time="2025-09-09T00:23:30.808510258Z" level=info msg="StartContainer for \"5948b772329bcd1a4fd8474f244af9de2afa96bb4f61f2c08f1ba9956393b3bc\"" Sep 9 00:23:30.809856 containerd[1591]: time="2025-09-09T00:23:30.809814250Z" level=info msg="connecting to shim 5948b772329bcd1a4fd8474f244af9de2afa96bb4f61f2c08f1ba9956393b3bc" address="unix:///run/containerd/s/56c02bf157132de7043a8fb2840e5471a3fb2114d3f3c34af5ae63685f08f7cc" protocol=ttrpc version=3 Sep 9 00:23:30.849593 systemd[1]: Started cri-containerd-5948b772329bcd1a4fd8474f244af9de2afa96bb4f61f2c08f1ba9956393b3bc.scope - libcontainer container 5948b772329bcd1a4fd8474f244af9de2afa96bb4f61f2c08f1ba9956393b3bc. 
Sep 9 00:23:30.888942 kubelet[2831]: E0909 00:23:30.888892 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:23:30.889118 containerd[1591]: time="2025-09-09T00:23:30.888937080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5bc6bd7-p9249,Uid:077c59c5-131e-4bd3-82b6-f6eb0e4199cc,Namespace:calico-system,Attempt:0,}"
Sep 9 00:23:30.889118 containerd[1591]: time="2025-09-09T00:23:30.888966515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rc24j,Uid:6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e,Namespace:calico-system,Attempt:0,}"
Sep 9 00:23:30.889535 containerd[1591]: time="2025-09-09T00:23:30.889490785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8q9lh,Uid:9a54da31-62cf-4aee-ba5d-e0a857a34c2b,Namespace:kube-system,Attempt:0,}"
Sep 9 00:23:30.929686 containerd[1591]: time="2025-09-09T00:23:30.929630533Z" level=info msg="StartContainer for \"5948b772329bcd1a4fd8474f244af9de2afa96bb4f61f2c08f1ba9956393b3bc\" returns successfully"
Sep 9 00:23:31.066131 kubelet[2831]: E0909 00:23:31.065769 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:23:31.098679 kubelet[2831]: I0909 00:23:31.098423 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-45t44" podStartSLOduration=41.098403535 podStartE2EDuration="41.098403535s" podCreationTimestamp="2025-09-09 00:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:23:31.080431401 +0000 UTC m=+46.293329970" watchObservedRunningTime="2025-09-09 00:23:31.098403535 +0000 UTC m=+46.311302104"
Sep 9 00:23:31.124194 systemd-networkd[1522]: cali1f11f27ebc7: Link UP
Sep 9 00:23:31.125777 systemd-networkd[1522]: cali1f11f27ebc7: Gained carrier
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:30.993 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0 calico-kube-controllers-7b5bc6bd7- calico-system 077c59c5-131e-4bd3-82b6-f6eb0e4199cc 887 0 2025-09-09 00:23:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b5bc6bd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7b5bc6bd7-p9249 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1f11f27ebc7 [] [] }} ContainerID="7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" Namespace="calico-system" Pod="calico-kube-controllers-7b5bc6bd7-p9249" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:30.994 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" Namespace="calico-system" Pod="calico-kube-controllers-7b5bc6bd7-p9249" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.059 [INFO][4753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" HandleID="k8s-pod-network.7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" Workload="localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.059 [INFO][4753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" HandleID="k8s-pod-network.7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" Workload="localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7b5bc6bd7-p9249", "timestamp":"2025-09-09 00:23:31.059062634 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.059 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.059 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.059 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.080 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" host="localhost"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.087 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.093 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.099 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.104 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.104 [INFO][4753] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" host="localhost"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.106 [INFO][4753] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.110 [INFO][4753] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" host="localhost"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.115 [INFO][4753] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" host="localhost"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.115 [INFO][4753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" host="localhost"
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.115 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 00:23:31.141738 containerd[1591]: 2025-09-09 00:23:31.115 [INFO][4753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" HandleID="k8s-pod-network.7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" Workload="localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0"
Sep 9 00:23:31.142701 containerd[1591]: 2025-09-09 00:23:31.120 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" Namespace="calico-system" Pod="calico-kube-controllers-7b5bc6bd7-p9249" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0", GenerateName:"calico-kube-controllers-7b5bc6bd7-", Namespace:"calico-system", SelfLink:"", UID:"077c59c5-131e-4bd3-82b6-f6eb0e4199cc", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5bc6bd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7b5bc6bd7-p9249", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f11f27ebc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 00:23:31.142701 containerd[1591]: 2025-09-09 00:23:31.120 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" Namespace="calico-system" Pod="calico-kube-controllers-7b5bc6bd7-p9249" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0"
Sep 9 00:23:31.142701 containerd[1591]: 2025-09-09 00:23:31.120 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f11f27ebc7 ContainerID="7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" Namespace="calico-system" Pod="calico-kube-controllers-7b5bc6bd7-p9249" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0"
Sep 9 00:23:31.142701 containerd[1591]: 2025-09-09 00:23:31.125 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" Namespace="calico-system" Pod="calico-kube-controllers-7b5bc6bd7-p9249" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0"
Sep 9 00:23:31.142701 containerd[1591]: 2025-09-09 00:23:31.125 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" Namespace="calico-system" Pod="calico-kube-controllers-7b5bc6bd7-p9249" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0", GenerateName:"calico-kube-controllers-7b5bc6bd7-", Namespace:"calico-system", SelfLink:"", UID:"077c59c5-131e-4bd3-82b6-f6eb0e4199cc", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5bc6bd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b", Pod:"calico-kube-controllers-7b5bc6bd7-p9249", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f11f27ebc7", MAC:"22:88:5f:aa:37:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 00:23:31.142701 containerd[1591]: 2025-09-09 00:23:31.137 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" Namespace="calico-system" Pod="calico-kube-controllers-7b5bc6bd7-p9249" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5bc6bd7--p9249-eth0"
Sep 9 00:23:31.174088 containerd[1591]: time="2025-09-09T00:23:31.174021552Z" level=info msg="connecting to shim 7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b" address="unix:///run/containerd/s/62639c5b34b4168126678b3fc8bcb7412b699f5c4ba3caa6fd01921d9362a7ce" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:23:31.184469 systemd-networkd[1522]: cali18b0d20f5fa: Gained IPv6LL
Sep 9 00:23:31.205783 systemd[1]: Started cri-containerd-7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b.scope - libcontainer container 7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b.
Sep 9 00:23:31.225643 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:23:31.242089 systemd-networkd[1522]: cali6567257486d: Link UP
Sep 9 00:23:31.242545 systemd-networkd[1522]: cali6567257486d: Gained carrier
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:30.983 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--rc24j-eth0 goldmane-54d579b49d- calico-system 6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e 893 0 2025-09-09 00:23:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-rc24j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6567257486d [] [] }} ContainerID="054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" Namespace="calico-system" Pod="goldmane-54d579b49d-rc24j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rc24j-"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:30.983 [INFO][4708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" Namespace="calico-system" Pod="goldmane-54d579b49d-rc24j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rc24j-eth0"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.070 [INFO][4747] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" HandleID="k8s-pod-network.054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" Workload="localhost-k8s-goldmane--54d579b49d--rc24j-eth0"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.072 [INFO][4747] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" HandleID="k8s-pod-network.054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" Workload="localhost-k8s-goldmane--54d579b49d--rc24j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003828d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-rc24j", "timestamp":"2025-09-09 00:23:31.07010581 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.072 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.115 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.115 [INFO][4747] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.180 [INFO][4747] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" host="localhost"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.191 [INFO][4747] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.202 [INFO][4747] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.206 [INFO][4747] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.209 [INFO][4747] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.209 [INFO][4747] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" host="localhost"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.211 [INFO][4747] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.217 [INFO][4747] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" host="localhost"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.224 [INFO][4747] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" host="localhost"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.224 [INFO][4747] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" host="localhost"
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.229 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 00:23:31.257129 containerd[1591]: 2025-09-09 00:23:31.229 [INFO][4747] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" HandleID="k8s-pod-network.054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" Workload="localhost-k8s-goldmane--54d579b49d--rc24j-eth0"
Sep 9 00:23:31.257930 containerd[1591]: 2025-09-09 00:23:31.237 [INFO][4708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" Namespace="calico-system" Pod="goldmane-54d579b49d-rc24j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rc24j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--rc24j-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-rc24j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6567257486d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 00:23:31.257930 containerd[1591]: 2025-09-09 00:23:31.237 [INFO][4708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" Namespace="calico-system" Pod="goldmane-54d579b49d-rc24j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rc24j-eth0"
Sep 9 00:23:31.257930 containerd[1591]: 2025-09-09 00:23:31.237 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6567257486d ContainerID="054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" Namespace="calico-system" Pod="goldmane-54d579b49d-rc24j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rc24j-eth0"
Sep 9 00:23:31.257930 containerd[1591]: 2025-09-09 00:23:31.242 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" Namespace="calico-system" Pod="goldmane-54d579b49d-rc24j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rc24j-eth0"
Sep 9 00:23:31.257930 containerd[1591]: 2025-09-09 00:23:31.243 [INFO][4708] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" Namespace="calico-system" Pod="goldmane-54d579b49d-rc24j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rc24j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--rc24j-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322", Pod:"goldmane-54d579b49d-rc24j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6567257486d", MAC:"62:96:8e:2e:23:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 00:23:31.257930 containerd[1591]: 2025-09-09 00:23:31.254 [INFO][4708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" Namespace="calico-system" Pod="goldmane-54d579b49d-rc24j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rc24j-eth0"
Sep 9 00:23:31.263634 containerd[1591]: time="2025-09-09T00:23:31.263586023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5bc6bd7-p9249,Uid:077c59c5-131e-4bd3-82b6-f6eb0e4199cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b\""
Sep 9 00:23:31.313516 containerd[1591]: time="2025-09-09T00:23:31.313413578Z" level=info msg="connecting to shim 054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322" address="unix:///run/containerd/s/8317c703acb81cbba97e36021e99deb1540f6c532358ec75b1f6a6cd3cf4a88c" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:23:31.334383 systemd-networkd[1522]: cali7171c20765a: Link UP
Sep 9 00:23:31.335182 systemd-networkd[1522]: cali7171c20765a: Gained carrier
Sep 9 00:23:31.342653 systemd[1]: Started cri-containerd-054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322.scope - libcontainer container 054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322.
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.025 [INFO][4714] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0 coredns-674b8bbfcf- kube-system 9a54da31-62cf-4aee-ba5d-e0a857a34c2b 888 0 2025-09-09 00:22:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-8q9lh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7171c20765a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" Namespace="kube-system" Pod="coredns-674b8bbfcf-8q9lh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8q9lh-"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.025 [INFO][4714] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" Namespace="kube-system" Pod="coredns-674b8bbfcf-8q9lh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.080 [INFO][4764] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" HandleID="k8s-pod-network.a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" Workload="localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.080 [INFO][4764] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" HandleID="k8s-pod-network.a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" Workload="localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f090), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-8q9lh", "timestamp":"2025-09-09 00:23:31.079963787 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.080 [INFO][4764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.225 [INFO][4764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.225 [INFO][4764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.280 [INFO][4764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" host="localhost"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.295 [INFO][4764] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.299 [INFO][4764] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.304 [INFO][4764] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.307 [INFO][4764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.307 [INFO][4764] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" host="localhost"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.310 [INFO][4764] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.314 [INFO][4764] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" host="localhost"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.325 [INFO][4764] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" host="localhost"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.325 [INFO][4764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" host="localhost"
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.325 [INFO][4764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 00:23:31.357505 containerd[1591]: 2025-09-09 00:23:31.325 [INFO][4764] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" HandleID="k8s-pod-network.a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" Workload="localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0"
Sep 9 00:23:31.358091 containerd[1591]: 2025-09-09 00:23:31.329 [INFO][4714] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" Namespace="kube-system" Pod="coredns-674b8bbfcf-8q9lh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9a54da31-62cf-4aee-ba5d-e0a857a34c2b", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 22, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-8q9lh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7171c20765a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 00:23:31.358091 containerd[1591]: 2025-09-09 00:23:31.330 [INFO][4714] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" Namespace="kube-system" Pod="coredns-674b8bbfcf-8q9lh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0"
Sep 9 00:23:31.358091 containerd[1591]: 2025-09-09 00:23:31.330 [INFO][4714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7171c20765a ContainerID="a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" Namespace="kube-system" Pod="coredns-674b8bbfcf-8q9lh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0"
Sep 9 00:23:31.358091 containerd[1591]: 2025-09-09 00:23:31.335 [INFO][4714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" Namespace="kube-system" Pod="coredns-674b8bbfcf-8q9lh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0"
Sep 9 00:23:31.358091 containerd[1591]: 2025-09-09 00:23:31.336 [INFO][4714] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" Namespace="kube-system" Pod="coredns-674b8bbfcf-8q9lh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9a54da31-62cf-4aee-ba5d-e0a857a34c2b", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 22, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f", Pod:"coredns-674b8bbfcf-8q9lh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7171c20765a", MAC:"ba:61:34:45:f1:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0,
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:23:31.358091 containerd[1591]: 2025-09-09 00:23:31.352 [INFO][4714] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" Namespace="kube-system" Pod="coredns-674b8bbfcf-8q9lh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8q9lh-eth0" Sep 9 00:23:31.363036 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:23:31.373472 systemd-networkd[1522]: calic6b4f91950c: Gained IPv6LL Sep 9 00:23:31.393399 containerd[1591]: time="2025-09-09T00:23:31.393290874Z" level=info msg="connecting to shim a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f" address="unix:///run/containerd/s/e58642b78336643c88afced63f4a2b3df5e8ae3cbf3bdb5421a992b4d0fdd914" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:23:31.397826 containerd[1591]: time="2025-09-09T00:23:31.397778857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rc24j,Uid:6f0c2c7b-bc01-4f02-9a04-4a39dfe60e5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322\"" Sep 9 00:23:31.433423 systemd[1]: Started cri-containerd-a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f.scope - libcontainer container a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f. 
Sep 9 00:23:31.449442 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:23:31.482795 containerd[1591]: time="2025-09-09T00:23:31.482735280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8q9lh,Uid:9a54da31-62cf-4aee-ba5d-e0a857a34c2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f\"" Sep 9 00:23:31.483638 kubelet[2831]: E0909 00:23:31.483589 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:31.489676 containerd[1591]: time="2025-09-09T00:23:31.489622730Z" level=info msg="CreateContainer within sandbox \"a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:23:31.501476 systemd-networkd[1522]: calibe582d1cc82: Gained IPv6LL Sep 9 00:23:31.513979 containerd[1591]: time="2025-09-09T00:23:31.513914895Z" level=info msg="Container 5ae30fa25509fa2d12615722754927a142f6e4e7f7d6cd40277498f0b2c1e04f: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:23:31.521542 containerd[1591]: time="2025-09-09T00:23:31.521482850Z" level=info msg="CreateContainer within sandbox \"a9f8a48ebf2e8dc6a3eccce27d0efc43c364d8063d5560f78402680bfaf3479f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ae30fa25509fa2d12615722754927a142f6e4e7f7d6cd40277498f0b2c1e04f\"" Sep 9 00:23:31.522145 containerd[1591]: time="2025-09-09T00:23:31.522100816Z" level=info msg="StartContainer for \"5ae30fa25509fa2d12615722754927a142f6e4e7f7d6cd40277498f0b2c1e04f\"" Sep 9 00:23:31.523244 containerd[1591]: time="2025-09-09T00:23:31.523201905Z" level=info msg="connecting to shim 5ae30fa25509fa2d12615722754927a142f6e4e7f7d6cd40277498f0b2c1e04f" 
address="unix:///run/containerd/s/e58642b78336643c88afced63f4a2b3df5e8ae3cbf3bdb5421a992b4d0fdd914" protocol=ttrpc version=3 Sep 9 00:23:31.548429 systemd[1]: Started cri-containerd-5ae30fa25509fa2d12615722754927a142f6e4e7f7d6cd40277498f0b2c1e04f.scope - libcontainer container 5ae30fa25509fa2d12615722754927a142f6e4e7f7d6cd40277498f0b2c1e04f. Sep 9 00:23:31.599679 containerd[1591]: time="2025-09-09T00:23:31.599530212Z" level=info msg="StartContainer for \"5ae30fa25509fa2d12615722754927a142f6e4e7f7d6cd40277498f0b2c1e04f\" returns successfully" Sep 9 00:23:32.074856 kubelet[2831]: E0909 00:23:32.074814 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:32.077808 kubelet[2831]: E0909 00:23:32.077739 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:32.086762 kubelet[2831]: I0909 00:23:32.086689 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8q9lh" podStartSLOduration=42.086670661 podStartE2EDuration="42.086670661s" podCreationTimestamp="2025-09-09 00:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:23:32.086313707 +0000 UTC m=+47.299212286" watchObservedRunningTime="2025-09-09 00:23:32.086670661 +0000 UTC m=+47.299569230" Sep 9 00:23:32.205571 systemd-networkd[1522]: calidfafc85b3f8: Gained IPv6LL Sep 9 00:23:32.525536 systemd-networkd[1522]: cali1f11f27ebc7: Gained IPv6LL Sep 9 00:23:32.525955 systemd-networkd[1522]: cali6567257486d: Gained IPv6LL Sep 9 00:23:32.973445 systemd-networkd[1522]: cali7171c20765a: Gained IPv6LL Sep 9 00:23:33.079194 kubelet[2831]: E0909 00:23:33.079151 2831 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:33.079600 kubelet[2831]: E0909 00:23:33.079163 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:33.775311 containerd[1591]: time="2025-09-09T00:23:33.775201881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:33.776925 containerd[1591]: time="2025-09-09T00:23:33.776758929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 00:23:33.778151 containerd[1591]: time="2025-09-09T00:23:33.778078639Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:33.780682 containerd[1591]: time="2025-09-09T00:23:33.780603995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:33.781736 containerd[1591]: time="2025-09-09T00:23:33.781667433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.994188648s" Sep 9 00:23:33.781783 containerd[1591]: time="2025-09-09T00:23:33.781744357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 
00:23:33.783220 containerd[1591]: time="2025-09-09T00:23:33.783175589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:23:33.789639 containerd[1591]: time="2025-09-09T00:23:33.789587209Z" level=info msg="CreateContainer within sandbox \"cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:23:33.817308 containerd[1591]: time="2025-09-09T00:23:33.813649779Z" level=info msg="Container 9e1adaa0386e12dfb97e70452dcb4098e3a283a5c8eba99c2fdc548c8bb6d62e: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:23:33.844087 containerd[1591]: time="2025-09-09T00:23:33.844024501Z" level=info msg="CreateContainer within sandbox \"cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9e1adaa0386e12dfb97e70452dcb4098e3a283a5c8eba99c2fdc548c8bb6d62e\"" Sep 9 00:23:33.851022 containerd[1591]: time="2025-09-09T00:23:33.850951624Z" level=info msg="StartContainer for \"9e1adaa0386e12dfb97e70452dcb4098e3a283a5c8eba99c2fdc548c8bb6d62e\"" Sep 9 00:23:33.852590 containerd[1591]: time="2025-09-09T00:23:33.852561972Z" level=info msg="connecting to shim 9e1adaa0386e12dfb97e70452dcb4098e3a283a5c8eba99c2fdc548c8bb6d62e" address="unix:///run/containerd/s/efe9ff8f067c0b269ef1aae0cf077c10521de5a8512524b45a767f96c3427e5e" protocol=ttrpc version=3 Sep 9 00:23:33.887561 systemd[1]: Started cri-containerd-9e1adaa0386e12dfb97e70452dcb4098e3a283a5c8eba99c2fdc548c8bb6d62e.scope - libcontainer container 9e1adaa0386e12dfb97e70452dcb4098e3a283a5c8eba99c2fdc548c8bb6d62e. 
Sep 9 00:23:33.944808 containerd[1591]: time="2025-09-09T00:23:33.944761539Z" level=info msg="StartContainer for \"9e1adaa0386e12dfb97e70452dcb4098e3a283a5c8eba99c2fdc548c8bb6d62e\" returns successfully" Sep 9 00:23:33.975936 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:57498.service - OpenSSH per-connection server daemon (10.0.0.1:57498). Sep 9 00:23:34.044427 sshd[5021]: Accepted publickey for core from 10.0.0.1 port 57498 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:23:34.046515 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:34.051712 systemd-logind[1576]: New session 11 of user core. Sep 9 00:23:34.062440 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:23:34.084007 kubelet[2831]: E0909 00:23:34.083966 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:23:34.211877 sshd[5023]: Connection closed by 10.0.0.1 port 57498 Sep 9 00:23:34.212211 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:34.216543 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:57498.service: Deactivated successfully. Sep 9 00:23:34.218667 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:23:34.219722 systemd-logind[1576]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:23:34.221515 systemd-logind[1576]: Removed session 11. 
Sep 9 00:23:36.621376 containerd[1591]: time="2025-09-09T00:23:36.621300841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:36.622152 containerd[1591]: time="2025-09-09T00:23:36.622115999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 00:23:36.623472 containerd[1591]: time="2025-09-09T00:23:36.623411243Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:36.625279 containerd[1591]: time="2025-09-09T00:23:36.625225937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:36.625876 containerd[1591]: time="2025-09-09T00:23:36.625827151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.842588113s" Sep 9 00:23:36.625876 containerd[1591]: time="2025-09-09T00:23:36.625860504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:23:36.626804 containerd[1591]: time="2025-09-09T00:23:36.626773276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:23:36.630881 containerd[1591]: time="2025-09-09T00:23:36.630844927Z" level=info msg="CreateContainer within sandbox 
\"111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:23:36.638665 containerd[1591]: time="2025-09-09T00:23:36.638636396Z" level=info msg="Container cfac51ba7ece47b0945397fcdd8e4a4fc1b368a2617f098123ed37013066b68c: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:23:36.647754 containerd[1591]: time="2025-09-09T00:23:36.647723189Z" level=info msg="CreateContainer within sandbox \"111b1258ef58756c7cfafc6bbee774db513482580645cd0aadb61be362e139a9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cfac51ba7ece47b0945397fcdd8e4a4fc1b368a2617f098123ed37013066b68c\"" Sep 9 00:23:36.648880 containerd[1591]: time="2025-09-09T00:23:36.648563825Z" level=info msg="StartContainer for \"cfac51ba7ece47b0945397fcdd8e4a4fc1b368a2617f098123ed37013066b68c\"" Sep 9 00:23:36.658879 containerd[1591]: time="2025-09-09T00:23:36.658823040Z" level=info msg="connecting to shim cfac51ba7ece47b0945397fcdd8e4a4fc1b368a2617f098123ed37013066b68c" address="unix:///run/containerd/s/af35dd350a10ea7bc5e7e931e0b768e831a25cc53dc62f2dc9232a18868fe15f" protocol=ttrpc version=3 Sep 9 00:23:36.691429 systemd[1]: Started cri-containerd-cfac51ba7ece47b0945397fcdd8e4a4fc1b368a2617f098123ed37013066b68c.scope - libcontainer container cfac51ba7ece47b0945397fcdd8e4a4fc1b368a2617f098123ed37013066b68c. 
Sep 9 00:23:36.741311 containerd[1591]: time="2025-09-09T00:23:36.741225729Z" level=info msg="StartContainer for \"cfac51ba7ece47b0945397fcdd8e4a4fc1b368a2617f098123ed37013066b68c\" returns successfully" Sep 9 00:23:37.678020 containerd[1591]: time="2025-09-09T00:23:37.677947135Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:37.678992 containerd[1591]: time="2025-09-09T00:23:37.678961399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:23:37.680734 containerd[1591]: time="2025-09-09T00:23:37.680692193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 1.053887358s" Sep 9 00:23:37.680811 containerd[1591]: time="2025-09-09T00:23:37.680740595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:23:37.682183 containerd[1591]: time="2025-09-09T00:23:37.682139875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:23:37.688723 containerd[1591]: time="2025-09-09T00:23:37.688677357Z" level=info msg="CreateContainer within sandbox \"8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:23:37.703306 containerd[1591]: time="2025-09-09T00:23:37.702641639Z" level=info msg="Container bce55d0b3beb221723caed3da60d8d6a07795c36e2e9fb0ae5d2e86696100c37: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:23:37.715101 containerd[1591]: 
time="2025-09-09T00:23:37.715061097Z" level=info msg="CreateContainer within sandbox \"8597cd22f7c3b50eeb7dc4019b3ffd987da3a8c94cd1dcb815c8c62995530ea6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bce55d0b3beb221723caed3da60d8d6a07795c36e2e9fb0ae5d2e86696100c37\"" Sep 9 00:23:37.716929 containerd[1591]: time="2025-09-09T00:23:37.716890337Z" level=info msg="StartContainer for \"bce55d0b3beb221723caed3da60d8d6a07795c36e2e9fb0ae5d2e86696100c37\"" Sep 9 00:23:37.718681 containerd[1591]: time="2025-09-09T00:23:37.718644516Z" level=info msg="connecting to shim bce55d0b3beb221723caed3da60d8d6a07795c36e2e9fb0ae5d2e86696100c37" address="unix:///run/containerd/s/4bf32ca80793e5c957edf29cdcd2d51407f201967380b53c326072f029d64ccf" protocol=ttrpc version=3 Sep 9 00:23:37.754471 systemd[1]: Started cri-containerd-bce55d0b3beb221723caed3da60d8d6a07795c36e2e9fb0ae5d2e86696100c37.scope - libcontainer container bce55d0b3beb221723caed3da60d8d6a07795c36e2e9fb0ae5d2e86696100c37. 
Sep 9 00:23:38.032827 containerd[1591]: time="2025-09-09T00:23:38.032417207Z" level=info msg="StartContainer for \"bce55d0b3beb221723caed3da60d8d6a07795c36e2e9fb0ae5d2e86696100c37\" returns successfully" Sep 9 00:23:38.094342 kubelet[2831]: I0909 00:23:38.094296 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:23:38.212108 kubelet[2831]: I0909 00:23:38.212029 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76bd4cd4c9-jcx2b" podStartSLOduration=32.959739945 podStartE2EDuration="39.212006989s" podCreationTimestamp="2025-09-09 00:22:59 +0000 UTC" firstStartedPulling="2025-09-09 00:23:30.374391426 +0000 UTC m=+45.587289995" lastFinishedPulling="2025-09-09 00:23:36.62665847 +0000 UTC m=+51.839557039" observedRunningTime="2025-09-09 00:23:37.516087716 +0000 UTC m=+52.728986275" watchObservedRunningTime="2025-09-09 00:23:38.212006989 +0000 UTC m=+53.424905558" Sep 9 00:23:39.097125 kubelet[2831]: I0909 00:23:39.097051 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:23:39.235960 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:57510.service - OpenSSH per-connection server daemon (10.0.0.1:57510). Sep 9 00:23:39.301337 sshd[5137]: Accepted publickey for core from 10.0.0.1 port 57510 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:23:39.305189 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:39.310319 systemd-logind[1576]: New session 12 of user core. Sep 9 00:23:39.316407 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:23:39.652095 sshd[5140]: Connection closed by 10.0.0.1 port 57510 Sep 9 00:23:39.652497 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:39.657650 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:57510.service: Deactivated successfully. 
Sep 9 00:23:39.660133 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:23:39.661068 systemd-logind[1576]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:23:39.662886 systemd-logind[1576]: Removed session 12. Sep 9 00:23:40.315089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906906647.mount: Deactivated successfully. Sep 9 00:23:40.563040 containerd[1591]: time="2025-09-09T00:23:40.562970474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:40.566220 containerd[1591]: time="2025-09-09T00:23:40.566058979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 00:23:40.567728 containerd[1591]: time="2025-09-09T00:23:40.567682210Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:40.570568 containerd[1591]: time="2025-09-09T00:23:40.570511215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:23:40.570964 containerd[1591]: time="2025-09-09T00:23:40.570927691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.888750135s" Sep 9 00:23:40.570964 containerd[1591]: time="2025-09-09T00:23:40.570959100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference 
\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 00:23:40.579964 containerd[1591]: time="2025-09-09T00:23:40.579908387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:23:40.600339 containerd[1591]: time="2025-09-09T00:23:40.600234108Z" level=info msg="CreateContainer within sandbox \"0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:23:40.612018 containerd[1591]: time="2025-09-09T00:23:40.611831289Z" level=info msg="Container b99d7b58650430a714b9e4f018befed35d5b1de752dbab50420cef981d44f3ce: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:23:40.625649 containerd[1591]: time="2025-09-09T00:23:40.625579296Z" level=info msg="CreateContainer within sandbox \"0d6caf7ee909d1b153414cbea58f146f427600712ed1332dfa535d6a8d8bbd6b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b99d7b58650430a714b9e4f018befed35d5b1de752dbab50420cef981d44f3ce\"" Sep 9 00:23:40.626620 containerd[1591]: time="2025-09-09T00:23:40.626566207Z" level=info msg="StartContainer for \"b99d7b58650430a714b9e4f018befed35d5b1de752dbab50420cef981d44f3ce\"" Sep 9 00:23:40.628296 containerd[1591]: time="2025-09-09T00:23:40.628244833Z" level=info msg="connecting to shim b99d7b58650430a714b9e4f018befed35d5b1de752dbab50420cef981d44f3ce" address="unix:///run/containerd/s/56c02bf157132de7043a8fb2840e5471a3fb2114d3f3c34af5ae63685f08f7cc" protocol=ttrpc version=3 Sep 9 00:23:40.691597 systemd[1]: Started cri-containerd-b99d7b58650430a714b9e4f018befed35d5b1de752dbab50420cef981d44f3ce.scope - libcontainer container b99d7b58650430a714b9e4f018befed35d5b1de752dbab50420cef981d44f3ce. 
Sep 9 00:23:40.751002 containerd[1591]: time="2025-09-09T00:23:40.750949346Z" level=info msg="StartContainer for \"b99d7b58650430a714b9e4f018befed35d5b1de752dbab50420cef981d44f3ce\" returns successfully" Sep 9 00:23:41.153651 kubelet[2831]: I0909 00:23:41.153422 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-74cb56d88-wnpgc" podStartSLOduration=2.333040741 podStartE2EDuration="14.153399836s" podCreationTimestamp="2025-09-09 00:23:27 +0000 UTC" firstStartedPulling="2025-09-09 00:23:28.759360166 +0000 UTC m=+43.972258735" lastFinishedPulling="2025-09-09 00:23:40.579719271 +0000 UTC m=+55.792617830" observedRunningTime="2025-09-09 00:23:41.15303098 +0000 UTC m=+56.365929539" watchObservedRunningTime="2025-09-09 00:23:41.153399836 +0000 UTC m=+56.366298405" Sep 9 00:23:41.154626 kubelet[2831]: I0909 00:23:41.154542 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76bd4cd4c9-lh8kj" podStartSLOduration=34.954205897 podStartE2EDuration="42.154509348s" podCreationTimestamp="2025-09-09 00:22:59 +0000 UTC" firstStartedPulling="2025-09-09 00:23:30.481337312 +0000 UTC m=+45.694235881" lastFinishedPulling="2025-09-09 00:23:37.681640762 +0000 UTC m=+52.894539332" observedRunningTime="2025-09-09 00:23:38.21359218 +0000 UTC m=+53.426490749" watchObservedRunningTime="2025-09-09 00:23:41.154509348 +0000 UTC m=+56.367407927" Sep 9 00:23:44.675255 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:47394.service - OpenSSH per-connection server daemon (10.0.0.1:47394). 
Sep 9 00:23:44.683713 kubelet[2831]: I0909 00:23:44.681645 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 00:23:44.776297 sshd[5205]: Accepted publickey for core from 10.0.0.1 port 47394 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:23:44.777430 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:23:44.787644 systemd-logind[1576]: New session 13 of user core.
Sep 9 00:23:44.794394 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 00:23:45.109872 containerd[1591]: time="2025-09-09T00:23:45.109501786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:45.110826 containerd[1591]: time="2025-09-09T00:23:45.110282087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 9 00:23:45.114838 containerd[1591]: time="2025-09-09T00:23:45.114805534Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:45.118124 containerd[1591]: time="2025-09-09T00:23:45.118058045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:45.118821 containerd[1591]: time="2025-09-09T00:23:45.118784024Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.538823017s"
Sep 9 00:23:45.118886 containerd[1591]: time="2025-09-09T00:23:45.118824760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 9 00:23:45.120476 containerd[1591]: time="2025-09-09T00:23:45.120413395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 9 00:23:45.137982 containerd[1591]: time="2025-09-09T00:23:45.137930316Z" level=info msg="CreateContainer within sandbox \"7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 9 00:23:45.148423 containerd[1591]: time="2025-09-09T00:23:45.148366038Z" level=info msg="Container 55570bfdef84611c752c7ae4bf997302a5495b41a035c7d10793e594a7cc9855: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:23:45.160548 containerd[1591]: time="2025-09-09T00:23:45.160502367Z" level=info msg="CreateContainer within sandbox \"7e38dd476667257e3991c1ae3270eafec6a0c892ab475041c4669165cc91d43b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"55570bfdef84611c752c7ae4bf997302a5495b41a035c7d10793e594a7cc9855\""
Sep 9 00:23:45.161507 containerd[1591]: time="2025-09-09T00:23:45.161450944Z" level=info msg="StartContainer for \"55570bfdef84611c752c7ae4bf997302a5495b41a035c7d10793e594a7cc9855\""
Sep 9 00:23:45.163853 containerd[1591]: time="2025-09-09T00:23:45.163813158Z" level=info msg="connecting to shim 55570bfdef84611c752c7ae4bf997302a5495b41a035c7d10793e594a7cc9855" address="unix:///run/containerd/s/62639c5b34b4168126678b3fc8bcb7412b699f5c4ba3caa6fd01921d9362a7ce" protocol=ttrpc version=3
Sep 9 00:23:45.173986 sshd[5211]: Connection closed by 10.0.0.1 port 47394
Sep 9 00:23:45.174701 sshd-session[5205]: pam_unix(sshd:session): session closed for user core
Sep 9 00:23:45.183178 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:47394.service: Deactivated successfully.
Sep 9 00:23:45.185487 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 00:23:45.186947 systemd-logind[1576]: Session 13 logged out. Waiting for processes to exit.
Sep 9 00:23:45.190120 systemd-logind[1576]: Removed session 13.
Sep 9 00:23:45.192490 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:47402.service - OpenSSH per-connection server daemon (10.0.0.1:47402).
Sep 9 00:23:45.199398 systemd[1]: Started cri-containerd-55570bfdef84611c752c7ae4bf997302a5495b41a035c7d10793e594a7cc9855.scope - libcontainer container 55570bfdef84611c752c7ae4bf997302a5495b41a035c7d10793e594a7cc9855.
Sep 9 00:23:45.236047 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 47402 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:23:45.239079 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:23:45.244522 systemd-logind[1576]: New session 14 of user core.
Sep 9 00:23:45.250562 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 00:23:45.257324 containerd[1591]: time="2025-09-09T00:23:45.257289167Z" level=info msg="StartContainer for \"55570bfdef84611c752c7ae4bf997302a5495b41a035c7d10793e594a7cc9855\" returns successfully"
Sep 9 00:23:45.426466 sshd[5266]: Connection closed by 10.0.0.1 port 47402
Sep 9 00:23:45.428963 sshd-session[5241]: pam_unix(sshd:session): session closed for user core
Sep 9 00:23:45.440730 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:47402.service: Deactivated successfully.
Sep 9 00:23:45.444361 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 00:23:45.446465 systemd-logind[1576]: Session 14 logged out. Waiting for processes to exit.
Sep 9 00:23:45.452120 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:47406.service - OpenSSH per-connection server daemon (10.0.0.1:47406).
Sep 9 00:23:45.453541 systemd-logind[1576]: Removed session 14.
Sep 9 00:23:45.505500 sshd[5287]: Accepted publickey for core from 10.0.0.1 port 47406 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:23:45.507315 sshd-session[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:23:45.512137 systemd-logind[1576]: New session 15 of user core.
Sep 9 00:23:45.522490 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 00:23:45.644547 sshd[5290]: Connection closed by 10.0.0.1 port 47406
Sep 9 00:23:45.644924 sshd-session[5287]: pam_unix(sshd:session): session closed for user core
Sep 9 00:23:45.650344 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:47406.service: Deactivated successfully.
Sep 9 00:23:45.652760 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 00:23:45.654637 systemd-logind[1576]: Session 15 logged out. Waiting for processes to exit.
Sep 9 00:23:45.655815 systemd-logind[1576]: Removed session 15.
Sep 9 00:23:46.136438 kubelet[2831]: I0909 00:23:46.136205 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b5bc6bd7-p9249" podStartSLOduration=28.281589389 podStartE2EDuration="42.136185441s" podCreationTimestamp="2025-09-09 00:23:04 +0000 UTC" firstStartedPulling="2025-09-09 00:23:31.264921925 +0000 UTC m=+46.477820494" lastFinishedPulling="2025-09-09 00:23:45.119517977 +0000 UTC m=+60.332416546" observedRunningTime="2025-09-09 00:23:46.135998588 +0000 UTC m=+61.348897168" watchObservedRunningTime="2025-09-09 00:23:46.136185441 +0000 UTC m=+61.349084010"
Sep 9 00:23:46.179877 containerd[1591]: time="2025-09-09T00:23:46.179821032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55570bfdef84611c752c7ae4bf997302a5495b41a035c7d10793e594a7cc9855\" id:\"65c1e3c2587275408e617c63808b2b471379c53fca5e41c5b8ba767b5504eaa5\" pid:5317 exited_at:{seconds:1757377426 nanos:179465481}"
Sep 9 00:23:47.959554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350221962.mount: Deactivated successfully.
Sep 9 00:23:48.664546 containerd[1591]: time="2025-09-09T00:23:48.664461274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:48.665404 containerd[1591]: time="2025-09-09T00:23:48.665358686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 9 00:23:48.666785 containerd[1591]: time="2025-09-09T00:23:48.666748344Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:48.669500 containerd[1591]: time="2025-09-09T00:23:48.669441630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:48.670134 containerd[1591]: time="2025-09-09T00:23:48.670093849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.549648515s"
Sep 9 00:23:48.670134 containerd[1591]: time="2025-09-09T00:23:48.670124076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 9 00:23:48.671053 containerd[1591]: time="2025-09-09T00:23:48.671027639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 9 00:23:48.676375 containerd[1591]: time="2025-09-09T00:23:48.676327087Z" level=info msg="CreateContainer within sandbox \"054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 9 00:23:48.694220 containerd[1591]: time="2025-09-09T00:23:48.694090725Z" level=info msg="Container 8b34e117855152e55ee66bcf39b93ba69850b7696fbf7ae144b02b4a11ac2d69: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:23:48.704673 containerd[1591]: time="2025-09-09T00:23:48.704624458Z" level=info msg="CreateContainer within sandbox \"054fb1dcbe3c8d6f55e6a5f373dc9c4cf8dac8204803770d641939d01eda6322\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8b34e117855152e55ee66bcf39b93ba69850b7696fbf7ae144b02b4a11ac2d69\""
Sep 9 00:23:48.705399 containerd[1591]: time="2025-09-09T00:23:48.705340327Z" level=info msg="StartContainer for \"8b34e117855152e55ee66bcf39b93ba69850b7696fbf7ae144b02b4a11ac2d69\""
Sep 9 00:23:48.706652 containerd[1591]: time="2025-09-09T00:23:48.706624919Z" level=info msg="connecting to shim 8b34e117855152e55ee66bcf39b93ba69850b7696fbf7ae144b02b4a11ac2d69" address="unix:///run/containerd/s/8317c703acb81cbba97e36021e99deb1540f6c532358ec75b1f6a6cd3cf4a88c" protocol=ttrpc version=3
Sep 9 00:23:48.744579 systemd[1]: Started cri-containerd-8b34e117855152e55ee66bcf39b93ba69850b7696fbf7ae144b02b4a11ac2d69.scope - libcontainer container 8b34e117855152e55ee66bcf39b93ba69850b7696fbf7ae144b02b4a11ac2d69.
Sep 9 00:23:48.803232 containerd[1591]: time="2025-09-09T00:23:48.803174644Z" level=info msg="StartContainer for \"8b34e117855152e55ee66bcf39b93ba69850b7696fbf7ae144b02b4a11ac2d69\" returns successfully"
Sep 9 00:23:49.224248 containerd[1591]: time="2025-09-09T00:23:49.224188891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b34e117855152e55ee66bcf39b93ba69850b7696fbf7ae144b02b4a11ac2d69\" id:\"78bfda6bd0037a4d6ef678d809305c38f381e20ba47fde30d1b9235efb48d856\" pid:5385 exit_status:1 exited_at:{seconds:1757377429 nanos:223686273}"
Sep 9 00:23:50.237038 containerd[1591]: time="2025-09-09T00:23:50.236994522Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b34e117855152e55ee66bcf39b93ba69850b7696fbf7ae144b02b4a11ac2d69\" id:\"4b46dac86dbe14844381a377e89f4131ffc2f25a9f2e8c7b467bec5df960a33c\" pid:5418 exit_status:1 exited_at:{seconds:1757377430 nanos:236657016}"
Sep 9 00:23:50.661403 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:45406.service - OpenSSH per-connection server daemon (10.0.0.1:45406).
Sep 9 00:23:50.747571 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 45406 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:23:50.749990 sshd-session[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:23:50.759561 systemd-logind[1576]: New session 16 of user core.
Sep 9 00:23:50.767192 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 00:23:50.903933 sshd[5435]: Connection closed by 10.0.0.1 port 45406
Sep 9 00:23:50.904596 sshd-session[5432]: pam_unix(sshd:session): session closed for user core
Sep 9 00:23:50.911320 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:45406.service: Deactivated successfully.
Sep 9 00:23:50.914196 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 00:23:50.915658 systemd-logind[1576]: Session 16 logged out. Waiting for processes to exit.
Sep 9 00:23:50.917311 systemd-logind[1576]: Removed session 16.
Sep 9 00:23:51.519841 containerd[1591]: time="2025-09-09T00:23:51.519764994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:51.520564 containerd[1591]: time="2025-09-09T00:23:51.520487385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 9 00:23:51.521835 containerd[1591]: time="2025-09-09T00:23:51.521784990Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:51.524046 containerd[1591]: time="2025-09-09T00:23:51.523991728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:23:51.524599 containerd[1591]: time="2025-09-09T00:23:51.524542717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.853489829s"
Sep 9 00:23:51.524599 containerd[1591]: time="2025-09-09T00:23:51.524592280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 9 00:23:51.529596 containerd[1591]: time="2025-09-09T00:23:51.529540113Z" level=info msg="CreateContainer within sandbox \"cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 00:23:51.539720 containerd[1591]: time="2025-09-09T00:23:51.539662617Z" level=info msg="Container e09c04cfab5a8f2b9369dd748b46bd1913966795564645d5cc631cf0f1882a82: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:23:51.555855 containerd[1591]: time="2025-09-09T00:23:51.555795577Z" level=info msg="CreateContainer within sandbox \"cb85e500606295ffeec9827d57b66b7ab8b132c3cb9c90765b7a64846f22173b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e09c04cfab5a8f2b9369dd748b46bd1913966795564645d5cc631cf0f1882a82\""
Sep 9 00:23:51.556380 containerd[1591]: time="2025-09-09T00:23:51.556350452Z" level=info msg="StartContainer for \"e09c04cfab5a8f2b9369dd748b46bd1913966795564645d5cc631cf0f1882a82\""
Sep 9 00:23:51.558096 containerd[1591]: time="2025-09-09T00:23:51.558067258Z" level=info msg="connecting to shim e09c04cfab5a8f2b9369dd748b46bd1913966795564645d5cc631cf0f1882a82" address="unix:///run/containerd/s/efe9ff8f067c0b269ef1aae0cf077c10521de5a8512524b45a767f96c3427e5e" protocol=ttrpc version=3
Sep 9 00:23:51.580538 systemd[1]: Started cri-containerd-e09c04cfab5a8f2b9369dd748b46bd1913966795564645d5cc631cf0f1882a82.scope - libcontainer container e09c04cfab5a8f2b9369dd748b46bd1913966795564645d5cc631cf0f1882a82.
Sep 9 00:23:51.633333 containerd[1591]: time="2025-09-09T00:23:51.633277546Z" level=info msg="StartContainer for \"e09c04cfab5a8f2b9369dd748b46bd1913966795564645d5cc631cf0f1882a82\" returns successfully"
Sep 9 00:23:51.967626 kubelet[2831]: I0909 00:23:51.967586 2831 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 00:23:51.976354 kubelet[2831]: I0909 00:23:51.976326 2831 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 00:23:52.260112 kubelet[2831]: I0909 00:23:52.259934 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6fl7r" podStartSLOduration=26.019532656 podStartE2EDuration="48.259912457s" podCreationTimestamp="2025-09-09 00:23:04 +0000 UTC" firstStartedPulling="2025-09-09 00:23:29.284952915 +0000 UTC m=+44.497851484" lastFinishedPulling="2025-09-09 00:23:51.525332715 +0000 UTC m=+66.738231285" observedRunningTime="2025-09-09 00:23:52.259427313 +0000 UTC m=+67.472325882" watchObservedRunningTime="2025-09-09 00:23:52.259912457 +0000 UTC m=+67.472811026"
Sep 9 00:23:52.261078 kubelet[2831]: I0909 00:23:52.260920 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-rc24j" podStartSLOduration=31.989818995 podStartE2EDuration="49.260912601s" podCreationTimestamp="2025-09-09 00:23:03 +0000 UTC" firstStartedPulling="2025-09-09 00:23:31.399856187 +0000 UTC m=+46.612754756" lastFinishedPulling="2025-09-09 00:23:48.670949793 +0000 UTC m=+63.883848362" observedRunningTime="2025-09-09 00:23:49.415080556 +0000 UTC m=+64.627979135" watchObservedRunningTime="2025-09-09 00:23:52.260912601 +0000 UTC m=+67.473811170"
Sep 9 00:23:55.912342 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:45414.service - OpenSSH per-connection server daemon (10.0.0.1:45414).
Sep 9 00:23:55.970096 containerd[1591]: time="2025-09-09T00:23:55.970045023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b34e117855152e55ee66bcf39b93ba69850b7696fbf7ae144b02b4a11ac2d69\" id:\"21f23d132b271a035e478e799ec196a146a3359aa8df1a07b4536900ebc8fc89\" pid:5505 exited_at:{seconds:1757377435 nanos:969671359}"
Sep 9 00:23:55.996699 sshd[5512]: Accepted publickey for core from 10.0.0.1 port 45414 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:23:55.998405 sshd-session[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:23:56.003508 systemd-logind[1576]: New session 17 of user core.
Sep 9 00:23:56.017420 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 00:23:56.165550 sshd[5519]: Connection closed by 10.0.0.1 port 45414
Sep 9 00:23:56.165837 sshd-session[5512]: pam_unix(sshd:session): session closed for user core
Sep 9 00:23:56.170866 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:45414.service: Deactivated successfully.
Sep 9 00:23:56.173209 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 00:23:56.174406 systemd-logind[1576]: Session 17 logged out. Waiting for processes to exit.
Sep 9 00:23:56.175871 systemd-logind[1576]: Removed session 17.
Sep 9 00:23:58.130582 containerd[1591]: time="2025-09-09T00:23:58.130537836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"359412b840ffd04db4fadff63c4c8d9c66d17312fad0b0743d32ab1cf3711835\" id:\"007fdf00dc135329dcfb0ee1ed43e33543f0d935f0fbd58b2e37a719dc05a4f1\" pid:5544 exit_status:1 exited_at:{seconds:1757377438 nanos:130195431}"
Sep 9 00:24:01.181118 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:35536.service - OpenSSH per-connection server daemon (10.0.0.1:35536).
Sep 9 00:24:01.236236 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 35536 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:24:01.238412 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:24:01.244757 systemd-logind[1576]: New session 18 of user core.
Sep 9 00:24:01.250463 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 00:24:01.372468 sshd[5562]: Connection closed by 10.0.0.1 port 35536
Sep 9 00:24:01.372775 sshd-session[5560]: pam_unix(sshd:session): session closed for user core
Sep 9 00:24:01.378108 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:35536.service: Deactivated successfully.
Sep 9 00:24:01.380984 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 00:24:01.381842 systemd-logind[1576]: Session 18 logged out. Waiting for processes to exit.
Sep 9 00:24:01.383345 systemd-logind[1576]: Removed session 18.
Sep 9 00:24:01.915765 kubelet[2831]: I0909 00:24:01.915707 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 00:24:04.891315 kubelet[2831]: E0909 00:24:04.890965 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:24:06.389616 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:35552.service - OpenSSH per-connection server daemon (10.0.0.1:35552).
Sep 9 00:24:06.452690 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 35552 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:24:06.454331 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:24:06.459242 systemd-logind[1576]: New session 19 of user core.
Sep 9 00:24:06.466482 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 00:24:06.633109 sshd[5580]: Connection closed by 10.0.0.1 port 35552
Sep 9 00:24:06.633793 sshd-session[5578]: pam_unix(sshd:session): session closed for user core
Sep 9 00:24:06.640656 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:35552.service: Deactivated successfully.
Sep 9 00:24:06.644008 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 00:24:06.644873 systemd-logind[1576]: Session 19 logged out. Waiting for processes to exit.
Sep 9 00:24:06.646455 systemd-logind[1576]: Removed session 19.
Sep 9 00:24:06.892193 kubelet[2831]: E0909 00:24:06.891994 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:24:08.463383 containerd[1591]: time="2025-09-09T00:24:08.463325097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55570bfdef84611c752c7ae4bf997302a5495b41a035c7d10793e594a7cc9855\" id:\"23bbb51520c981bf138425fddd1531c78320c5410b32ff186bfc5d912f81ae8b\" pid:5604 exited_at:{seconds:1757377448 nanos:463069967}"
Sep 9 00:24:11.653637 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:54882.service - OpenSSH per-connection server daemon (10.0.0.1:54882).
Sep 9 00:24:11.713562 sshd[5623]: Accepted publickey for core from 10.0.0.1 port 54882 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:24:11.715699 sshd-session[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:24:11.721723 systemd-logind[1576]: New session 20 of user core.
Sep 9 00:24:11.732479 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 00:24:11.861910 sshd[5625]: Connection closed by 10.0.0.1 port 54882
Sep 9 00:24:11.862290 sshd-session[5623]: pam_unix(sshd:session): session closed for user core
Sep 9 00:24:11.875966 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:54882.service: Deactivated successfully.
Sep 9 00:24:11.878494 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 00:24:11.879378 systemd-logind[1576]: Session 20 logged out. Waiting for processes to exit.
Sep 9 00:24:11.883733 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:54890.service - OpenSSH per-connection server daemon (10.0.0.1:54890).
Sep 9 00:24:11.884670 systemd-logind[1576]: Removed session 20.
Sep 9 00:24:11.937575 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 54890 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:24:11.940350 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:24:11.945720 systemd-logind[1576]: New session 21 of user core.
Sep 9 00:24:11.966073 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 00:24:13.322950 sshd[5641]: Connection closed by 10.0.0.1 port 54890
Sep 9 00:24:13.323516 sshd-session[5639]: pam_unix(sshd:session): session closed for user core
Sep 9 00:24:13.336486 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:54890.service: Deactivated successfully.
Sep 9 00:24:13.338538 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 00:24:13.339519 systemd-logind[1576]: Session 21 logged out. Waiting for processes to exit.
Sep 9 00:24:13.342960 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:54896.service - OpenSSH per-connection server daemon (10.0.0.1:54896).
Sep 9 00:24:13.343800 systemd-logind[1576]: Removed session 21.
Sep 9 00:24:13.407199 sshd[5653]: Accepted publickey for core from 10.0.0.1 port 54896 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:24:13.408740 sshd-session[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:24:13.413344 systemd-logind[1576]: New session 22 of user core.
Sep 9 00:24:13.421416 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 00:24:14.775972 sshd[5655]: Connection closed by 10.0.0.1 port 54896
Sep 9 00:24:14.776434 sshd-session[5653]: pam_unix(sshd:session): session closed for user core
Sep 9 00:24:14.791780 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:54896.service: Deactivated successfully.
Sep 9 00:24:14.794033 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 00:24:14.794964 systemd-logind[1576]: Session 22 logged out. Waiting for processes to exit.
Sep 9 00:24:14.798763 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:54908.service - OpenSSH per-connection server daemon (10.0.0.1:54908).
Sep 9 00:24:14.799866 systemd-logind[1576]: Removed session 22.
Sep 9 00:24:14.855348 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 54908 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:24:14.861836 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:24:14.868943 systemd-logind[1576]: New session 23 of user core.
Sep 9 00:24:14.879485 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 00:24:15.928529 sshd[5676]: Connection closed by 10.0.0.1 port 54908
Sep 9 00:24:15.928919 sshd-session[5674]: pam_unix(sshd:session): session closed for user core
Sep 9 00:24:15.939080 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:54908.service: Deactivated successfully.
Sep 9 00:24:15.941592 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 00:24:15.942789 systemd-logind[1576]: Session 23 logged out. Waiting for processes to exit.
Sep 9 00:24:15.946819 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:54914.service - OpenSSH per-connection server daemon (10.0.0.1:54914).
Sep 9 00:24:15.948660 systemd-logind[1576]: Removed session 23.
Sep 9 00:24:16.000650 sshd[5688]: Accepted publickey for core from 10.0.0.1 port 54914 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:24:16.002712 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:24:16.009027 systemd-logind[1576]: New session 24 of user core.
Sep 9 00:24:16.025609 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 00:24:16.172787 containerd[1591]: time="2025-09-09T00:24:16.172725827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55570bfdef84611c752c7ae4bf997302a5495b41a035c7d10793e594a7cc9855\" id:\"f1e2ca97c1d7fcd9bc4ebf32c07632ace5e44caa4fe1e0f44af2d2ad00229d08\" pid:5712 exited_at:{seconds:1757377456 nanos:172420412}"
Sep 9 00:24:16.186696 sshd[5690]: Connection closed by 10.0.0.1 port 54914
Sep 9 00:24:16.186971 sshd-session[5688]: pam_unix(sshd:session): session closed for user core
Sep 9 00:24:16.192192 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:54914.service: Deactivated successfully.
Sep 9 00:24:16.195058 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 00:24:16.196604 systemd-logind[1576]: Session 24 logged out. Waiting for processes to exit.
Sep 9 00:24:16.198946 systemd-logind[1576]: Removed session 24.
Sep 9 00:24:17.889811 kubelet[2831]: E0909 00:24:17.889239 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:24:19.888793 kubelet[2831]: E0909 00:24:19.888749 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:24:20.245632 containerd[1591]: time="2025-09-09T00:24:20.224540859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b34e117855152e55ee66bcf39b93ba69850b7696fbf7ae144b02b4a11ac2d69\" id:\"4f7ad0b6af987ffedfb97dcef58450a6e4779b415c84fdb482f402d371c1c8aa\" pid:5738 exited_at:{seconds:1757377460 nanos:224189176}"
Sep 9 00:24:21.204048 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:56498.service - OpenSSH per-connection server daemon (10.0.0.1:56498).
Sep 9 00:24:21.268687 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 56498 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:24:21.270820 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:24:21.276657 systemd-logind[1576]: New session 25 of user core.
Sep 9 00:24:21.285453 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 00:24:21.410158 sshd[5755]: Connection closed by 10.0.0.1 port 56498
Sep 9 00:24:21.410536 sshd-session[5753]: pam_unix(sshd:session): session closed for user core
Sep 9 00:24:21.415575 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:56498.service: Deactivated successfully.
Sep 9 00:24:21.418292 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 00:24:21.419437 systemd-logind[1576]: Session 25 logged out. Waiting for processes to exit.
Sep 9 00:24:21.421190 systemd-logind[1576]: Removed session 25.
Sep 9 00:24:26.427139 systemd[1]: Started sshd@25-10.0.0.81:22-10.0.0.1:56508.service - OpenSSH per-connection server daemon (10.0.0.1:56508).
Sep 9 00:24:26.475124 sshd[5772]: Accepted publickey for core from 10.0.0.1 port 56508 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:24:26.476906 sshd-session[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:24:26.481368 systemd-logind[1576]: New session 26 of user core.
Sep 9 00:24:26.492460 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 00:24:26.605301 sshd[5774]: Connection closed by 10.0.0.1 port 56508
Sep 9 00:24:26.605635 sshd-session[5772]: pam_unix(sshd:session): session closed for user core
Sep 9 00:24:26.610299 systemd[1]: sshd@25-10.0.0.81:22-10.0.0.1:56508.service: Deactivated successfully.
Sep 9 00:24:26.612640 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 00:24:26.613732 systemd-logind[1576]: Session 26 logged out. Waiting for processes to exit.
Sep 9 00:24:26.615052 systemd-logind[1576]: Removed session 26.
Sep 9 00:24:26.889301 kubelet[2831]: E0909 00:24:26.889179 2831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:24:28.133536 containerd[1591]: time="2025-09-09T00:24:28.133377058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"359412b840ffd04db4fadff63c4c8d9c66d17312fad0b0743d32ab1cf3711835\" id:\"ddfcdfc15012a369451e7e596d5a84dd9d8bed3d9be5d5d819a4a3822c689c79\" pid:5797 exited_at:{seconds:1757377468 nanos:132912263}"
Sep 9 00:24:31.625858 systemd[1]: Started sshd@26-10.0.0.81:22-10.0.0.1:53176.service - OpenSSH per-connection server daemon (10.0.0.1:53176).
Sep 9 00:24:31.714981 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 53176 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:24:31.717120 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:24:31.723836 systemd-logind[1576]: New session 27 of user core.
Sep 9 00:24:31.737610 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 00:24:31.915320 sshd[5815]: Connection closed by 10.0.0.1 port 53176
Sep 9 00:24:31.915881 sshd-session[5813]: pam_unix(sshd:session): session closed for user core
Sep 9 00:24:31.922483 systemd[1]: sshd@26-10.0.0.81:22-10.0.0.1:53176.service: Deactivated successfully.
Sep 9 00:24:31.925909 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 00:24:31.926967 systemd-logind[1576]: Session 27 logged out. Waiting for processes to exit.
Sep 9 00:24:31.929912 systemd-logind[1576]: Removed session 27.
Sep 9 00:24:36.933182 systemd[1]: Started sshd@27-10.0.0.81:22-10.0.0.1:53182.service - OpenSSH per-connection server daemon (10.0.0.1:53182).
Sep 9 00:24:36.987252 sshd[5831]: Accepted publickey for core from 10.0.0.1 port 53182 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:24:36.989081 sshd-session[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:24:36.995072 systemd-logind[1576]: New session 28 of user core.
Sep 9 00:24:36.999425 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 9 00:24:37.195981 sshd[5834]: Connection closed by 10.0.0.1 port 53182
Sep 9 00:24:37.196587 sshd-session[5831]: pam_unix(sshd:session): session closed for user core
Sep 9 00:24:37.201694 systemd[1]: sshd@27-10.0.0.81:22-10.0.0.1:53182.service: Deactivated successfully.
Sep 9 00:24:37.204514 systemd[1]: session-28.scope: Deactivated successfully.
Sep 9 00:24:37.205488 systemd-logind[1576]: Session 28 logged out. Waiting for processes to exit.
Sep 9 00:24:37.207286 systemd-logind[1576]: Removed session 28.