Sep 6 09:55:54.781195 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat Sep 6 08:10:27 -00 2025
Sep 6 09:55:54.781216 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c69e1af3de5da13c6be143e3ab518bb528f3fa67a9a488f7fa132987ebb99256
Sep 6 09:55:54.781224 kernel: BIOS-provided physical RAM map:
Sep 6 09:55:54.781231 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 6 09:55:54.781237 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 6 09:55:54.781244 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 6 09:55:54.781251 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 6 09:55:54.781257 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 6 09:55:54.781269 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 6 09:55:54.781275 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 6 09:55:54.781282 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 6 09:55:54.781288 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 6 09:55:54.781294 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 6 09:55:54.781301 kernel: NX (Execute Disable) protection: active
Sep 6 09:55:54.781311 kernel: APIC: Static calls initialized
Sep 6 09:55:54.781318 kernel: SMBIOS 2.8 present.
Sep 6 09:55:54.781327 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 6 09:55:54.781334 kernel: DMI: Memory slots populated: 1/1
Sep 6 09:55:54.781340 kernel: Hypervisor detected: KVM
Sep 6 09:55:54.781347 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 6 09:55:54.781354 kernel: kvm-clock: using sched offset of 4326802253 cycles
Sep 6 09:55:54.781361 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 6 09:55:54.781369 kernel: tsc: Detected 2794.750 MHz processor
Sep 6 09:55:54.781376 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 6 09:55:54.781386 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 6 09:55:54.781395 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 6 09:55:54.781404 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 6 09:55:54.781414 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 6 09:55:54.781422 kernel: Using GB pages for direct mapping
Sep 6 09:55:54.781440 kernel: ACPI: Early table checksum verification disabled
Sep 6 09:55:54.781449 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 6 09:55:54.781457 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:55:54.781469 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:55:54.781478 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:55:54.781488 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 6 09:55:54.781497 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:55:54.781504 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:55:54.781511 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:55:54.781518 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 09:55:54.781525 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 6 09:55:54.781537 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 6 09:55:54.781544 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 6 09:55:54.781551 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 6 09:55:54.781559 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 6 09:55:54.781566 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 6 09:55:54.781575 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 6 09:55:54.781585 kernel: No NUMA configuration found
Sep 6 09:55:54.781594 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 6 09:55:54.781602 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 6 09:55:54.781609 kernel: Zone ranges:
Sep 6 09:55:54.781616 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 6 09:55:54.781624 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 6 09:55:54.781631 kernel: Normal empty
Sep 6 09:55:54.781638 kernel: Device empty
Sep 6 09:55:54.781645 kernel: Movable zone start for each node
Sep 6 09:55:54.781652 kernel: Early memory node ranges
Sep 6 09:55:54.781661 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 6 09:55:54.781668 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 6 09:55:54.781675 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 6 09:55:54.781683 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 09:55:54.781690 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 6 09:55:54.781697 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 6 09:55:54.781704 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 6 09:55:54.781715 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 6 09:55:54.781722 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 6 09:55:54.781732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 6 09:55:54.781739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 6 09:55:54.781749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 6 09:55:54.781756 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 6 09:55:54.781764 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 6 09:55:54.781771 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 6 09:55:54.781778 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 6 09:55:54.781785 kernel: TSC deadline timer available
Sep 6 09:55:54.781792 kernel: CPU topo: Max. logical packages: 1
Sep 6 09:55:54.781802 kernel: CPU topo: Max. logical dies: 1
Sep 6 09:55:54.781809 kernel: CPU topo: Max. dies per package: 1
Sep 6 09:55:54.781842 kernel: CPU topo: Max. threads per core: 1
Sep 6 09:55:54.781849 kernel: CPU topo: Num. cores per package: 4
Sep 6 09:55:54.781857 kernel: CPU topo: Num. threads per package: 4
Sep 6 09:55:54.781864 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 6 09:55:54.781871 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 6 09:55:54.781878 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 6 09:55:54.781885 kernel: kvm-guest: setup PV sched yield
Sep 6 09:55:54.781893 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 6 09:55:54.781902 kernel: Booting paravirtualized kernel on KVM
Sep 6 09:55:54.781910 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 6 09:55:54.781917 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 6 09:55:54.781925 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 6 09:55:54.781932 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 6 09:55:54.781939 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 6 09:55:54.781946 kernel: kvm-guest: PV spinlocks enabled
Sep 6 09:55:54.781953 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 6 09:55:54.781963 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c69e1af3de5da13c6be143e3ab518bb528f3fa67a9a488f7fa132987ebb99256
Sep 6 09:55:54.781971 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 09:55:54.781978 kernel: random: crng init done
Sep 6 09:55:54.781985 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 09:55:54.781993 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 09:55:54.782000 kernel: Fallback order for Node 0: 0
Sep 6 09:55:54.782007 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 6 09:55:54.782014 kernel: Policy zone: DMA32
Sep 6 09:55:54.782022 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 09:55:54.782031 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 6 09:55:54.782038 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 6 09:55:54.782045 kernel: ftrace: allocated 157 pages with 5 groups
Sep 6 09:55:54.782052 kernel: Dynamic Preempt: voluntary
Sep 6 09:55:54.782059 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 6 09:55:54.782067 kernel: rcu: RCU event tracing is enabled.
Sep 6 09:55:54.782074 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 6 09:55:54.782082 kernel: Trampoline variant of Tasks RCU enabled.
Sep 6 09:55:54.782092 kernel: Rude variant of Tasks RCU enabled.
Sep 6 09:55:54.782101 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 09:55:54.782108 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 09:55:54.782116 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 6 09:55:54.782123 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 6 09:55:54.782130 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 6 09:55:54.782138 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 6 09:55:54.782145 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 6 09:55:54.782152 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 6 09:55:54.782168 kernel: Console: colour VGA+ 80x25
Sep 6 09:55:54.782176 kernel: printk: legacy console [ttyS0] enabled
Sep 6 09:55:54.782183 kernel: ACPI: Core revision 20240827
Sep 6 09:55:54.782193 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 6 09:55:54.782200 kernel: APIC: Switch to symmetric I/O mode setup
Sep 6 09:55:54.782208 kernel: x2apic enabled
Sep 6 09:55:54.782215 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 6 09:55:54.782225 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 6 09:55:54.782233 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 6 09:55:54.782242 kernel: kvm-guest: setup PV IPIs
Sep 6 09:55:54.782250 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 6 09:55:54.782258 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 6 09:55:54.782266 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 6 09:55:54.782273 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 6 09:55:54.782281 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 6 09:55:54.782289 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 6 09:55:54.782297 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 6 09:55:54.782306 kernel: Spectre V2 : Mitigation: Retpolines
Sep 6 09:55:54.782314 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 6 09:55:54.782321 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 6 09:55:54.782329 kernel: active return thunk: retbleed_return_thunk
Sep 6 09:55:54.782336 kernel: RETBleed: Mitigation: untrained return thunk
Sep 6 09:55:54.782344 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 6 09:55:54.782351 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 6 09:55:54.782359 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 6 09:55:54.782367 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 6 09:55:54.782377 kernel: active return thunk: srso_return_thunk
Sep 6 09:55:54.782384 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 6 09:55:54.782392 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 6 09:55:54.782399 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 6 09:55:54.782407 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 6 09:55:54.782414 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 6 09:55:54.782422 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 6 09:55:54.782438 kernel: Freeing SMP alternatives memory: 32K
Sep 6 09:55:54.782447 kernel: pid_max: default: 32768 minimum: 301
Sep 6 09:55:54.782456 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 6 09:55:54.782463 kernel: landlock: Up and running.
Sep 6 09:55:54.782470 kernel: SELinux: Initializing.
Sep 6 09:55:54.782481 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 09:55:54.782489 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 09:55:54.782496 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 6 09:55:54.782504 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 6 09:55:54.782511 kernel: ... version: 0
Sep 6 09:55:54.782522 kernel: ... bit width: 48
Sep 6 09:55:54.782533 kernel: ... generic registers: 6
Sep 6 09:55:54.782543 kernel: ... value mask: 0000ffffffffffff
Sep 6 09:55:54.782553 kernel: ... max period: 00007fffffffffff
Sep 6 09:55:54.782563 kernel: ... fixed-purpose events: 0
Sep 6 09:55:54.782572 kernel: ... event mask: 000000000000003f
Sep 6 09:55:54.782582 kernel: signal: max sigframe size: 1776
Sep 6 09:55:54.782592 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 09:55:54.782602 kernel: rcu: Max phase no-delay instances is 400.
Sep 6 09:55:54.782610 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 6 09:55:54.782620 kernel: smp: Bringing up secondary CPUs ...
Sep 6 09:55:54.782627 kernel: smpboot: x86: Booting SMP configuration:
Sep 6 09:55:54.782635 kernel: .... node #0, CPUs: #1 #2 #3
Sep 6 09:55:54.782642 kernel: smp: Brought up 1 node, 4 CPUs
Sep 6 09:55:54.782649 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 6 09:55:54.782657 kernel: Memory: 2428920K/2571752K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 136904K reserved, 0K cma-reserved)
Sep 6 09:55:54.782665 kernel: devtmpfs: initialized
Sep 6 09:55:54.782672 kernel: x86/mm: Memory block size: 128MB
Sep 6 09:55:54.782680 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 09:55:54.782689 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 6 09:55:54.782697 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 09:55:54.782704 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 09:55:54.782712 kernel: audit: initializing netlink subsys (disabled)
Sep 6 09:55:54.782719 kernel: audit: type=2000 audit(1757152552.128:1): state=initialized audit_enabled=0 res=1
Sep 6 09:55:54.782726 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 09:55:54.782734 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 6 09:55:54.782741 kernel: cpuidle: using governor menu
Sep 6 09:55:54.782750 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 09:55:54.782758 kernel: dca service started, version 1.12.1
Sep 6 09:55:54.782765 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 6 09:55:54.782773 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 6 09:55:54.782780 kernel: PCI: Using configuration type 1 for base access
Sep 6 09:55:54.782788 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 6 09:55:54.782795 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 09:55:54.782803 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 6 09:55:54.782810 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 09:55:54.782886 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 6 09:55:54.782894 kernel: ACPI: Added _OSI(Module Device)
Sep 6 09:55:54.782901 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 09:55:54.782908 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 09:55:54.782916 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 09:55:54.782923 kernel: ACPI: Interpreter enabled
Sep 6 09:55:54.782931 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 6 09:55:54.782938 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 6 09:55:54.782946 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 6 09:55:54.782953 kernel: PCI: Using E820 reservations for host bridge windows
Sep 6 09:55:54.782964 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 6 09:55:54.782971 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 09:55:54.783220 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 09:55:54.783356 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 6 09:55:54.783481 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 6 09:55:54.783492 kernel: PCI host bridge to bus 0000:00
Sep 6 09:55:54.783621 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 6 09:55:54.783735 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 6 09:55:54.783856 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 6 09:55:54.783963 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 6 09:55:54.784068 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 6 09:55:54.784177 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 6 09:55:54.784331 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 09:55:54.784559 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 6 09:55:54.784703 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 6 09:55:54.784837 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 6 09:55:54.784957 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 6 09:55:54.785073 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 6 09:55:54.785188 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 6 09:55:54.785324 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 6 09:55:54.785457 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 6 09:55:54.785576 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 6 09:55:54.785723 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 6 09:55:54.785930 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 6 09:55:54.786052 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 6 09:55:54.786170 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 6 09:55:54.786285 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 6 09:55:54.786423 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 6 09:55:54.786551 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 6 09:55:54.786717 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 6 09:55:54.786855 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 6 09:55:54.786973 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 6 09:55:54.787103 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 6 09:55:54.787220 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 6 09:55:54.787355 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 6 09:55:54.787480 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 6 09:55:54.787601 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 6 09:55:54.787735 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 6 09:55:54.787871 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 6 09:55:54.787882 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 6 09:55:54.787893 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 6 09:55:54.787901 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 6 09:55:54.787908 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 6 09:55:54.787916 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 6 09:55:54.787923 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 6 09:55:54.787931 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 6 09:55:54.787939 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 6 09:55:54.787946 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 6 09:55:54.787954 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 6 09:55:54.787963 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 6 09:55:54.787971 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 6 09:55:54.787979 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 6 09:55:54.787986 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 6 09:55:54.787994 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 6 09:55:54.788001 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 6 09:55:54.788009 kernel: iommu: Default domain type: Translated
Sep 6 09:55:54.788016 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 6 09:55:54.788024 kernel: PCI: Using ACPI for IRQ routing
Sep 6 09:55:54.788034 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 6 09:55:54.788041 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 6 09:55:54.788049 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 6 09:55:54.788165 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 6 09:55:54.788282 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 6 09:55:54.788404 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 6 09:55:54.788414 kernel: vgaarb: loaded
Sep 6 09:55:54.788422 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 6 09:55:54.788440 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 6 09:55:54.788450 kernel: clocksource: Switched to clocksource kvm-clock
Sep 6 09:55:54.788458 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 09:55:54.788466 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 09:55:54.788474 kernel: pnp: PnP ACPI init
Sep 6 09:55:54.788619 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 6 09:55:54.788630 kernel: pnp: PnP ACPI: found 6 devices
Sep 6 09:55:54.788638 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 6 09:55:54.788646 kernel: NET: Registered PF_INET protocol family
Sep 6 09:55:54.788657 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 09:55:54.788665 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 09:55:54.788672 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 09:55:54.788680 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 09:55:54.788687 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 6 09:55:54.788695 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 09:55:54.788703 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 09:55:54.788710 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 09:55:54.788720 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 09:55:54.788727 kernel: NET: Registered PF_XDP protocol family
Sep 6 09:55:54.788856 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 6 09:55:54.788963 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 6 09:55:54.789070 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 6 09:55:54.789179 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 6 09:55:54.789285 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 6 09:55:54.789390 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 6 09:55:54.789399 kernel: PCI: CLS 0 bytes, default 64
Sep 6 09:55:54.789411 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 6 09:55:54.789419 kernel: Initialise system trusted keyrings
Sep 6 09:55:54.789435 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 09:55:54.789443 kernel: Key type asymmetric registered
Sep 6 09:55:54.789450 kernel: Asymmetric key parser 'x509' registered
Sep 6 09:55:54.789458 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 6 09:55:54.789466 kernel: io scheduler mq-deadline registered
Sep 6 09:55:54.789474 kernel: io scheduler kyber registered
Sep 6 09:55:54.789481 kernel: io scheduler bfq registered
Sep 6 09:55:54.789491 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 6 09:55:54.789499 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 6 09:55:54.789507 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 6 09:55:54.789514 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 6 09:55:54.789522 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 09:55:54.789530 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 6 09:55:54.789537 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 6 09:55:54.789545 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 6 09:55:54.789553 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 6 09:55:54.789687 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 6 09:55:54.789799 kernel: rtc_cmos 00:04: registered as rtc0
Sep 6 09:55:54.789810 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 6 09:55:54.789938 kernel: rtc_cmos 00:04: setting system clock to 2025-09-06T09:55:54 UTC (1757152554)
Sep 6 09:55:54.790047 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 6 09:55:54.790058 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 6 09:55:54.790065 kernel: NET: Registered PF_INET6 protocol family
Sep 6 09:55:54.790073 kernel: Segment Routing with IPv6
Sep 6 09:55:54.790084 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 09:55:54.790092 kernel: NET: Registered PF_PACKET protocol family
Sep 6 09:55:54.790100 kernel: Key type dns_resolver registered
Sep 6 09:55:54.790107 kernel: IPI shorthand broadcast: enabled
Sep 6 09:55:54.790115 kernel: sched_clock: Marking stable (2766002134, 108979286)->(2894639188, -19657768)
Sep 6 09:55:54.790123 kernel: registered taskstats version 1
Sep 6 09:55:54.790130 kernel: Loading compiled-in X.509 certificates
Sep 6 09:55:54.790138 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: d54a04c0c6d7404ed8dd26757b3e0037e8128454'
Sep 6 09:55:54.790146 kernel: Demotion targets for Node 0: null
Sep 6 09:55:54.790155 kernel: Key type .fscrypt registered
Sep 6 09:55:54.790163 kernel: Key type fscrypt-provisioning registered
Sep 6 09:55:54.790170 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 09:55:54.790178 kernel: ima: Allocated hash algorithm: sha1
Sep 6 09:55:54.790186 kernel: ima: No architecture policies found
Sep 6 09:55:54.790193 kernel: clk: Disabling unused clocks
Sep 6 09:55:54.790200 kernel: Warning: unable to open an initial console.
Sep 6 09:55:54.790208 kernel: Freeing unused kernel image (initmem) memory: 54076K
Sep 6 09:55:54.790218 kernel: Write protecting the kernel read-only data: 24576k
Sep 6 09:55:54.790226 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 6 09:55:54.790233 kernel: Run /init as init process
Sep 6 09:55:54.790241 kernel: with arguments:
Sep 6 09:55:54.790248 kernel: /init
Sep 6 09:55:54.790256 kernel: with environment:
Sep 6 09:55:54.790263 kernel: HOME=/
Sep 6 09:55:54.790271 kernel: TERM=linux
Sep 6 09:55:54.790278 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 09:55:54.790287 systemd[1]: Successfully made /usr/ read-only.
Sep 6 09:55:54.790307 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 6 09:55:54.790318 systemd[1]: Detected virtualization kvm.
Sep 6 09:55:54.790327 systemd[1]: Detected architecture x86-64.
Sep 6 09:55:54.790335 systemd[1]: Running in initrd.
Sep 6 09:55:54.790343 systemd[1]: No hostname configured, using default hostname.
Sep 6 09:55:54.790353 systemd[1]: Hostname set to .
Sep 6 09:55:54.790361 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 09:55:54.790370 systemd[1]: Queued start job for default target initrd.target.
Sep 6 09:55:54.790378 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 09:55:54.790386 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 09:55:54.790395 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 6 09:55:54.790406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 6 09:55:54.790414 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 6 09:55:54.790432 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 6 09:55:54.790442 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 6 09:55:54.790451 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 6 09:55:54.790460 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 09:55:54.790468 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 6 09:55:54.790477 systemd[1]: Reached target paths.target - Path Units.
Sep 6 09:55:54.790487 systemd[1]: Reached target slices.target - Slice Units.
Sep 6 09:55:54.790495 systemd[1]: Reached target swap.target - Swaps.
Sep 6 09:55:54.790503 systemd[1]: Reached target timers.target - Timer Units.
Sep 6 09:55:54.790512 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 6 09:55:54.790520 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 6 09:55:54.790529 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 6 09:55:54.790537 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 6 09:55:54.790545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 09:55:54.790554 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 6 09:55:54.790564 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 6 09:55:54.790572 systemd[1]: Reached target sockets.target - Socket Units.
Sep 6 09:55:54.790581 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 6 09:55:54.790592 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 6 09:55:54.790603 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 6 09:55:54.790615 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 6 09:55:54.790624 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 09:55:54.790632 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 6 09:55:54.790640 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 6 09:55:54.790649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 09:55:54.790657 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 6 09:55:54.790668 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 09:55:54.790677 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 09:55:54.790711 systemd-journald[221]: Collecting audit messages is disabled.
Sep 6 09:55:54.790732 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 6 09:55:54.790741 systemd-journald[221]: Journal started
Sep 6 09:55:54.790759 systemd-journald[221]: Runtime Journal (/run/log/journal/6c54d2f29c574b7d8593641af77c5ef0) is 6M, max 48.6M, 42.5M free.
Sep 6 09:55:54.784561 systemd-modules-load[222]: Inserted module 'overlay'
Sep 6 09:55:54.823936 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 6 09:55:54.823968 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 09:55:54.823985 kernel: Bridge firewalling registered
Sep 6 09:55:54.810554 systemd-modules-load[222]: Inserted module 'br_netfilter'
Sep 6 09:55:54.822532 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 6 09:55:54.825144 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 09:55:54.827077 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 6 09:55:54.831665 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 6 09:55:54.833519 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 6 09:55:54.845466 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 6 09:55:54.846203 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 6 09:55:54.855838 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 6 09:55:54.858632 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 6 09:55:54.861401 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 6 09:55:54.861663 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 6 09:55:54.865163 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 6 09:55:54.869634 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 09:55:54.884436 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 6 09:55:54.901122 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c69e1af3de5da13c6be143e3ab518bb528f3fa67a9a488f7fa132987ebb99256
Sep 6 09:55:54.917225 systemd-resolved[260]: Positive Trust Anchors:
Sep 6 09:55:54.917240 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 09:55:54.917268 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 6 09:55:54.919662 systemd-resolved[260]: Defaulting to hostname 'linux'.
Sep 6 09:55:54.920684 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 6 09:55:54.926372 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 6 09:55:55.009860 kernel: SCSI subsystem initialized
Sep 6 09:55:55.018848 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 09:55:55.029856 kernel: iscsi: registered transport (tcp)
Sep 6 09:55:55.049848 kernel: iscsi: registered transport (qla4xxx)
Sep 6 09:55:55.049879 kernel: QLogic iSCSI HBA Driver
Sep 6 09:55:55.069741 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 6 09:55:55.090295 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 6 09:55:55.091453 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 6 09:55:55.146546 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 6 09:55:55.149094 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 6 09:55:55.210863 kernel: raid6: avx2x4 gen() 30454 MB/s
Sep 6 09:55:55.227856 kernel: raid6: avx2x2 gen() 31047 MB/s
Sep 6 09:55:55.244877 kernel: raid6: avx2x1 gen() 25892 MB/s
Sep 6 09:55:55.244906 kernel: raid6: using algorithm avx2x2 gen() 31047 MB/s
Sep 6 09:55:55.262900 kernel: raid6: .... xor() 19974 MB/s, rmw enabled
Sep 6 09:55:55.262971 kernel: raid6: using avx2x2 recovery algorithm
Sep 6 09:55:55.283851 kernel: xor: automatically using best checksumming function avx
Sep 6 09:55:55.448860 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 6 09:55:55.456955 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 6 09:55:55.460024 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 6 09:55:55.502811 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 6 09:55:55.508376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 6 09:55:55.509436 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 6 09:55:55.530734 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Sep 6 09:55:55.559257 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 6 09:55:55.561765 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 6 09:55:55.640064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 09:55:55.641787 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 6 09:55:55.686841 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 6 09:55:55.688840 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 09:55:55.695870 kernel: AES CTR mode by8 optimization enabled
Sep 6 09:55:55.697843 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 6 09:55:55.702270 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 6 09:55:55.702300 kernel: GPT:9289727 != 19775487
Sep 6 09:55:55.702311 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 6 09:55:55.702323 kernel: GPT:9289727 != 19775487
Sep 6 09:55:55.703214 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 6 09:55:55.703231 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 09:55:55.709981 kernel: libata version 3.00 loaded.
Sep 6 09:55:55.725292 kernel: ahci 0000:00:1f.2: version 3.0
Sep 6 09:55:55.725492 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 6 09:55:55.730652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 09:55:55.730935 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 09:55:55.735186 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 09:55:55.739250 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 6 09:55:55.739891 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 6 09:55:55.740040 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 6 09:55:55.744713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 09:55:55.748388 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 6 09:55:55.751848 kernel: scsi host0: ahci
Sep 6 09:55:55.753850 kernel: scsi host1: ahci
Sep 6 09:55:55.754836 kernel: scsi host2: ahci
Sep 6 09:55:55.759842 kernel: scsi host3: ahci
Sep 6 09:55:55.760844 kernel: scsi host4: ahci
Sep 6 09:55:55.764033 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 6 09:55:55.765548 kernel: scsi host5: ahci
Sep 6 09:55:55.765731 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 6 09:55:55.765743 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 6 09:55:55.767304 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 6 09:55:55.769199 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 6 09:55:55.769214 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 6 09:55:55.770235 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 6 09:55:55.794146 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 6 09:55:55.814705 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 6 09:55:55.815007 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 09:55:55.825329 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 6 09:55:55.825424 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 6 09:55:55.830468 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 6 09:55:55.861193 disk-uuid[634]: Primary Header is updated.
Sep 6 09:55:55.861193 disk-uuid[634]: Secondary Entries is updated.
Sep 6 09:55:55.861193 disk-uuid[634]: Secondary Header is updated.
Sep 6 09:55:55.864308 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 09:55:56.083028 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 6 09:55:56.083094 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 6 09:55:56.083113 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 6 09:55:56.083854 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 6 09:55:56.084846 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 6 09:55:56.085855 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 6 09:55:56.087249 kernel: ata3.00: LPM support broken, forcing max_power
Sep 6 09:55:56.087262 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 6 09:55:56.087272 kernel: ata3.00: applying bridge limits
Sep 6 09:55:56.088383 kernel: ata3.00: LPM support broken, forcing max_power
Sep 6 09:55:56.088404 kernel: ata3.00: configured for UDMA/100
Sep 6 09:55:56.089855 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 6 09:55:56.142853 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 6 09:55:56.143065 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 6 09:55:56.159855 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 6 09:55:56.555138 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 6 09:55:56.556854 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 6 09:55:56.558418 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 09:55:56.559569 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 6 09:55:56.560492 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 6 09:55:56.597580 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 6 09:55:56.872852 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 09:55:56.872920 disk-uuid[635]: The operation has completed successfully.
Sep 6 09:55:56.903031 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 09:55:56.903152 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 6 09:55:56.933941 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 6 09:55:56.958945 sh[663]: Success
Sep 6 09:55:56.976156 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 09:55:56.976191 kernel: device-mapper: uevent: version 1.0.3
Sep 6 09:55:56.977209 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 6 09:55:56.985885 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 6 09:55:57.013958 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 6 09:55:57.016925 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 6 09:55:57.038245 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 6 09:55:57.043850 kernel: BTRFS: device fsid d01fb51c-249d-484b-98c9-d7ac47264f4b devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (675)
Sep 6 09:55:57.045849 kernel: BTRFS info (device dm-0): first mount of filesystem d01fb51c-249d-484b-98c9-d7ac47264f4b
Sep 6 09:55:57.045870 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 6 09:55:57.050854 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 6 09:55:57.050875 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 6 09:55:57.051940 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 6 09:55:57.052478 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 6 09:55:57.053734 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 6 09:55:57.054578 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 6 09:55:57.057923 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 6 09:55:57.077837 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707)
Sep 6 09:55:57.079841 kernel: BTRFS info (device vda6): first mount of filesystem 5df41a5c-7aec-412a-8efb-42c0335a0fb4
Sep 6 09:55:57.079867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 09:55:57.082883 kernel: BTRFS info (device vda6): turning on async discard
Sep 6 09:55:57.082926 kernel: BTRFS info (device vda6): enabling free space tree
Sep 6 09:55:57.087845 kernel: BTRFS info (device vda6): last unmount of filesystem 5df41a5c-7aec-412a-8efb-42c0335a0fb4
Sep 6 09:55:57.088532 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 6 09:55:57.091918 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 6 09:55:57.173142 ignition[751]: Ignition 2.22.0
Sep 6 09:55:57.173155 ignition[751]: Stage: fetch-offline
Sep 6 09:55:57.173191 ignition[751]: no configs at "/usr/lib/ignition/base.d"
Sep 6 09:55:57.173200 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:55:57.173281 ignition[751]: parsed url from cmdline: ""
Sep 6 09:55:57.173285 ignition[751]: no config URL provided
Sep 6 09:55:57.173291 ignition[751]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 09:55:57.173300 ignition[751]: no config at "/usr/lib/ignition/user.ign"
Sep 6 09:55:57.173324 ignition[751]: op(1): [started] loading QEMU firmware config module
Sep 6 09:55:57.173329 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 6 09:55:57.180587 ignition[751]: op(1): [finished] loading QEMU firmware config module
Sep 6 09:55:57.201954 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 6 09:55:57.206724 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 6 09:55:57.222515 ignition[751]: parsing config with SHA512: 43a2029c4cd137f89922aadcbd63c055e1dbbb11b62c65372f6877fd17026dc449cd9796011b8d85a1286418bca27232eb9f36a94ec0dd24dc5f3a7d90e46b31
Sep 6 09:55:57.227728 unknown[751]: fetched base config from "system"
Sep 6 09:55:57.227741 unknown[751]: fetched user config from "qemu"
Sep 6 09:55:57.228083 ignition[751]: fetch-offline: fetch-offline passed
Sep 6 09:55:57.228133 ignition[751]: Ignition finished successfully
Sep 6 09:55:57.233616 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 6 09:55:57.258530 systemd-networkd[851]: lo: Link UP
Sep 6 09:55:57.258540 systemd-networkd[851]: lo: Gained carrier
Sep 6 09:55:57.260044 systemd-networkd[851]: Enumeration completed
Sep 6 09:55:57.260391 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 09:55:57.260395 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 09:55:57.261251 systemd-networkd[851]: eth0: Link UP
Sep 6 09:55:57.261258 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 6 09:55:57.261417 systemd-networkd[851]: eth0: Gained carrier
Sep 6 09:55:57.261426 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 09:55:57.269874 systemd[1]: Reached target network.target - Network.
Sep 6 09:55:57.271583 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 6 09:55:57.274490 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 6 09:55:57.279875 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 6 09:55:57.306924 ignition[855]: Ignition 2.22.0
Sep 6 09:55:57.306935 ignition[855]: Stage: kargs
Sep 6 09:55:57.307065 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Sep 6 09:55:57.307076 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:55:57.307739 ignition[855]: kargs: kargs passed
Sep 6 09:55:57.307781 ignition[855]: Ignition finished successfully
Sep 6 09:55:57.312679 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 6 09:55:57.315929 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 6 09:55:57.351411 ignition[864]: Ignition 2.22.0
Sep 6 09:55:57.351422 ignition[864]: Stage: disks
Sep 6 09:55:57.351570 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Sep 6 09:55:57.351582 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:55:57.352268 ignition[864]: disks: disks passed
Sep 6 09:55:57.355076 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 6 09:55:57.352307 ignition[864]: Ignition finished successfully
Sep 6 09:55:57.356861 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 6 09:55:57.358750 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 6 09:55:57.360673 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 6 09:55:57.361668 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 6 09:55:57.363795 systemd[1]: Reached target basic.target - Basic System.
Sep 6 09:55:57.368440 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 6 09:55:57.401560 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 6 09:55:57.409396 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 6 09:55:57.412329 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 6 09:55:57.507878 kernel: EXT4-fs (vda9): mounted filesystem 9a4cce02-a1df-4d9f-a25f-08e044692442 r/w with ordered data mode. Quota mode: none.
Sep 6 09:55:57.508860 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 6 09:55:57.511019 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 6 09:55:57.514300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 6 09:55:57.516715 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 6 09:55:57.518612 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 6 09:55:57.518657 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 09:55:57.518679 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 6 09:55:57.530122 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 6 09:55:57.532972 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 6 09:55:57.537389 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (882)
Sep 6 09:55:57.537445 kernel: BTRFS info (device vda6): first mount of filesystem 5df41a5c-7aec-412a-8efb-42c0335a0fb4
Sep 6 09:55:57.537456 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 09:55:57.540434 kernel: BTRFS info (device vda6): turning on async discard
Sep 6 09:55:57.540464 kernel: BTRFS info (device vda6): enabling free space tree
Sep 6 09:55:57.542073 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 6 09:55:57.569570 initrd-setup-root[906]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 09:55:57.574620 initrd-setup-root[913]: cut: /sysroot/etc/group: No such file or directory
Sep 6 09:55:57.579471 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 09:55:57.584257 initrd-setup-root[927]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 09:55:57.675761 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 6 09:55:57.678380 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 6 09:55:57.680140 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 6 09:55:57.706856 kernel: BTRFS info (device vda6): last unmount of filesystem 5df41a5c-7aec-412a-8efb-42c0335a0fb4
Sep 6 09:55:57.719031 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 6 09:55:57.740515 ignition[996]: INFO : Ignition 2.22.0
Sep 6 09:55:57.740515 ignition[996]: INFO : Stage: mount
Sep 6 09:55:57.742168 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 09:55:57.742168 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:55:57.742168 ignition[996]: INFO : mount: mount passed
Sep 6 09:55:57.742168 ignition[996]: INFO : Ignition finished successfully
Sep 6 09:55:57.748782 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 6 09:55:57.751067 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 6 09:55:58.043945 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 6 09:55:58.045516 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 6 09:55:58.076361 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1008)
Sep 6 09:55:58.076387 kernel: BTRFS info (device vda6): first mount of filesystem 5df41a5c-7aec-412a-8efb-42c0335a0fb4
Sep 6 09:55:58.076398 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 09:55:58.080086 kernel: BTRFS info (device vda6): turning on async discard
Sep 6 09:55:58.080139 kernel: BTRFS info (device vda6): enabling free space tree
Sep 6 09:55:58.081619 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 6 09:55:58.118532 ignition[1025]: INFO : Ignition 2.22.0
Sep 6 09:55:58.118532 ignition[1025]: INFO : Stage: files
Sep 6 09:55:58.120261 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 09:55:58.120261 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:55:58.122703 ignition[1025]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 09:55:58.124130 ignition[1025]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 09:55:58.124130 ignition[1025]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 09:55:58.127398 ignition[1025]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 09:55:58.128872 ignition[1025]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 09:55:58.130531 unknown[1025]: wrote ssh authorized keys file for user: core
Sep 6 09:55:58.131576 ignition[1025]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 09:55:58.133975 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 6 09:55:58.135775 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 6 09:55:58.179964 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 6 09:55:58.460445 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 6 09:55:58.460445 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 09:55:58.464344 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 09:55:58.464344 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 09:55:58.464344 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 09:55:58.464344 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 09:55:58.464344 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 09:55:58.464344 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 09:55:58.464344 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 09:55:58.476473 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 09:55:58.476473 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 09:55:58.476473 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 6 09:55:58.476473 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 6 09:55:58.476473 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 6 09:55:58.476473 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 6 09:55:58.900521 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 6 09:55:59.331591 systemd-networkd[851]: eth0: Gained IPv6LL
Sep 6 09:56:00.710749 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 6 09:56:00.710749 ignition[1025]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 6 09:56:00.714923 ignition[1025]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 09:56:00.716983 ignition[1025]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 09:56:00.716983 ignition[1025]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 6 09:56:00.716983 ignition[1025]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 6 09:56:00.721723 ignition[1025]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 09:56:00.721723 ignition[1025]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 09:56:00.721723 ignition[1025]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 6 09:56:00.721723 ignition[1025]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 6 09:56:00.739439 ignition[1025]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 09:56:00.743493 ignition[1025]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 09:56:00.745261 ignition[1025]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 6 09:56:00.745261 ignition[1025]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 09:56:00.745261 ignition[1025]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 09:56:00.745261 ignition[1025]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 09:56:00.745261 ignition[1025]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 09:56:00.745261 ignition[1025]: INFO : files: files passed
Sep 6 09:56:00.745261 ignition[1025]: INFO : Ignition finished successfully
Sep 6 09:56:00.756243 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 6 09:56:00.758670 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 6 09:56:00.760949 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 6 09:56:00.819062 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 09:56:00.819200 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 6 09:56:00.822075 initrd-setup-root-after-ignition[1054]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 6 09:56:00.825696 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 09:56:00.825696 initrd-setup-root-after-ignition[1056]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 09:56:00.829210 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 09:56:00.832365 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 6 09:56:00.832630 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 6 09:56:00.837709 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 6 09:56:00.909237 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 09:56:00.909384 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 6 09:56:00.909782 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 6 09:56:00.912555 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 6 09:56:00.914460 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 6 09:56:00.916343 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 6 09:56:00.962107 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 6 09:56:00.966439 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 6 09:56:01.001474 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 6 09:56:01.003753 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 09:56:01.003974 systemd[1]: Stopped target timers.target - Timer Units.
Sep 6 09:56:01.004271 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 09:56:01.004440 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 6 09:56:01.010597 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 6 09:56:01.010772 systemd[1]: Stopped target basic.target - Basic System.
Sep 6 09:56:01.012639 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 6 09:56:01.013109 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 6 09:56:01.013440 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 6 09:56:01.013771 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 6 09:56:01.014252 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 6 09:56:01.014579 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 6 09:56:01.014933 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 6 09:56:01.015410 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 6 09:56:01.015722 systemd[1]: Stopped target swap.target - Swaps.
Sep 6 09:56:01.016168 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 09:56:01.016327 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 6 09:56:01.033331 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 6 09:56:01.033513 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 09:56:01.035423 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 6 09:56:01.037497 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 09:56:01.040767 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 09:56:01.040936 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 6 09:56:01.043893 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 09:56:01.044050 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 6 09:56:01.047267 systemd[1]: Stopped target paths.target - Path Units.
Sep 6 09:56:01.048230 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 09:56:01.052898 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 09:56:01.053078 systemd[1]: Stopped target slices.target - Slice Units.
Sep 6 09:56:01.055589 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 6 09:56:01.057224 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 09:56:01.057342 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 6 09:56:01.058911 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 09:56:01.059010 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 6 09:56:01.061509 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 09:56:01.061648 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 6 09:56:01.062422 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 09:56:01.062541 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 6 09:56:01.065333 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 6 09:56:01.065994 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 09:56:01.066129 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 09:56:01.069957 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 6 09:56:01.071751 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 09:56:01.072811 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 09:56:01.073812 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 09:56:01.073958 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 6 09:56:01.078409 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 09:56:01.090035 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 6 09:56:01.115018 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 09:56:01.128600 ignition[1080]: INFO : Ignition 2.22.0
Sep 6 09:56:01.128600 ignition[1080]: INFO : Stage: umount
Sep 6 09:56:01.130484 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 09:56:01.130484 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 09:56:01.130484 ignition[1080]: INFO : umount: umount passed
Sep 6 09:56:01.130484 ignition[1080]: INFO : Ignition finished successfully
Sep 6 09:56:01.135757 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 09:56:01.135907 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 6 09:56:01.137007 systemd[1]: Stopped target network.target - Network.
Sep 6 09:56:01.138587 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 09:56:01.138645 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 6 09:56:01.141103 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 09:56:01.141149 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 6 09:56:01.142941 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 09:56:01.142992 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 6 09:56:01.144769 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 6 09:56:01.144816 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 6 09:56:01.145796 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 6 09:56:01.147600 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 6 09:56:01.162156 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 09:56:01.162352 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 6 09:56:01.166363 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 6 09:56:01.166636 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 09:56:01.166775 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 6 09:56:01.170451 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 6 09:56:01.171038 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 6 09:56:01.172041 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 09:56:01.172081 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 09:56:01.176993 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 6 09:56:01.178790 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 09:56:01.179814 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 6 09:56:01.186228 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 09:56:01.186295 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 6 09:56:01.189196 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 09:56:01.189257 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 6 09:56:01.191179 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 6 09:56:01.191223 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 6 09:56:01.194456 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 6 09:56:01.198394 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 6 09:56:01.198473 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 6 09:56:01.213059 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 09:56:01.213191 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 6 09:56:01.216401 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 09:56:01.216568 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 6 09:56:01.219807 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 6 09:56:01.219901 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 6 09:56:01.220929 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 6 09:56:01.220967 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 6 09:56:01.224525 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 6 09:56:01.224588 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 6 09:56:01.226116 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 6 09:56:01.226177 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 6 09:56:01.226808 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 09:56:01.226891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 09:56:01.228463 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 6 09:56:01.234874 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 6 09:56:01.234980 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 6 09:56:01.241005 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 6 09:56:01.241090 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 6 09:56:01.247027 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 09:56:01.247102 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 09:56:01.251693 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 6 09:56:01.251774 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 6 09:56:01.251859 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 6 09:56:01.252343 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 09:56:01.252481 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 6 09:56:01.255638 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 6 09:56:01.255776 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 6 09:56:01.258353 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 6 09:56:01.258542 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 09:56:01.258636 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 6 09:56:01.260042 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 6 09:56:01.281016 systemd[1]: Switching root.
Sep 6 09:56:01.331085 systemd-journald[221]: Journal stopped
Sep 6 09:56:02.412871 systemd-journald[221]: Received SIGTERM from PID 1 (systemd).
Sep 6 09:56:02.412957 kernel: SELinux: policy capability network_peer_controls=1
Sep 6 09:56:02.412979 kernel: SELinux: policy capability open_perms=1
Sep 6 09:56:02.412990 kernel: SELinux: policy capability extended_socket_class=1
Sep 6 09:56:02.413001 kernel: SELinux: policy capability always_check_network=0
Sep 6 09:56:02.413012 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 6 09:56:02.413024 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 6 09:56:02.413037 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 6 09:56:02.413048 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 6 09:56:02.413059 kernel: SELinux: policy capability userspace_initial_context=0
Sep 6 09:56:02.413078 kernel: audit: type=1403 audit(1757152561.647:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 6 09:56:02.413091 systemd[1]: Successfully loaded SELinux policy in 70.626ms.
Sep 6 09:56:02.413116 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.515ms.
Sep 6 09:56:02.413129 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 6 09:56:02.413141 systemd[1]: Detected virtualization kvm.
Sep 6 09:56:02.413157 systemd[1]: Detected architecture x86-64.
Sep 6 09:56:02.413169 systemd[1]: Detected first boot.
Sep 6 09:56:02.413188 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 09:56:02.413200 zram_generator::config[1124]: No configuration found.
Sep 6 09:56:02.413213 kernel: Guest personality initialized and is inactive
Sep 6 09:56:02.413224 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 6 09:56:02.413235 kernel: Initialized host personality
Sep 6 09:56:02.413263 kernel: NET: Registered PF_VSOCK protocol family
Sep 6 09:56:02.413274 systemd[1]: Populated /etc with preset unit settings.
Sep 6 09:56:02.413292 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 6 09:56:02.413304 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 6 09:56:02.413316 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 6 09:56:02.413328 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 6 09:56:02.413341 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 6 09:56:02.413355 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 6 09:56:02.413370 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 6 09:56:02.413392 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 6 09:56:02.413407 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 6 09:56:02.413425 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 6 09:56:02.413440 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 6 09:56:02.413465 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 6 09:56:02.413488 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 09:56:02.413503 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 09:56:02.413517 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 6 09:56:02.413532 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 6 09:56:02.413555 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 6 09:56:02.413579 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 6 09:56:02.413594 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 6 09:56:02.413629 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 09:56:02.413652 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 6 09:56:02.413670 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 6 09:56:02.413693 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 6 09:56:02.413716 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 6 09:56:02.413736 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 6 09:56:02.413754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 09:56:02.413768 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 6 09:56:02.413780 systemd[1]: Reached target slices.target - Slice Units.
Sep 6 09:56:02.413792 systemd[1]: Reached target swap.target - Swaps.
Sep 6 09:56:02.413804 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 6 09:56:02.413841 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 6 09:56:02.413866 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 6 09:56:02.413885 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 09:56:02.413900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 6 09:56:02.413915 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 6 09:56:02.413927 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 6 09:56:02.413942 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 6 09:56:02.413954 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 6 09:56:02.413968 systemd[1]: Mounting media.mount - External Media Directory...
Sep 6 09:56:02.413982 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 09:56:02.414002 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 6 09:56:02.414014 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 6 09:56:02.414026 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 6 09:56:02.414041 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 6 09:56:02.414056 systemd[1]: Reached target machines.target - Containers.
Sep 6 09:56:02.414068 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 6 09:56:02.414080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 6 09:56:02.414092 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 6 09:56:02.414107 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 6 09:56:02.414122 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 6 09:56:02.414138 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 6 09:56:02.414154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 6 09:56:02.414166 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 6 09:56:02.414178 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 6 09:56:02.414190 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 09:56:02.414201 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 6 09:56:02.414213 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 6 09:56:02.414225 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 6 09:56:02.414245 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 6 09:56:02.414258 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 6 09:56:02.414273 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 6 09:56:02.414294 kernel: loop: module loaded
Sep 6 09:56:02.414305 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 6 09:56:02.414318 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 6 09:56:02.414329 kernel: fuse: init (API version 7.41)
Sep 6 09:56:02.414341 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 6 09:56:02.414353 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 6 09:56:02.414364 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 6 09:56:02.414381 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 6 09:56:02.414393 systemd[1]: Stopped verity-setup.service.
Sep 6 09:56:02.414405 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 09:56:02.414417 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 6 09:56:02.414433 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 6 09:56:02.414447 systemd[1]: Mounted media.mount - External Media Directory.
Sep 6 09:56:02.414459 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 6 09:56:02.414495 systemd-journald[1195]: Collecting audit messages is disabled.
Sep 6 09:56:02.414527 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 6 09:56:02.414542 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 6 09:56:02.414554 systemd-journald[1195]: Journal started
Sep 6 09:56:02.414576 systemd-journald[1195]: Runtime Journal (/run/log/journal/6c54d2f29c574b7d8593641af77c5ef0) is 6M, max 48.6M, 42.5M free.
Sep 6 09:56:02.178542 systemd[1]: Queued start job for default target multi-user.target.
Sep 6 09:56:02.191790 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 6 09:56:02.192292 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 6 09:56:02.416855 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 6 09:56:02.416878 kernel: ACPI: bus type drm_connector registered
Sep 6 09:56:02.419079 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 6 09:56:02.420711 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 09:56:02.422263 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 6 09:56:02.422546 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 6 09:56:02.424054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 09:56:02.424291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 6 09:56:02.425696 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 09:56:02.426144 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 6 09:56:02.427524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 09:56:02.427736 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 6 09:56:02.429190 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 6 09:56:02.429412 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 6 09:56:02.430727 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 09:56:02.430956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 6 09:56:02.432381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 6 09:56:02.433794 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 6 09:56:02.435442 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 6 09:56:02.436997 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 6 09:56:02.451355 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 6 09:56:02.453868 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 6 09:56:02.456991 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 6 09:56:02.458227 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 09:56:02.458324 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 6 09:56:02.460459 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 6 09:56:02.466462 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 6 09:56:02.469155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 6 09:56:02.470455 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 6 09:56:02.472534 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 6 09:56:02.473763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 09:56:02.476003 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 6 09:56:02.477216 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 6 09:56:02.480091 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 6 09:56:02.484044 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 6 09:56:02.491018 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 6 09:56:02.494158 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 6 09:56:02.495862 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 6 09:56:02.504187 systemd-journald[1195]: Time spent on flushing to /var/log/journal/6c54d2f29c574b7d8593641af77c5ef0 is 18.557ms for 983 entries.
Sep 6 09:56:02.504187 systemd-journald[1195]: System Journal (/var/log/journal/6c54d2f29c574b7d8593641af77c5ef0) is 8M, max 195.6M, 187.6M free.
Sep 6 09:56:02.537776 systemd-journald[1195]: Received client request to flush runtime journal.
Sep 6 09:56:02.537845 kernel: loop0: detected capacity change from 0 to 110984
Sep 6 09:56:02.504557 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 09:56:02.511165 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 6 09:56:02.512586 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 6 09:56:02.516422 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 6 09:56:02.534322 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 6 09:56:02.539377 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 6 09:56:02.542913 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 09:56:02.553641 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 6 09:56:02.557357 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 6 09:56:02.559014 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 6 09:56:02.564842 kernel: loop1: detected capacity change from 0 to 224512
Sep 6 09:56:02.584718 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Sep 6 09:56:02.584745 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Sep 6 09:56:02.590433 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 6 09:56:02.594886 kernel: loop2: detected capacity change from 0 to 128016
Sep 6 09:56:02.627866 kernel: loop3: detected capacity change from 0 to 110984
Sep 6 09:56:02.636871 kernel: loop4: detected capacity change from 0 to 224512
Sep 6 09:56:02.646846 kernel: loop5: detected capacity change from 0 to 128016
Sep 6 09:56:02.655060 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 6 09:56:02.655637 (sd-merge)[1267]: Merged extensions into '/usr'.
Sep 6 09:56:02.660780 systemd[1]: Reload requested from client PID 1243 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 6 09:56:02.660970 systemd[1]: Reloading...
Sep 6 09:56:02.863940 zram_generator::config[1296]: No configuration found.
Sep 6 09:56:03.014503 ldconfig[1238]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 09:56:03.058183 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 09:56:03.058458 systemd[1]: Reloading finished in 396 ms.
Sep 6 09:56:03.081853 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 6 09:56:03.083505 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 6 09:56:03.108647 systemd[1]: Starting ensure-sysext.service...
Sep 6 09:56:03.110872 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 6 09:56:03.127470 systemd[1]: Reload requested from client PID 1330 ('systemctl') (unit ensure-sysext.service)...
Sep 6 09:56:03.127487 systemd[1]: Reloading...
Sep 6 09:56:03.134939 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 6 09:56:03.134978 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 6 09:56:03.135284 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 09:56:03.135568 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 6 09:56:03.136478 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 09:56:03.136749 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Sep 6 09:56:03.137015 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Sep 6 09:56:03.141450 systemd-tmpfiles[1331]: Detected autofs mount point /boot during canonicalization of boot.
Sep 6 09:56:03.141462 systemd-tmpfiles[1331]: Skipping /boot
Sep 6 09:56:03.151773 systemd-tmpfiles[1331]: Detected autofs mount point /boot during canonicalization of boot.
Sep 6 09:56:03.151903 systemd-tmpfiles[1331]: Skipping /boot
Sep 6 09:56:03.190883 zram_generator::config[1358]: No configuration found.
Sep 6 09:56:03.364390 systemd[1]: Reloading finished in 236 ms.
Sep 6 09:56:03.386322 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 6 09:56:03.408851 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 6 09:56:03.417612 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 6 09:56:03.420506 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 6 09:56:03.431840 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 6 09:56:03.435895 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 6 09:56:03.440088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 6 09:56:03.442573 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 6 09:56:03.446438 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 09:56:03.446605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 09:56:03.447992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 09:56:03.456705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 09:56:03.459165 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 09:56:03.460426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 09:56:03.460617 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 6 09:56:03.463038 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Sep 6 09:56:03.464055 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 09:56:03.465429 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 6 09:56:03.467122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 09:56:03.467337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 09:56:03.470711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 09:56:03.477367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 09:56:03.481300 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 09:56:03.482585 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 09:56:03.492885 augenrules[1430]: No rules Sep 6 09:56:03.494019 systemd[1]: audit-rules.service: Deactivated successfully. Sep 6 09:56:03.494291 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 6 09:56:03.495107 systemd-udevd[1402]: Using default interface naming scheme 'v255'. Sep 6 09:56:03.499082 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 6 09:56:03.508058 systemd[1]: Finished ensure-sysext.service. Sep 6 09:56:03.511170 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 6 09:56:03.513679 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 09:56:03.515021 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 6 09:56:03.516094 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 09:56:03.517016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 6 09:56:03.521367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 6 09:56:03.528999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 09:56:03.537034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 09:56:03.538172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 09:56:03.538225 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 6 09:56:03.540100 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 6 09:56:03.544993 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 6 09:56:03.546036 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 09:56:03.546061 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 09:56:03.548082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 6 09:56:03.551055 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 6 09:56:03.552689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 09:56:03.552933 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 09:56:03.560661 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 6 09:56:03.562000 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 6 09:56:03.572294 augenrules[1439]: /sbin/augenrules: No change Sep 6 09:56:03.579932 augenrules[1498]: No rules Sep 6 09:56:03.581612 systemd[1]: audit-rules.service: Deactivated successfully. Sep 6 09:56:03.581936 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 6 09:56:03.585259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 09:56:03.585482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 09:56:03.587228 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 09:56:03.587444 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 6 09:56:03.600183 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 09:56:03.600465 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 09:56:03.603948 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 6 09:56:03.606674 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 6 09:56:03.644830 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 6 09:56:03.713063 systemd-resolved[1400]: Positive Trust Anchors: Sep 6 09:56:03.713092 systemd-resolved[1400]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 09:56:03.713154 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 6 09:56:03.723582 systemd-resolved[1400]: Defaulting to hostname 'linux'. Sep 6 09:56:03.726779 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 6 09:56:03.728412 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 6 09:56:03.742775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 6 09:56:03.781566 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 6 09:56:03.788922 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 6 09:56:03.790859 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 09:56:03.793856 kernel: ACPI: button: Power Button [PWRF] Sep 6 09:56:03.806921 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 6 09:56:03.845806 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 6 09:56:03.846139 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 6 09:56:03.855810 systemd-networkd[1482]: lo: Link UP Sep 6 09:56:03.855839 systemd-networkd[1482]: lo: Gained carrier Sep 6 09:56:03.857371 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Sep 6 09:56:03.857515 systemd-networkd[1482]: Enumeration completed Sep 6 09:56:03.857933 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 09:56:03.857943 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 09:56:03.858949 systemd-networkd[1482]: eth0: Link UP Sep 6 09:56:03.859129 systemd-networkd[1482]: eth0: Gained carrier Sep 6 09:56:03.859143 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 09:56:03.859917 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 6 09:56:03.861094 systemd[1]: Reached target network.target - Network. Sep 6 09:56:03.862084 systemd[1]: Reached target sysinit.target - System Initialization. Sep 6 09:56:03.863428 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 6 09:56:03.864946 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 6 09:56:03.866192 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 6 09:56:03.867898 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 6 09:56:03.869199 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 09:56:03.869242 systemd[1]: Reached target paths.target - Path Units. Sep 6 09:56:03.870128 systemd[1]: Reached target time-set.target - System Time Set. Sep 6 09:56:03.870874 systemd-networkd[1482]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 09:56:03.871319 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Sep 6 09:56:03.871465 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection. Sep 6 09:56:03.872654 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 6 09:56:03.874071 systemd-timesyncd[1473]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 6 09:56:03.874176 systemd[1]: Reached target timers.target - Timer Units. Sep 6 09:56:03.874259 systemd-timesyncd[1473]: Initial clock synchronization to Sat 2025-09-06 09:56:04.273581 UTC. Sep 6 09:56:03.876750 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 6 09:56:03.879606 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 6 09:56:03.884067 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 6 09:56:03.886142 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 6 09:56:03.887389 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 6 09:56:03.901778 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 6 09:56:03.903309 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 6 09:56:03.906394 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 6 09:56:03.910005 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 6 09:56:03.912480 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 6 09:56:03.914310 systemd[1]: Reached target sockets.target - Socket Units. Sep 6 09:56:03.915290 systemd[1]: Reached target basic.target - Basic System. Sep 6 09:56:03.916310 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 6 09:56:03.916341 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Sep 6 09:56:03.918026 systemd[1]: Starting containerd.service - containerd container runtime... Sep 6 09:56:03.919475 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 6 09:56:03.924902 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 6 09:56:03.927390 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 6 09:56:03.932975 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 6 09:56:03.933993 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 6 09:56:03.943068 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 6 09:56:03.945277 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 6 09:56:03.948095 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 6 09:56:03.949927 jq[1542]: false Sep 6 09:56:03.950110 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 6 09:56:03.952256 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 6 09:56:03.959223 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 6 09:56:03.961227 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 09:56:03.961803 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 09:56:03.963035 systemd[1]: Starting update-engine.service - Update Engine... Sep 6 09:56:03.966960 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 6 09:56:03.969900 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing passwd entry cache Sep 6 09:56:03.969357 oslogin_cache_refresh[1544]: Refreshing passwd entry cache Sep 6 09:56:03.975861 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 6 09:56:03.981068 oslogin_cache_refresh[1544]: Failure getting users, quitting Sep 6 09:56:03.982148 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting users, quitting Sep 6 09:56:03.982148 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 6 09:56:03.982216 extend-filesystems[1543]: Found /dev/vda6 Sep 6 09:56:03.978445 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 09:56:03.981092 oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 6 09:56:03.978686 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 6 09:56:03.986353 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing group entry cache Sep 6 09:56:03.986306 oslogin_cache_refresh[1544]: Refreshing group entry cache Sep 6 09:56:03.981795 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 09:56:03.982085 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 6 09:56:03.994431 oslogin_cache_refresh[1544]: Failure getting groups, quitting Sep 6 09:56:03.997113 extend-filesystems[1543]: Found /dev/vda9 Sep 6 09:56:03.997969 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting groups, quitting Sep 6 09:56:03.997969 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Sep 6 09:56:03.994445 oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 6 09:56:04.004815 jq[1554]: true Sep 6 09:56:04.009815 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 6 09:56:04.010476 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 6 09:56:04.016302 extend-filesystems[1543]: Checking size of /dev/vda9 Sep 6 09:56:04.024621 (ntainerd)[1568]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 6 09:56:04.025482 update_engine[1552]: I20250906 09:56:04.025389 1552 main.cc:92] Flatcar Update Engine starting Sep 6 09:56:04.032520 dbus-daemon[1540]: [system] SELinux support is enabled Sep 6 09:56:04.037821 update_engine[1552]: I20250906 09:56:04.037659 1552 update_check_scheduler.cc:74] Next update check in 7m2s Sep 6 09:56:04.045101 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 6 09:56:04.062415 tar[1559]: linux-amd64/LICENSE Sep 6 09:56:04.062415 tar[1559]: linux-amd64/helm Sep 6 09:56:04.049900 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 09:56:04.049922 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 6 09:56:04.051230 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 09:56:04.051244 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 6 09:56:04.061366 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 09:56:04.069699 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 6 09:56:04.072157 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 6 09:56:04.083553 systemd[1]: Started update-engine.service - Update Engine. Sep 6 09:56:04.088199 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 6 09:56:04.101995 extend-filesystems[1543]: Resized partition /dev/vda9 Sep 6 09:56:04.108301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 6 09:56:04.115891 extend-filesystems[1587]: resize2fs 1.47.3 (8-Jul-2025) Sep 6 09:56:04.122122 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 6 09:56:04.122165 jq[1575]: true Sep 6 09:56:04.275989 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 09:56:04.282466 systemd-logind[1550]: Watching system buttons on /dev/input/event2 (Power Button) Sep 6 09:56:04.282490 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 09:56:04.293183 systemd-logind[1550]: New seat seat0. Sep 6 09:56:04.321581 systemd[1]: Started systemd-logind.service - User Login Management. Sep 6 09:56:04.325274 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 6 09:56:04.336686 locksmithd[1583]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 09:56:04.337373 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 6 09:56:04.342534 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 6 09:56:04.360817 extend-filesystems[1587]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 09:56:04.360817 extend-filesystems[1587]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 09:56:04.360817 extend-filesystems[1587]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 6 09:56:04.360603 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Sep 6 09:56:04.365390 extend-filesystems[1543]: Resized filesystem in /dev/vda9 Sep 6 09:56:04.366730 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 6 09:56:04.394793 kernel: kvm_amd: TSC scaling supported Sep 6 09:56:04.395713 kernel: kvm_amd: Nested Virtualization enabled Sep 6 09:56:04.395771 kernel: kvm_amd: Nested Paging enabled Sep 6 09:56:04.395824 kernel: kvm_amd: LBR virtualization supported Sep 6 09:56:04.395892 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 6 09:56:04.395938 kernel: kvm_amd: Virtual GIF supported Sep 6 09:56:04.599104 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 09:56:04.601423 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 09:56:04.602243 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 6 09:56:04.613899 bash[1616]: Updated "/home/core/.ssh/authorized_keys" Sep 6 09:56:04.617782 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 6 09:56:04.636271 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 6 09:56:04.637742 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 6 09:56:04.641889 kernel: EDAC MC: Ver: 3.0.0 Sep 6 09:56:04.665732 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 6 09:56:04.670188 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 6 09:56:04.674206 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 6 09:56:04.676208 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 6 09:56:04.759914 containerd[1568]: time="2025-09-06T09:56:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 6 09:56:04.759914 containerd[1568]: time="2025-09-06T09:56:04.756373703Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 6 09:56:04.773972 containerd[1568]: time="2025-09-06T09:56:04.773901071Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.293µs" Sep 6 09:56:04.773972 containerd[1568]: time="2025-09-06T09:56:04.773955497Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 6 09:56:04.774138 containerd[1568]: time="2025-09-06T09:56:04.773989621Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 6 09:56:04.774304 containerd[1568]: time="2025-09-06T09:56:04.774269063Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 6 09:56:04.774304 containerd[1568]: time="2025-09-06T09:56:04.774289482Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 6 09:56:04.774354 containerd[1568]: time="2025-09-06T09:56:04.774322533Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 6 09:56:04.774435 containerd[1568]: time="2025-09-06T09:56:04.774412083Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 6 09:56:04.774435 containerd[1568]: time="2025-09-06T09:56:04.774428176Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 6 09:56:04.774811 
containerd[1568]: time="2025-09-06T09:56:04.774773164Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 6 09:56:04.774811 containerd[1568]: time="2025-09-06T09:56:04.774789847Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 6 09:56:04.774811 containerd[1568]: time="2025-09-06T09:56:04.774800409Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 6 09:56:04.774811 containerd[1568]: time="2025-09-06T09:56:04.774808824Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 6 09:56:04.774967 containerd[1568]: time="2025-09-06T09:56:04.774939430Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 6 09:56:04.775248 containerd[1568]: time="2025-09-06T09:56:04.775224111Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 6 09:56:04.775279 containerd[1568]: time="2025-09-06T09:56:04.775262916Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 6 09:56:04.775279 containerd[1568]: time="2025-09-06T09:56:04.775272742Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 6 09:56:04.775363 containerd[1568]: time="2025-09-06T09:56:04.775342221Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 6 09:56:04.775588 containerd[1568]: 
time="2025-09-06T09:56:04.775568026Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 6 09:56:04.775665 containerd[1568]: time="2025-09-06T09:56:04.775648970Z" level=info msg="metadata content store policy set" policy=shared Sep 6 09:56:04.782258 containerd[1568]: time="2025-09-06T09:56:04.782210714Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 6 09:56:04.782308 containerd[1568]: time="2025-09-06T09:56:04.782265299Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 6 09:56:04.782308 containerd[1568]: time="2025-09-06T09:56:04.782284486Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 6 09:56:04.782308 containerd[1568]: time="2025-09-06T09:56:04.782297761Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 6 09:56:04.782375 containerd[1568]: time="2025-09-06T09:56:04.782310952Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 6 09:56:04.782375 containerd[1568]: time="2025-09-06T09:56:04.782335168Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 6 09:56:04.782375 containerd[1568]: time="2025-09-06T09:56:04.782370880Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 6 09:56:04.782443 containerd[1568]: time="2025-09-06T09:56:04.782396810Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 6 09:56:04.782443 containerd[1568]: time="2025-09-06T09:56:04.782409034Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 6 09:56:04.782443 containerd[1568]: time="2025-09-06T09:56:04.782419426Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 6 09:56:04.782443 containerd[1568]: time="2025-09-06T09:56:04.782429725Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 6 09:56:04.782513 containerd[1568]: time="2025-09-06T09:56:04.782452362Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 6 09:56:04.782624 containerd[1568]: time="2025-09-06T09:56:04.782591794Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 6 09:56:04.782624 containerd[1568]: time="2025-09-06T09:56:04.782617176Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 6 09:56:04.782665 containerd[1568]: time="2025-09-06T09:56:04.782632640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 6 09:56:04.782665 containerd[1568]: time="2025-09-06T09:56:04.782646136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 6 09:56:04.782665 containerd[1568]: time="2025-09-06T09:56:04.782657812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 6 09:56:04.782723 containerd[1568]: time="2025-09-06T09:56:04.782705243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 6 09:56:04.782748 containerd[1568]: time="2025-09-06T09:56:04.782735518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 6 09:56:04.782777 containerd[1568]: time="2025-09-06T09:56:04.782751433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 6 09:56:04.782777 containerd[1568]: time="2025-09-06T09:56:04.782763393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 6 
Sep 6 09:56:04.782777 containerd[1568]: time="2025-09-06T09:56:04.782774092Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 6 09:56:04.782848 containerd[1568]: time="2025-09-06T09:56:04.782788713Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 6 09:56:04.784046 containerd[1568]: time="2025-09-06T09:56:04.784006751Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 6 09:56:04.784046 containerd[1568]: time="2025-09-06T09:56:04.784043462Z" level=info msg="Start snapshots syncer"
Sep 6 09:56:04.784126 containerd[1568]: time="2025-09-06T09:56:04.784075893Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 6 09:56:04.784697 containerd[1568]: time="2025-09-06T09:56:04.784603515Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 6 09:56:04.784842 containerd[1568]: time="2025-09-06T09:56:04.784702290Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 6 09:56:04.784842 containerd[1568]: time="2025-09-06T09:56:04.784817927Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 6 09:56:04.785005 containerd[1568]: time="2025-09-06T09:56:04.784983436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 6 09:56:04.785032 containerd[1568]: time="2025-09-06T09:56:04.785018949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 6 09:56:04.785052 containerd[1568]: time="2025-09-06T09:56:04.785037821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 6 09:56:04.785093 containerd[1568]: time="2025-09-06T09:56:04.785052989Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 6 09:56:04.785146 containerd[1568]: time="2025-09-06T09:56:04.785071450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 6 09:56:04.785169 containerd[1568]: time="2025-09-06T09:56:04.785147630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 6 09:56:04.785205 containerd[1568]: time="2025-09-06T09:56:04.785169784Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 6 09:56:04.785225 containerd[1568]: time="2025-09-06T09:56:04.785203961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 6 09:56:04.785246 containerd[1568]: time="2025-09-06T09:56:04.785222833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 6 09:56:04.785246 containerd[1568]: time="2025-09-06T09:56:04.785241083Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 6 09:56:04.785385 containerd[1568]: time="2025-09-06T09:56:04.785324048Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 6 09:56:04.785406 containerd[1568]: time="2025-09-06T09:56:04.785384091Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 6 09:56:04.785406 containerd[1568]: time="2025-09-06T09:56:04.785400965Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 6 09:56:04.785453 containerd[1568]: time="2025-09-06T09:56:04.785417722Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 6 09:56:04.785453 containerd[1568]: time="2025-09-06T09:56:04.785429619Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 6 09:56:04.785453 containerd[1568]: time="2025-09-06T09:56:04.785446407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 6 09:56:04.785537 containerd[1568]: time="2025-09-06T09:56:04.785468371Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 6 09:56:04.785537 containerd[1568]: time="2025-09-06T09:56:04.785494879Z" level=info msg="runtime interface created"
Sep 6 09:56:04.785537 containerd[1568]: time="2025-09-06T09:56:04.785526069Z" level=info msg="created NRI interface"
Sep 6 09:56:04.785606 containerd[1568]: time="2025-09-06T09:56:04.785540290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 6 09:56:04.785606 containerd[1568]: time="2025-09-06T09:56:04.785557196Z" level=info msg="Connect containerd service"
Sep 6 09:56:04.785606 containerd[1568]: time="2025-09-06T09:56:04.785591456Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 6 09:56:04.788593 containerd[1568]: time="2025-09-06T09:56:04.788537314Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 09:56:04.908503 tar[1559]: linux-amd64/README.md
Sep 6 09:56:04.936257 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 6 09:56:04.968614 containerd[1568]: time="2025-09-06T09:56:04.968550542Z" level=info msg="Start subscribing containerd event"
Sep 6 09:56:04.968714 containerd[1568]: time="2025-09-06T09:56:04.968640270Z" level=info msg="Start recovering state"
Sep 6 09:56:04.968851 containerd[1568]: time="2025-09-06T09:56:04.968796017Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 6 09:56:04.968884 containerd[1568]: time="2025-09-06T09:56:04.968819002Z" level=info msg="Start event monitor"
Sep 6 09:56:04.968884 containerd[1568]: time="2025-09-06T09:56:04.968872861Z" level=info msg="Start cni network conf syncer for default"
Sep 6 09:56:04.968949 containerd[1568]: time="2025-09-06T09:56:04.968882980Z" level=info msg="Start streaming server"
Sep 6 09:56:04.968949 containerd[1568]: time="2025-09-06T09:56:04.968901809Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 6 09:56:04.968949 containerd[1568]: time="2025-09-06T09:56:04.968913812Z" level=info msg="runtime interface starting up..."
Sep 6 09:56:04.968949 containerd[1568]: time="2025-09-06T09:56:04.968924194Z" level=info msg="starting plugins..."
Sep 6 09:56:04.968949 containerd[1568]: time="2025-09-06T09:56:04.968948525Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 6 09:56:04.969039 containerd[1568]: time="2025-09-06T09:56:04.968913181Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 6 09:56:04.969181 containerd[1568]: time="2025-09-06T09:56:04.969160939Z" level=info msg="containerd successfully booted in 0.214459s"
Sep 6 09:56:04.969310 systemd[1]: Started containerd.service - containerd container runtime.
Sep 6 09:56:05.508174 systemd-networkd[1482]: eth0: Gained IPv6LL
Sep 6 09:56:05.511837 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 6 09:56:05.513835 systemd[1]: Reached target network-online.target - Network is Online.
Sep 6 09:56:05.517050 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 6 09:56:05.519592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 09:56:05.521777 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 6 09:56:05.554023 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 6 09:56:05.563435 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 6 09:56:05.563723 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 6 09:56:05.565288 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 6 09:56:06.891380 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 09:56:06.893165 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 6 09:56:06.894592 systemd[1]: Startup finished in 2.822s (kernel) + 7.013s (initrd) + 5.314s (userspace) = 15.149s.
Sep 6 09:56:06.923259 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 6 09:56:07.483836 kubelet[1682]: E0906 09:56:07.483758 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 09:56:07.488073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 09:56:07.488299 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 09:56:07.488716 systemd[1]: kubelet.service: Consumed 1.698s CPU time, 265.8M memory peak.
Sep 6 09:56:07.692783 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 6 09:56:07.694255 systemd[1]: Started sshd@0-10.0.0.40:22-10.0.0.1:50986.service - OpenSSH per-connection server daemon (10.0.0.1:50986).
Sep 6 09:56:07.775436 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 50986 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:56:07.777093 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:56:07.783735 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 6 09:56:07.784967 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 6 09:56:07.792114 systemd-logind[1550]: New session 1 of user core.
Sep 6 09:56:07.815479 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 6 09:56:07.819080 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 6 09:56:07.844163 (systemd)[1700]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 6 09:56:07.846504 systemd-logind[1550]: New session c1 of user core.
Sep 6 09:56:08.006527 systemd[1700]: Queued start job for default target default.target.
Sep 6 09:56:08.025460 systemd[1700]: Created slice app.slice - User Application Slice.
Sep 6 09:56:08.025497 systemd[1700]: Reached target paths.target - Paths.
Sep 6 09:56:08.025549 systemd[1700]: Reached target timers.target - Timers.
Sep 6 09:56:08.027362 systemd[1700]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 6 09:56:08.038391 systemd[1700]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 6 09:56:08.038509 systemd[1700]: Reached target sockets.target - Sockets.
Sep 6 09:56:08.038552 systemd[1700]: Reached target basic.target - Basic System.
Sep 6 09:56:08.038592 systemd[1700]: Reached target default.target - Main User Target.
Sep 6 09:56:08.038628 systemd[1700]: Startup finished in 185ms.
Sep 6 09:56:08.039161 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 6 09:56:08.050009 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 6 09:56:08.122778 systemd[1]: Started sshd@1-10.0.0.40:22-10.0.0.1:50992.service - OpenSSH per-connection server daemon (10.0.0.1:50992).
Sep 6 09:56:08.175984 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 50992 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:56:08.177729 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:56:08.183010 systemd-logind[1550]: New session 2 of user core.
Sep 6 09:56:08.201062 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 6 09:56:08.257994 sshd[1714]: Connection closed by 10.0.0.1 port 50992
Sep 6 09:56:08.258352 sshd-session[1711]: pam_unix(sshd:session): session closed for user core
Sep 6 09:56:08.277807 systemd[1]: sshd@1-10.0.0.40:22-10.0.0.1:50992.service: Deactivated successfully.
Sep 6 09:56:08.280339 systemd[1]: session-2.scope: Deactivated successfully.
Sep 6 09:56:08.281202 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit.
Sep 6 09:56:08.284585 systemd[1]: Started sshd@2-10.0.0.40:22-10.0.0.1:51006.service - OpenSSH per-connection server daemon (10.0.0.1:51006).
Sep 6 09:56:08.285299 systemd-logind[1550]: Removed session 2.
Sep 6 09:56:08.430077 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 51006 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:56:08.431710 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:56:08.436982 systemd-logind[1550]: New session 3 of user core.
Sep 6 09:56:08.447980 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 6 09:56:08.498381 sshd[1723]: Connection closed by 10.0.0.1 port 51006
Sep 6 09:56:08.498752 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
Sep 6 09:56:08.514814 systemd[1]: sshd@2-10.0.0.40:22-10.0.0.1:51006.service: Deactivated successfully.
Sep 6 09:56:08.516572 systemd[1]: session-3.scope: Deactivated successfully.
Sep 6 09:56:08.517433 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit.
Sep 6 09:56:08.519894 systemd[1]: Started sshd@3-10.0.0.40:22-10.0.0.1:51008.service - OpenSSH per-connection server daemon (10.0.0.1:51008).
Sep 6 09:56:08.520701 systemd-logind[1550]: Removed session 3.
Sep 6 09:56:08.586223 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 51008 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:56:08.588231 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:56:08.593376 systemd-logind[1550]: New session 4 of user core.
Sep 6 09:56:08.608041 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 6 09:56:08.665397 sshd[1732]: Connection closed by 10.0.0.1 port 51008
Sep 6 09:56:08.665904 sshd-session[1729]: pam_unix(sshd:session): session closed for user core
Sep 6 09:56:08.677684 systemd[1]: sshd@3-10.0.0.40:22-10.0.0.1:51008.service: Deactivated successfully.
Sep 6 09:56:08.679450 systemd[1]: session-4.scope: Deactivated successfully.
Sep 6 09:56:08.680348 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit.
Sep 6 09:56:08.683038 systemd[1]: Started sshd@4-10.0.0.40:22-10.0.0.1:51022.service - OpenSSH per-connection server daemon (10.0.0.1:51022).
Sep 6 09:56:08.683625 systemd-logind[1550]: Removed session 4.
Sep 6 09:56:08.755549 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 51022 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:56:08.756967 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:56:08.761869 systemd-logind[1550]: New session 5 of user core.
Sep 6 09:56:08.777985 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 6 09:56:08.840403 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 6 09:56:08.840837 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 09:56:08.870426 sudo[1742]: pam_unix(sudo:session): session closed for user root
Sep 6 09:56:08.871962 sshd[1741]: Connection closed by 10.0.0.1 port 51022
Sep 6 09:56:08.872334 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Sep 6 09:56:08.885417 systemd[1]: sshd@4-10.0.0.40:22-10.0.0.1:51022.service: Deactivated successfully.
Sep 6 09:56:08.887325 systemd[1]: session-5.scope: Deactivated successfully.
Sep 6 09:56:08.888085 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit.
Sep 6 09:56:08.891142 systemd[1]: Started sshd@5-10.0.0.40:22-10.0.0.1:51032.service - OpenSSH per-connection server daemon (10.0.0.1:51032).
Sep 6 09:56:08.891746 systemd-logind[1550]: Removed session 5.
Sep 6 09:56:08.949740 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 51032 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:56:08.951060 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:56:08.955591 systemd-logind[1550]: New session 6 of user core.
Sep 6 09:56:08.964962 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 6 09:56:09.019313 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 6 09:56:09.019608 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 09:56:09.026934 sudo[1753]: pam_unix(sudo:session): session closed for user root
Sep 6 09:56:09.035634 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 6 09:56:09.036092 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 09:56:09.046003 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 6 09:56:09.099158 augenrules[1775]: No rules
Sep 6 09:56:09.101059 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 6 09:56:09.101361 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 6 09:56:09.102546 sudo[1752]: pam_unix(sudo:session): session closed for user root
Sep 6 09:56:09.104198 sshd[1751]: Connection closed by 10.0.0.1 port 51032
Sep 6 09:56:09.104544 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Sep 6 09:56:09.117782 systemd[1]: sshd@5-10.0.0.40:22-10.0.0.1:51032.service: Deactivated successfully.
Sep 6 09:56:09.119797 systemd[1]: session-6.scope: Deactivated successfully.
Sep 6 09:56:09.120682 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit.
Sep 6 09:56:09.123609 systemd[1]: Started sshd@6-10.0.0.40:22-10.0.0.1:51046.service - OpenSSH per-connection server daemon (10.0.0.1:51046).
Sep 6 09:56:09.124415 systemd-logind[1550]: Removed session 6.
Sep 6 09:56:09.173628 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 51046 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:56:09.175291 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:56:09.180177 systemd-logind[1550]: New session 7 of user core.
Sep 6 09:56:09.189993 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 6 09:56:09.244213 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 6 09:56:09.244527 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 09:56:09.973675 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 6 09:56:09.996168 (dockerd)[1809]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 6 09:56:10.498033 dockerd[1809]: time="2025-09-06T09:56:10.497955014Z" level=info msg="Starting up"
Sep 6 09:56:10.498811 dockerd[1809]: time="2025-09-06T09:56:10.498773204Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 6 09:56:10.516270 dockerd[1809]: time="2025-09-06T09:56:10.516212572Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 6 09:56:10.863486 dockerd[1809]: time="2025-09-06T09:56:10.863357260Z" level=info msg="Loading containers: start."
Sep 6 09:56:10.876883 kernel: Initializing XFRM netlink socket
Sep 6 09:56:11.155041 systemd-networkd[1482]: docker0: Link UP
Sep 6 09:56:11.159923 dockerd[1809]: time="2025-09-06T09:56:11.159866458Z" level=info msg="Loading containers: done."
Sep 6 09:56:11.179529 dockerd[1809]: time="2025-09-06T09:56:11.179463658Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 6 09:56:11.179755 dockerd[1809]: time="2025-09-06T09:56:11.179556240Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 6 09:56:11.179755 dockerd[1809]: time="2025-09-06T09:56:11.179650250Z" level=info msg="Initializing buildkit"
Sep 6 09:56:11.212846 dockerd[1809]: time="2025-09-06T09:56:11.212762894Z" level=info msg="Completed buildkit initialization"
Sep 6 09:56:11.219936 dockerd[1809]: time="2025-09-06T09:56:11.219904375Z" level=info msg="Daemon has completed initialization"
Sep 6 09:56:11.220027 dockerd[1809]: time="2025-09-06T09:56:11.219973052Z" level=info msg="API listen on /run/docker.sock"
Sep 6 09:56:11.220783 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 6 09:56:12.569844 containerd[1568]: time="2025-09-06T09:56:12.569732320Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 6 09:56:13.180800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2003871022.mount: Deactivated successfully.
Sep 6 09:56:14.839926 containerd[1568]: time="2025-09-06T09:56:14.839814687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:14.840682 containerd[1568]: time="2025-09-06T09:56:14.840619302Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687"
Sep 6 09:56:14.842097 containerd[1568]: time="2025-09-06T09:56:14.842062063Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:14.845973 containerd[1568]: time="2025-09-06T09:56:14.845922109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:14.847020 containerd[1568]: time="2025-09-06T09:56:14.846977013Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.277150387s"
Sep 6 09:56:14.847020 containerd[1568]: time="2025-09-06T09:56:14.847014397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\""
Sep 6 09:56:14.847963 containerd[1568]: time="2025-09-06T09:56:14.847913633Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 6 09:56:16.640319 containerd[1568]: time="2025-09-06T09:56:16.640229623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:16.641509 containerd[1568]: time="2025-09-06T09:56:16.641473146Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128"
Sep 6 09:56:16.642868 containerd[1568]: time="2025-09-06T09:56:16.642820801Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:16.645759 containerd[1568]: time="2025-09-06T09:56:16.645716325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:16.647092 containerd[1568]: time="2025-09-06T09:56:16.646905375Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.798950268s"
Sep 6 09:56:16.647092 containerd[1568]: time="2025-09-06T09:56:16.646961185Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\""
Sep 6 09:56:16.647717 containerd[1568]: time="2025-09-06T09:56:16.647522285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 6 09:56:17.604426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 6 09:56:17.606179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 09:56:18.108585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 09:56:18.115224 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 6 09:56:18.333177 kubelet[2098]: E0906 09:56:18.333110 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 09:56:18.338782 containerd[1568]: time="2025-09-06T09:56:18.338722974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:18.340411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 09:56:18.340589 containerd[1568]: time="2025-09-06T09:56:18.340542750Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036"
Sep 6 09:56:18.340605 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 09:56:18.341003 systemd[1]: kubelet.service: Consumed 644ms CPU time, 110.3M memory peak.
Sep 6 09:56:18.341810 containerd[1568]: time="2025-09-06T09:56:18.341773930Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:18.344424 containerd[1568]: time="2025-09-06T09:56:18.344356182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:18.345255 containerd[1568]: time="2025-09-06T09:56:18.345202617Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.697650279s"
Sep 6 09:56:18.345255 containerd[1568]: time="2025-09-06T09:56:18.345249725Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\""
Sep 6 09:56:18.345775 containerd[1568]: time="2025-09-06T09:56:18.345737520Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 6 09:56:19.429019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210531399.mount: Deactivated successfully.
Sep 6 09:56:19.924206 containerd[1568]: time="2025-09-06T09:56:19.924143965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:19.925104 containerd[1568]: time="2025-09-06T09:56:19.925078139Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170"
Sep 6 09:56:19.926411 containerd[1568]: time="2025-09-06T09:56:19.926374985Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:19.928331 containerd[1568]: time="2025-09-06T09:56:19.928300099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:19.928841 containerd[1568]: time="2025-09-06T09:56:19.928778492Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.583008219s"
Sep 6 09:56:19.928934 containerd[1568]: time="2025-09-06T09:56:19.928841017Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\""
Sep 6 09:56:19.929358 containerd[1568]: time="2025-09-06T09:56:19.929338454Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 6 09:56:20.536934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2084788581.mount: Deactivated successfully.
Sep 6 09:56:21.620347 containerd[1568]: time="2025-09-06T09:56:21.620267247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:21.621282 containerd[1568]: time="2025-09-06T09:56:21.621219290Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 6 09:56:21.622578 containerd[1568]: time="2025-09-06T09:56:21.622503347Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:21.625731 containerd[1568]: time="2025-09-06T09:56:21.625688338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:21.627028 containerd[1568]: time="2025-09-06T09:56:21.626963976Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.697596361s"
Sep 6 09:56:21.627028 containerd[1568]: time="2025-09-06T09:56:21.627010068Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 6 09:56:21.627641 containerd[1568]: time="2025-09-06T09:56:21.627603572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 6 09:56:22.354238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869281634.mount: Deactivated successfully.
Sep 6 09:56:22.367636 containerd[1568]: time="2025-09-06T09:56:22.367565500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 6 09:56:22.368850 containerd[1568]: time="2025-09-06T09:56:22.368795142Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 6 09:56:22.371097 containerd[1568]: time="2025-09-06T09:56:22.371025419Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 6 09:56:22.374080 containerd[1568]: time="2025-09-06T09:56:22.374013205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 6 09:56:22.374874 containerd[1568]: time="2025-09-06T09:56:22.374785568Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 747.152818ms"
Sep 6 09:56:22.374916 containerd[1568]: time="2025-09-06T09:56:22.374871173Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 6 09:56:22.375639 containerd[1568]: time="2025-09-06T09:56:22.375589140Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 6 09:56:22.977259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291159270.mount: Deactivated successfully.
Sep 6 09:56:24.950635 containerd[1568]: time="2025-09-06T09:56:24.950552819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:24.951570 containerd[1568]: time="2025-09-06T09:56:24.951516148Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 6 09:56:24.952834 containerd[1568]: time="2025-09-06T09:56:24.952773508Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:24.955745 containerd[1568]: time="2025-09-06T09:56:24.955685612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:24.957071 containerd[1568]: time="2025-09-06T09:56:24.957009084Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.581369697s"
Sep 6 09:56:24.957071 containerd[1568]: time="2025-09-06T09:56:24.957068831Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 6 09:56:27.624121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 09:56:27.624278 systemd[1]: kubelet.service: Consumed 644ms CPU time, 110.3M memory peak.
Sep 6 09:56:27.626369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 09:56:27.649971 systemd[1]: Reload requested from client PID 2256 ('systemctl') (unit session-7.scope)...
Sep 6 09:56:27.649984 systemd[1]: Reloading... Sep 6 09:56:27.726852 zram_generator::config[2296]: No configuration found. Sep 6 09:56:28.009164 systemd[1]: Reloading finished in 358 ms. Sep 6 09:56:28.073654 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 6 09:56:28.073765 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 6 09:56:28.074132 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 09:56:28.074186 systemd[1]: kubelet.service: Consumed 151ms CPU time, 98.4M memory peak. Sep 6 09:56:28.075816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 09:56:28.242036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 09:56:28.246677 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 6 09:56:28.301815 kubelet[2346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 09:56:28.301815 kubelet[2346]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 09:56:28.301815 kubelet[2346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 09:56:28.302293 kubelet[2346]: I0906 09:56:28.301814 2346 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 09:56:28.613546 kubelet[2346]: I0906 09:56:28.613497 2346 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 09:56:28.613546 kubelet[2346]: I0906 09:56:28.613533 2346 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 09:56:28.613876 kubelet[2346]: I0906 09:56:28.613857 2346 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 09:56:28.738590 kubelet[2346]: E0906 09:56:28.738532 2346 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" Sep 6 09:56:28.739484 kubelet[2346]: I0906 09:56:28.739459 2346 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 09:56:28.747016 kubelet[2346]: I0906 09:56:28.746992 2346 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 6 09:56:28.753223 kubelet[2346]: I0906 09:56:28.753196 2346 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 09:56:28.754598 kubelet[2346]: I0906 09:56:28.754549 2346 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 09:56:28.754833 kubelet[2346]: I0906 09:56:28.754588 2346 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 09:56:28.754950 kubelet[2346]: I0906 09:56:28.754837 2346 topology_manager.go:138] "Creating topology manager with none policy" Sep 
6 09:56:28.754950 kubelet[2346]: I0906 09:56:28.754848 2346 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 09:56:28.755013 kubelet[2346]: I0906 09:56:28.755002 2346 state_mem.go:36] "Initialized new in-memory state store" Sep 6 09:56:28.757545 kubelet[2346]: I0906 09:56:28.757521 2346 kubelet.go:446] "Attempting to sync node with API server" Sep 6 09:56:28.757600 kubelet[2346]: I0906 09:56:28.757558 2346 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 09:56:28.757600 kubelet[2346]: I0906 09:56:28.757595 2346 kubelet.go:352] "Adding apiserver pod source" Sep 6 09:56:28.757644 kubelet[2346]: I0906 09:56:28.757614 2346 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 09:56:28.761841 kubelet[2346]: W0906 09:56:28.760256 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Sep 6 09:56:28.761841 kubelet[2346]: E0906 09:56:28.760348 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" Sep 6 09:56:28.761841 kubelet[2346]: W0906 09:56:28.760611 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Sep 6 09:56:28.761841 kubelet[2346]: E0906 09:56:28.760643 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" Sep 6 09:56:28.762260 kubelet[2346]: I0906 09:56:28.762235 2346 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 6 09:56:28.763126 kubelet[2346]: I0906 09:56:28.763105 2346 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 09:56:28.763187 kubelet[2346]: W0906 09:56:28.763170 2346 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 09:56:28.765479 kubelet[2346]: I0906 09:56:28.765454 2346 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 09:56:28.765518 kubelet[2346]: I0906 09:56:28.765493 2346 server.go:1287] "Started kubelet" Sep 6 09:56:28.765777 kubelet[2346]: I0906 09:56:28.765741 2346 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 09:56:28.766048 kubelet[2346]: I0906 09:56:28.765985 2346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 09:56:28.766363 kubelet[2346]: I0906 09:56:28.766341 2346 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 09:56:28.767019 kubelet[2346]: I0906 09:56:28.767003 2346 server.go:479] "Adding debug handlers to kubelet server" Sep 6 09:56:28.769752 kubelet[2346]: I0906 09:56:28.769721 2346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 09:56:28.769969 kubelet[2346]: I0906 09:56:28.769943 2346 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 09:56:28.772350 kubelet[2346]: E0906 09:56:28.772312 2346 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" Sep 6 09:56:28.772413 kubelet[2346]: I0906 09:56:28.772368 2346 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 09:56:28.772534 kubelet[2346]: E0906 09:56:28.772456 2346 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 09:56:28.772626 kubelet[2346]: I0906 09:56:28.772611 2346 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 09:56:28.772717 kubelet[2346]: I0906 09:56:28.772695 2346 reconciler.go:26] "Reconciler: start to sync state" Sep 6 09:56:28.773113 kubelet[2346]: W0906 09:56:28.773053 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Sep 6 09:56:28.773156 kubelet[2346]: E0906 09:56:28.773112 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" Sep 6 09:56:28.773423 kubelet[2346]: E0906 09:56:28.773379 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="200ms" Sep 6 09:56:28.774184 kubelet[2346]: I0906 09:56:28.774166 2346 factory.go:221] Registration of the containerd container factory successfully Sep 6 09:56:28.774184 kubelet[2346]: I0906 09:56:28.774179 2346 factory.go:221] Registration of the systemd container factory successfully Sep 6 09:56:28.774280 kubelet[2346]: I0906 09:56:28.774249 2346 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 09:56:28.774802 kubelet[2346]: E0906 09:56:28.772699 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.40:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.40:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1862a8f83d861829 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 09:56:28.765468713 +0000 UTC m=+0.514387170,LastTimestamp:2025-09-06 09:56:28.765468713 +0000 UTC m=+0.514387170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 6 09:56:28.787887 kubelet[2346]: I0906 09:56:28.787627 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 09:56:28.788920 kubelet[2346]: I0906 09:56:28.788892 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 09:56:28.788961 kubelet[2346]: I0906 09:56:28.788930 2346 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 09:56:28.788961 kubelet[2346]: I0906 09:56:28.788956 2346 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 6 09:56:28.789024 kubelet[2346]: I0906 09:56:28.788969 2346 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 09:56:28.789072 kubelet[2346]: E0906 09:56:28.789024 2346 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 09:56:28.791447 kubelet[2346]: W0906 09:56:28.791400 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Sep 6 09:56:28.791506 kubelet[2346]: E0906 09:56:28.791460 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" Sep 6 09:56:28.792363 kubelet[2346]: I0906 09:56:28.792338 2346 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 09:56:28.792363 kubelet[2346]: I0906 09:56:28.792354 2346 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 09:56:28.792427 kubelet[2346]: I0906 09:56:28.792372 2346 state_mem.go:36] "Initialized new in-memory state store" Sep 6 09:56:28.872790 kubelet[2346]: E0906 09:56:28.872603 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:28.890014 kubelet[2346]: E0906 09:56:28.889951 2346 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 09:56:28.973543 kubelet[2346]: E0906 09:56:28.973468 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:28.974079 kubelet[2346]: E0906 09:56:28.974031 2346 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="400ms" Sep 6 09:56:29.074531 kubelet[2346]: E0906 09:56:29.074476 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:29.090752 kubelet[2346]: E0906 09:56:29.090697 2346 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 09:56:29.175580 kubelet[2346]: E0906 09:56:29.175373 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:29.275551 kubelet[2346]: E0906 09:56:29.275496 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:29.283114 kubelet[2346]: I0906 09:56:29.283051 2346 policy_none.go:49] "None policy: Start" Sep 6 09:56:29.283114 kubelet[2346]: I0906 09:56:29.283103 2346 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 09:56:29.283114 kubelet[2346]: I0906 09:56:29.283130 2346 state_mem.go:35] "Initializing new in-memory state store" Sep 6 09:56:29.291389 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 6 09:56:29.305181 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 6 09:56:29.308898 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 6 09:56:29.362578 kubelet[2346]: I0906 09:56:29.362348 2346 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 09:56:29.363252 kubelet[2346]: I0906 09:56:29.362686 2346 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 09:56:29.363252 kubelet[2346]: I0906 09:56:29.362705 2346 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 09:56:29.363252 kubelet[2346]: I0906 09:56:29.362924 2346 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 09:56:29.364076 kubelet[2346]: E0906 09:56:29.364041 2346 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 09:56:29.364160 kubelet[2346]: E0906 09:56:29.364101 2346 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 6 09:56:29.374688 kubelet[2346]: E0906 09:56:29.374632 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="800ms" Sep 6 09:56:29.464341 kubelet[2346]: I0906 09:56:29.464215 2346 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 09:56:29.464733 kubelet[2346]: E0906 09:56:29.464697 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Sep 6 09:56:29.501857 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. 
Sep 6 09:56:29.525128 kubelet[2346]: E0906 09:56:29.525054 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:56:29.528350 systemd[1]: Created slice kubepods-burstable-pod49965dd4df1a622bb76f8930886ba15a.slice - libcontainer container kubepods-burstable-pod49965dd4df1a622bb76f8930886ba15a.slice. Sep 6 09:56:29.539319 kubelet[2346]: E0906 09:56:29.539272 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:56:29.542276 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 6 09:56:29.544408 kubelet[2346]: E0906 09:56:29.544368 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:56:29.575812 kubelet[2346]: I0906 09:56:29.575751 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49965dd4df1a622bb76f8930886ba15a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49965dd4df1a622bb76f8930886ba15a\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:56:29.575943 kubelet[2346]: I0906 09:56:29.575848 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49965dd4df1a622bb76f8930886ba15a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"49965dd4df1a622bb76f8930886ba15a\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:56:29.575943 kubelet[2346]: I0906 09:56:29.575879 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/49965dd4df1a622bb76f8930886ba15a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"49965dd4df1a622bb76f8930886ba15a\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:56:29.575943 kubelet[2346]: I0906 09:56:29.575903 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:29.575943 kubelet[2346]: I0906 09:56:29.575925 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:29.576037 kubelet[2346]: I0906 09:56:29.575950 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 6 09:56:29.576037 kubelet[2346]: I0906 09:56:29.575990 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:29.576037 kubelet[2346]: I0906 09:56:29.576025 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:29.576108 kubelet[2346]: I0906 09:56:29.576050 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:29.616665 kubelet[2346]: W0906 09:56:29.616579 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Sep 6 09:56:29.616776 kubelet[2346]: E0906 09:56:29.616665 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" Sep 6 09:56:29.626636 kubelet[2346]: W0906 09:56:29.626561 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Sep 6 09:56:29.626697 kubelet[2346]: E0906 09:56:29.626649 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: 
connect: connection refused" logger="UnhandledError" Sep 6 09:56:29.666713 kubelet[2346]: I0906 09:56:29.666645 2346 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 09:56:29.667181 kubelet[2346]: E0906 09:56:29.667135 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Sep 6 09:56:29.826853 kubelet[2346]: E0906 09:56:29.826670 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:29.827704 containerd[1568]: time="2025-09-06T09:56:29.827653942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 6 09:56:29.839849 kubelet[2346]: E0906 09:56:29.839774 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:29.840409 containerd[1568]: time="2025-09-06T09:56:29.840369064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49965dd4df1a622bb76f8930886ba15a,Namespace:kube-system,Attempt:0,}" Sep 6 09:56:29.845795 kubelet[2346]: E0906 09:56:29.845759 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:29.846429 containerd[1568]: time="2025-09-06T09:56:29.846380506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 6 09:56:30.045416 containerd[1568]: time="2025-09-06T09:56:30.045360149Z" level=info msg="connecting to shim 
ad3de4f8f43dbeb5d017e231f99af0f15b74abe5f47c408d158d3bc66f66ed84" address="unix:///run/containerd/s/1b10c9d2bc9965b786812342d16d700a357a468632fc1a8c735e5f5c38258400" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:56:30.052237 containerd[1568]: time="2025-09-06T09:56:30.052182719Z" level=info msg="connecting to shim 77b5867c58891989548880291e0f3ba105530cd4d87577c06b9c5f84683bafae" address="unix:///run/containerd/s/b887ee4fe74a3f6b4e0fbed8bd745dea0bfec40c41cddd3fddc3c23e36d9fefb" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:56:30.055828 containerd[1568]: time="2025-09-06T09:56:30.055772179Z" level=info msg="connecting to shim 4e04c67ed374066a0035742f5c989ef04b4aa63a31ebc9a3e845ef9880bd5482" address="unix:///run/containerd/s/b8fb4f0422521440fa2c66686bab35fc3a63af6c3686b3fc335d983620bc8057" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:56:30.071332 kubelet[2346]: I0906 09:56:30.071297 2346 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 09:56:30.071800 kubelet[2346]: E0906 09:56:30.071767 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Sep 6 09:56:30.085082 systemd[1]: Started cri-containerd-ad3de4f8f43dbeb5d017e231f99af0f15b74abe5f47c408d158d3bc66f66ed84.scope - libcontainer container ad3de4f8f43dbeb5d017e231f99af0f15b74abe5f47c408d158d3bc66f66ed84. Sep 6 09:56:30.093137 systemd[1]: Started cri-containerd-4e04c67ed374066a0035742f5c989ef04b4aa63a31ebc9a3e845ef9880bd5482.scope - libcontainer container 4e04c67ed374066a0035742f5c989ef04b4aa63a31ebc9a3e845ef9880bd5482. Sep 6 09:56:30.097891 systemd[1]: Started cri-containerd-77b5867c58891989548880291e0f3ba105530cd4d87577c06b9c5f84683bafae.scope - libcontainer container 77b5867c58891989548880291e0f3ba105530cd4d87577c06b9c5f84683bafae. 
Sep 6 09:56:30.140564 kubelet[2346]: W0906 09:56:30.140413 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Sep 6 09:56:30.140857 kubelet[2346]: E0906 09:56:30.140831 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" Sep 6 09:56:30.150775 containerd[1568]: time="2025-09-06T09:56:30.150536203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad3de4f8f43dbeb5d017e231f99af0f15b74abe5f47c408d158d3bc66f66ed84\"" Sep 6 09:56:30.151719 kubelet[2346]: E0906 09:56:30.151691 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:30.156288 containerd[1568]: time="2025-09-06T09:56:30.156258098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e04c67ed374066a0035742f5c989ef04b4aa63a31ebc9a3e845ef9880bd5482\"" Sep 6 09:56:30.156432 containerd[1568]: time="2025-09-06T09:56:30.156407871Z" level=info msg="CreateContainer within sandbox \"ad3de4f8f43dbeb5d017e231f99af0f15b74abe5f47c408d158d3bc66f66ed84\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 09:56:30.156966 kubelet[2346]: E0906 09:56:30.156945 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:30.158449 kubelet[2346]: W0906 09:56:30.158314 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.40:6443: connect: connection refused Sep 6 09:56:30.158449 kubelet[2346]: E0906 09:56:30.158407 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" Sep 6 09:56:30.158574 containerd[1568]: time="2025-09-06T09:56:30.158450639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49965dd4df1a622bb76f8930886ba15a,Namespace:kube-system,Attempt:0,} returns sandbox id \"77b5867c58891989548880291e0f3ba105530cd4d87577c06b9c5f84683bafae\"" Sep 6 09:56:30.158772 containerd[1568]: time="2025-09-06T09:56:30.158746521Z" level=info msg="CreateContainer within sandbox \"4e04c67ed374066a0035742f5c989ef04b4aa63a31ebc9a3e845ef9880bd5482\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 09:56:30.158917 kubelet[2346]: E0906 09:56:30.158891 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:30.160567 containerd[1568]: time="2025-09-06T09:56:30.160529631Z" level=info msg="CreateContainer within sandbox \"77b5867c58891989548880291e0f3ba105530cd4d87577c06b9c5f84683bafae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 09:56:30.166893 containerd[1568]: time="2025-09-06T09:56:30.166793511Z" level=info msg="Container 
b535186984501e83dc4e196fbbaa5f69508a3c2235cc1501d83278046c1934a1: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:56:30.172031 containerd[1568]: time="2025-09-06T09:56:30.171988462Z" level=info msg="Container 39626cbe5913621604772bccbf8a4ad8637e686450b3c92475cde6ff49ac629a: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:56:30.175221 kubelet[2346]: E0906 09:56:30.175192 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="1.6s" Sep 6 09:56:30.179582 containerd[1568]: time="2025-09-06T09:56:30.179533163Z" level=info msg="CreateContainer within sandbox \"ad3de4f8f43dbeb5d017e231f99af0f15b74abe5f47c408d158d3bc66f66ed84\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b535186984501e83dc4e196fbbaa5f69508a3c2235cc1501d83278046c1934a1\"" Sep 6 09:56:30.180083 containerd[1568]: time="2025-09-06T09:56:30.180049972Z" level=info msg="StartContainer for \"b535186984501e83dc4e196fbbaa5f69508a3c2235cc1501d83278046c1934a1\"" Sep 6 09:56:30.181050 containerd[1568]: time="2025-09-06T09:56:30.181021435Z" level=info msg="connecting to shim b535186984501e83dc4e196fbbaa5f69508a3c2235cc1501d83278046c1934a1" address="unix:///run/containerd/s/1b10c9d2bc9965b786812342d16d700a357a468632fc1a8c735e5f5c38258400" protocol=ttrpc version=3 Sep 6 09:56:30.186610 containerd[1568]: time="2025-09-06T09:56:30.185988648Z" level=info msg="Container f9c096f632bb085f2672a4c58a7ab0a78d1f2ed59b2c9354db0a3f2285b85a3d: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:56:30.186610 containerd[1568]: time="2025-09-06T09:56:30.186168192Z" level=info msg="CreateContainer within sandbox \"4e04c67ed374066a0035742f5c989ef04b4aa63a31ebc9a3e845ef9880bd5482\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"39626cbe5913621604772bccbf8a4ad8637e686450b3c92475cde6ff49ac629a\"" Sep 6 09:56:30.187007 containerd[1568]: time="2025-09-06T09:56:30.186969283Z" level=info msg="StartContainer for \"39626cbe5913621604772bccbf8a4ad8637e686450b3c92475cde6ff49ac629a\"" Sep 6 09:56:30.187980 containerd[1568]: time="2025-09-06T09:56:30.187950349Z" level=info msg="connecting to shim 39626cbe5913621604772bccbf8a4ad8637e686450b3c92475cde6ff49ac629a" address="unix:///run/containerd/s/b8fb4f0422521440fa2c66686bab35fc3a63af6c3686b3fc335d983620bc8057" protocol=ttrpc version=3 Sep 6 09:56:30.199454 containerd[1568]: time="2025-09-06T09:56:30.199397671Z" level=info msg="CreateContainer within sandbox \"77b5867c58891989548880291e0f3ba105530cd4d87577c06b9c5f84683bafae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9c096f632bb085f2672a4c58a7ab0a78d1f2ed59b2c9354db0a3f2285b85a3d\"" Sep 6 09:56:30.201106 containerd[1568]: time="2025-09-06T09:56:30.199993279Z" level=info msg="StartContainer for \"f9c096f632bb085f2672a4c58a7ab0a78d1f2ed59b2c9354db0a3f2285b85a3d\"" Sep 6 09:56:30.200950 systemd[1]: Started cri-containerd-b535186984501e83dc4e196fbbaa5f69508a3c2235cc1501d83278046c1934a1.scope - libcontainer container b535186984501e83dc4e196fbbaa5f69508a3c2235cc1501d83278046c1934a1. Sep 6 09:56:30.201654 containerd[1568]: time="2025-09-06T09:56:30.201635918Z" level=info msg="connecting to shim f9c096f632bb085f2672a4c58a7ab0a78d1f2ed59b2c9354db0a3f2285b85a3d" address="unix:///run/containerd/s/b887ee4fe74a3f6b4e0fbed8bd745dea0bfec40c41cddd3fddc3c23e36d9fefb" protocol=ttrpc version=3 Sep 6 09:56:30.210081 systemd[1]: Started cri-containerd-39626cbe5913621604772bccbf8a4ad8637e686450b3c92475cde6ff49ac629a.scope - libcontainer container 39626cbe5913621604772bccbf8a4ad8637e686450b3c92475cde6ff49ac629a. 
Sep 6 09:56:30.227934 systemd[1]: Started cri-containerd-f9c096f632bb085f2672a4c58a7ab0a78d1f2ed59b2c9354db0a3f2285b85a3d.scope - libcontainer container f9c096f632bb085f2672a4c58a7ab0a78d1f2ed59b2c9354db0a3f2285b85a3d. Sep 6 09:56:30.259990 containerd[1568]: time="2025-09-06T09:56:30.259953597Z" level=info msg="StartContainer for \"b535186984501e83dc4e196fbbaa5f69508a3c2235cc1501d83278046c1934a1\" returns successfully" Sep 6 09:56:30.336534 containerd[1568]: time="2025-09-06T09:56:30.335922401Z" level=info msg="StartContainer for \"39626cbe5913621604772bccbf8a4ad8637e686450b3c92475cde6ff49ac629a\" returns successfully" Sep 6 09:56:30.343312 containerd[1568]: time="2025-09-06T09:56:30.343260816Z" level=info msg="StartContainer for \"f9c096f632bb085f2672a4c58a7ab0a78d1f2ed59b2c9354db0a3f2285b85a3d\" returns successfully" Sep 6 09:56:30.802111 kubelet[2346]: E0906 09:56:30.802076 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:56:30.802528 kubelet[2346]: E0906 09:56:30.802216 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:30.805608 kubelet[2346]: E0906 09:56:30.805585 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:56:30.805705 kubelet[2346]: E0906 09:56:30.805686 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:30.829657 kubelet[2346]: E0906 09:56:30.829611 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:56:30.829784 kubelet[2346]: E0906 
09:56:30.829763 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:30.873849 kubelet[2346]: I0906 09:56:30.873145 2346 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 09:56:31.506783 kubelet[2346]: I0906 09:56:31.506675 2346 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 6 09:56:31.506783 kubelet[2346]: E0906 09:56:31.506712 2346 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 6 09:56:31.515371 kubelet[2346]: E0906 09:56:31.515341 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:31.616270 kubelet[2346]: E0906 09:56:31.616205 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:31.717208 kubelet[2346]: E0906 09:56:31.717142 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:31.810499 kubelet[2346]: E0906 09:56:31.810369 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 09:56:31.810930 kubelet[2346]: E0906 09:56:31.810555 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:31.817570 kubelet[2346]: E0906 09:56:31.817496 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:31.828839 kubelet[2346]: E0906 09:56:31.828794 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Sep 6 09:56:31.828989 kubelet[2346]: E0906 09:56:31.828972 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:31.918521 kubelet[2346]: E0906 09:56:31.918473 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:32.019165 kubelet[2346]: E0906 09:56:32.019084 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:32.119860 kubelet[2346]: E0906 09:56:32.119800 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:32.220654 kubelet[2346]: E0906 09:56:32.220605 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:32.321305 kubelet[2346]: E0906 09:56:32.321247 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:32.422491 kubelet[2346]: E0906 09:56:32.422338 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:32.573747 kubelet[2346]: I0906 09:56:32.573677 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 09:56:32.582843 kubelet[2346]: I0906 09:56:32.582711 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:32.586544 kubelet[2346]: I0906 09:56:32.586504 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 09:56:32.762353 kubelet[2346]: I0906 09:56:32.762029 2346 apiserver.go:52] "Watching apiserver" Sep 6 09:56:32.764963 kubelet[2346]: E0906 09:56:32.764933 2346 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:32.773464 kubelet[2346]: I0906 09:56:32.773425 2346 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 09:56:32.811125 kubelet[2346]: E0906 09:56:32.811080 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:32.811535 kubelet[2346]: E0906 09:56:32.811253 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:33.691189 systemd[1]: Reload requested from client PID 2622 ('systemctl') (unit session-7.scope)... Sep 6 09:56:33.691206 systemd[1]: Reloading... Sep 6 09:56:33.768880 zram_generator::config[2668]: No configuration found. Sep 6 09:56:34.005763 systemd[1]: Reloading finished in 314 ms. Sep 6 09:56:34.036280 kubelet[2346]: I0906 09:56:34.036242 2346 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 09:56:34.036446 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 09:56:34.053585 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 09:56:34.053940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 09:56:34.054001 systemd[1]: kubelet.service: Consumed 1.175s CPU time, 131.5M memory peak. Sep 6 09:56:34.056842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 09:56:34.264166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 6 09:56:34.268276 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 6 09:56:34.402847 kubelet[2710]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 09:56:34.402847 kubelet[2710]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 09:56:34.402847 kubelet[2710]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 09:56:34.402847 kubelet[2710]: I0906 09:56:34.402770 2710 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 09:56:34.409837 kubelet[2710]: I0906 09:56:34.409800 2710 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 09:56:34.409837 kubelet[2710]: I0906 09:56:34.409839 2710 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 09:56:34.410067 kubelet[2710]: I0906 09:56:34.410049 2710 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 09:56:34.411126 kubelet[2710]: I0906 09:56:34.411108 2710 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 6 09:56:34.415946 kubelet[2710]: I0906 09:56:34.413010 2710 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 09:56:34.418884 kubelet[2710]: I0906 09:56:34.418860 2710 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 6 09:56:34.423484 kubelet[2710]: I0906 09:56:34.423448 2710 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 09:56:34.423708 kubelet[2710]: I0906 09:56:34.423671 2710 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 09:56:34.423915 kubelet[2710]: I0906 09:56:34.423702 2710 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPol
icyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 09:56:34.424012 kubelet[2710]: I0906 09:56:34.423917 2710 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 09:56:34.424012 kubelet[2710]: I0906 09:56:34.423926 2710 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 09:56:34.424012 kubelet[2710]: I0906 09:56:34.423975 2710 state_mem.go:36] "Initialized new in-memory state store" Sep 6 09:56:34.424144 kubelet[2710]: I0906 09:56:34.424127 2710 kubelet.go:446] "Attempting to sync node with API server" Sep 6 09:56:34.424178 kubelet[2710]: I0906 09:56:34.424150 2710 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 09:56:34.424178 kubelet[2710]: I0906 09:56:34.424170 2710 kubelet.go:352] "Adding apiserver pod source" Sep 6 09:56:34.424223 kubelet[2710]: I0906 09:56:34.424181 2710 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 09:56:34.425163 kubelet[2710]: I0906 09:56:34.425144 2710 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 6 09:56:34.425489 kubelet[2710]: I0906 09:56:34.425468 2710 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 09:56:34.425955 kubelet[2710]: I0906 09:56:34.425931 2710 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 09:56:34.425991 kubelet[2710]: I0906 09:56:34.425970 2710 server.go:1287] "Started kubelet" Sep 6 09:56:34.426244 kubelet[2710]: I0906 09:56:34.426217 2710 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 09:56:34.426571 kubelet[2710]: I0906 09:56:34.426518 2710 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 09:56:34.427560 kubelet[2710]: I0906 09:56:34.427034 2710 server.go:479] "Adding debug handlers to kubelet server" Sep 6 09:56:34.427560 kubelet[2710]: I0906 09:56:34.427217 2710 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 09:56:34.430185 kubelet[2710]: I0906 09:56:34.430168 2710 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 09:56:34.436588 kubelet[2710]: I0906 09:56:34.436335 2710 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 09:56:34.438681 kubelet[2710]: I0906 09:56:34.438660 2710 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 09:56:34.439085 kubelet[2710]: I0906 09:56:34.439068 2710 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 09:56:34.439202 kubelet[2710]: I0906 09:56:34.439190 2710 reconciler.go:26] "Reconciler: start to sync state" Sep 6 09:56:34.439538 kubelet[2710]: E0906 09:56:34.439519 2710 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 09:56:34.439723 kubelet[2710]: E0906 09:56:34.439707 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 09:56:34.443323 kubelet[2710]: I0906 09:56:34.443307 2710 factory.go:221] Registration of the systemd container factory successfully Sep 6 09:56:34.443474 kubelet[2710]: I0906 09:56:34.443458 2710 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 09:56:34.445181 kubelet[2710]: I0906 09:56:34.445165 2710 factory.go:221] Registration of the containerd container factory successfully Sep 6 09:56:34.445263 kubelet[2710]: I0906 09:56:34.445164 2710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 09:56:34.446512 kubelet[2710]: I0906 09:56:34.446492 2710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 09:56:34.446573 kubelet[2710]: I0906 09:56:34.446520 2710 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 09:56:34.446573 kubelet[2710]: I0906 09:56:34.446548 2710 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 6 09:56:34.446573 kubelet[2710]: I0906 09:56:34.446555 2710 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 09:56:34.446644 kubelet[2710]: E0906 09:56:34.446601 2710 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 09:56:34.479685 kubelet[2710]: I0906 09:56:34.479626 2710 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 09:56:34.479685 kubelet[2710]: I0906 09:56:34.479648 2710 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 09:56:34.479685 kubelet[2710]: I0906 09:56:34.479668 2710 state_mem.go:36] "Initialized new in-memory state store" Sep 6 09:56:34.479915 kubelet[2710]: I0906 09:56:34.479877 2710 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 09:56:34.479915 kubelet[2710]: I0906 09:56:34.479894 2710 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 09:56:34.479915 kubelet[2710]: I0906 09:56:34.479919 2710 policy_none.go:49] "None policy: Start" Sep 6 09:56:34.479915 kubelet[2710]: I0906 09:56:34.479929 2710 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 09:56:34.480120 kubelet[2710]: I0906 09:56:34.479939 2710 state_mem.go:35] "Initializing new in-memory state store" Sep 6 09:56:34.480120 kubelet[2710]: I0906 09:56:34.480053 2710 state_mem.go:75] "Updated machine memory state" Sep 6 09:56:34.484270 kubelet[2710]: I0906 09:56:34.484238 2710 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 09:56:34.484513 kubelet[2710]: I0906 09:56:34.484495 2710 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 09:56:34.484558 kubelet[2710]: I0906 09:56:34.484514 2710 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 09:56:34.484740 kubelet[2710]: I0906 09:56:34.484723 2710 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Sep 6 09:56:34.486771 kubelet[2710]: E0906 09:56:34.486745 2710 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 09:56:34.547186 kubelet[2710]: I0906 09:56:34.547055 2710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:34.547186 kubelet[2710]: I0906 09:56:34.547099 2710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 09:56:34.547186 kubelet[2710]: I0906 09:56:34.547140 2710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 09:56:34.554737 kubelet[2710]: E0906 09:56:34.554649 2710 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:34.554995 kubelet[2710]: E0906 09:56:34.554760 2710 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 09:56:34.555155 kubelet[2710]: E0906 09:56:34.555127 2710 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 6 09:56:34.589834 kubelet[2710]: I0906 09:56:34.589798 2710 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 09:56:34.597113 kubelet[2710]: I0906 09:56:34.597087 2710 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 6 09:56:34.597215 kubelet[2710]: I0906 09:56:34.597151 2710 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 6 09:56:34.639564 kubelet[2710]: I0906 09:56:34.639527 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/49965dd4df1a622bb76f8930886ba15a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49965dd4df1a622bb76f8930886ba15a\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:56:34.639564 kubelet[2710]: I0906 09:56:34.639559 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49965dd4df1a622bb76f8930886ba15a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"49965dd4df1a622bb76f8930886ba15a\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:56:34.639754 kubelet[2710]: I0906 09:56:34.639584 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:34.639754 kubelet[2710]: I0906 09:56:34.639603 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:34.639754 kubelet[2710]: I0906 09:56:34.639641 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49965dd4df1a622bb76f8930886ba15a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"49965dd4df1a622bb76f8930886ba15a\") " pod="kube-system/kube-apiserver-localhost" Sep 6 09:56:34.639754 kubelet[2710]: I0906 09:56:34.639687 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:34.639754 kubelet[2710]: I0906 09:56:34.639714 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:34.639893 kubelet[2710]: I0906 09:56:34.639736 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 09:56:34.639893 kubelet[2710]: I0906 09:56:34.639759 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 6 09:56:34.855623 kubelet[2710]: E0906 09:56:34.855582 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:34.855839 kubelet[2710]: E0906 09:56:34.855682 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:34.856005 kubelet[2710]: E0906 09:56:34.855977 2710 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:35.425518 kubelet[2710]: I0906 09:56:35.425413 2710 apiserver.go:52] "Watching apiserver" Sep 6 09:56:35.439944 kubelet[2710]: I0906 09:56:35.439891 2710 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 09:56:35.461532 kubelet[2710]: I0906 09:56:35.461486 2710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 09:56:35.461885 kubelet[2710]: I0906 09:56:35.461865 2710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 09:56:35.462294 kubelet[2710]: E0906 09:56:35.462245 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:35.466461 kubelet[2710]: E0906 09:56:35.466363 2710 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 6 09:56:35.466614 kubelet[2710]: E0906 09:56:35.466597 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:35.467852 kubelet[2710]: E0906 09:56:35.467807 2710 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 09:56:35.468697 kubelet[2710]: E0906 09:56:35.467957 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:35.510250 kubelet[2710]: I0906 09:56:35.510168 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.510125269 podStartE2EDuration="3.510125269s" podCreationTimestamp="2025-09-06 09:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:56:35.510090526 +0000 UTC m=+1.237609226" watchObservedRunningTime="2025-09-06 09:56:35.510125269 +0000 UTC m=+1.237643970" Sep 6 09:56:35.522877 kubelet[2710]: I0906 09:56:35.522800 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.522782366 podStartE2EDuration="3.522782366s" podCreationTimestamp="2025-09-06 09:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:56:35.516003987 +0000 UTC m=+1.243522687" watchObservedRunningTime="2025-09-06 09:56:35.522782366 +0000 UTC m=+1.250301066" Sep 6 09:56:36.463025 kubelet[2710]: E0906 09:56:36.462934 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:36.463387 kubelet[2710]: E0906 09:56:36.463037 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:37.466846 kubelet[2710]: E0906 09:56:37.465168 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:39.725803 kubelet[2710]: I0906 09:56:39.725765 2710 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 09:56:39.726345 containerd[1568]: time="2025-09-06T09:56:39.726231068Z" level=info msg="No cni config template is specified, wait for 
other system components to drop the config." Sep 6 09:56:39.726586 kubelet[2710]: I0906 09:56:39.726414 2710 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 09:56:40.030486 kubelet[2710]: E0906 09:56:40.030356 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:40.044887 kubelet[2710]: I0906 09:56:40.044794 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.044743584 podStartE2EDuration="8.044743584s" podCreationTimestamp="2025-09-06 09:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:56:35.522882442 +0000 UTC m=+1.250401162" watchObservedRunningTime="2025-09-06 09:56:40.044743584 +0000 UTC m=+5.772262284" Sep 6 09:56:40.346312 systemd[1]: Created slice kubepods-besteffort-pod0dbc900e_758e_4543_b001_05207498b5b2.slice - libcontainer container kubepods-besteffort-pod0dbc900e_758e_4543_b001_05207498b5b2.slice. 
Sep 6 09:56:40.392303 kubelet[2710]: I0906 09:56:40.392237 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0dbc900e-758e-4543-b001-05207498b5b2-kube-proxy\") pod \"kube-proxy-h95wh\" (UID: \"0dbc900e-758e-4543-b001-05207498b5b2\") " pod="kube-system/kube-proxy-h95wh" Sep 6 09:56:40.392303 kubelet[2710]: I0906 09:56:40.392276 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0dbc900e-758e-4543-b001-05207498b5b2-lib-modules\") pod \"kube-proxy-h95wh\" (UID: \"0dbc900e-758e-4543-b001-05207498b5b2\") " pod="kube-system/kube-proxy-h95wh" Sep 6 09:56:40.392303 kubelet[2710]: I0906 09:56:40.392299 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0dbc900e-758e-4543-b001-05207498b5b2-xtables-lock\") pod \"kube-proxy-h95wh\" (UID: \"0dbc900e-758e-4543-b001-05207498b5b2\") " pod="kube-system/kube-proxy-h95wh" Sep 6 09:56:40.392303 kubelet[2710]: I0906 09:56:40.392317 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txrj\" (UniqueName: \"kubernetes.io/projected/0dbc900e-758e-4543-b001-05207498b5b2-kube-api-access-2txrj\") pod \"kube-proxy-h95wh\" (UID: \"0dbc900e-758e-4543-b001-05207498b5b2\") " pod="kube-system/kube-proxy-h95wh" Sep 6 09:56:40.470757 kubelet[2710]: E0906 09:56:40.470686 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:40.658473 kubelet[2710]: E0906 09:56:40.658423 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 
6 09:56:40.659865 containerd[1568]: time="2025-09-06T09:56:40.659678966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h95wh,Uid:0dbc900e-758e-4543-b001-05207498b5b2,Namespace:kube-system,Attempt:0,}" Sep 6 09:56:40.682599 containerd[1568]: time="2025-09-06T09:56:40.682549525Z" level=info msg="connecting to shim b2cb812d45d64edb99cb9c39446abadc6ab8f343e5961e2fd07d3dbe0b655665" address="unix:///run/containerd/s/ceb189686bdcbd63f3690607d1f5702b71f19a211e9d5907838d0c02037cf1fa" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:56:40.711969 systemd[1]: Started cri-containerd-b2cb812d45d64edb99cb9c39446abadc6ab8f343e5961e2fd07d3dbe0b655665.scope - libcontainer container b2cb812d45d64edb99cb9c39446abadc6ab8f343e5961e2fd07d3dbe0b655665. Sep 6 09:56:40.751726 containerd[1568]: time="2025-09-06T09:56:40.751683798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h95wh,Uid:0dbc900e-758e-4543-b001-05207498b5b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2cb812d45d64edb99cb9c39446abadc6ab8f343e5961e2fd07d3dbe0b655665\"" Sep 6 09:56:40.752997 kubelet[2710]: E0906 09:56:40.752965 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:40.756566 containerd[1568]: time="2025-09-06T09:56:40.756513970Z" level=info msg="CreateContainer within sandbox \"b2cb812d45d64edb99cb9c39446abadc6ab8f343e5961e2fd07d3dbe0b655665\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 09:56:40.770937 systemd[1]: Created slice kubepods-besteffort-poda9ab7d91_f0e5_4f47_8a1b_8777077f8c59.slice - libcontainer container kubepods-besteffort-poda9ab7d91_f0e5_4f47_8a1b_8777077f8c59.slice. 
Sep 6 09:56:40.774410 containerd[1568]: time="2025-09-06T09:56:40.774359791Z" level=info msg="Container 5f8f8c8a196d8ab9a665f6a2cf02b2c1fddf57b2adf66ce43f08d3f31747d6d7: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:56:40.783557 containerd[1568]: time="2025-09-06T09:56:40.783505090Z" level=info msg="CreateContainer within sandbox \"b2cb812d45d64edb99cb9c39446abadc6ab8f343e5961e2fd07d3dbe0b655665\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5f8f8c8a196d8ab9a665f6a2cf02b2c1fddf57b2adf66ce43f08d3f31747d6d7\"" Sep 6 09:56:40.784091 containerd[1568]: time="2025-09-06T09:56:40.784059476Z" level=info msg="StartContainer for \"5f8f8c8a196d8ab9a665f6a2cf02b2c1fddf57b2adf66ce43f08d3f31747d6d7\"" Sep 6 09:56:40.785842 containerd[1568]: time="2025-09-06T09:56:40.785467186Z" level=info msg="connecting to shim 5f8f8c8a196d8ab9a665f6a2cf02b2c1fddf57b2adf66ce43f08d3f31747d6d7" address="unix:///run/containerd/s/ceb189686bdcbd63f3690607d1f5702b71f19a211e9d5907838d0c02037cf1fa" protocol=ttrpc version=3 Sep 6 09:56:40.796051 kubelet[2710]: I0906 09:56:40.796020 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzjcp\" (UniqueName: \"kubernetes.io/projected/a9ab7d91-f0e5-4f47-8a1b-8777077f8c59-kube-api-access-bzjcp\") pod \"tigera-operator-755d956888-7mwbd\" (UID: \"a9ab7d91-f0e5-4f47-8a1b-8777077f8c59\") " pod="tigera-operator/tigera-operator-755d956888-7mwbd" Sep 6 09:56:40.796116 kubelet[2710]: I0906 09:56:40.796056 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a9ab7d91-f0e5-4f47-8a1b-8777077f8c59-var-lib-calico\") pod \"tigera-operator-755d956888-7mwbd\" (UID: \"a9ab7d91-f0e5-4f47-8a1b-8777077f8c59\") " pod="tigera-operator/tigera-operator-755d956888-7mwbd" Sep 6 09:56:40.808965 systemd[1]: Started 
cri-containerd-5f8f8c8a196d8ab9a665f6a2cf02b2c1fddf57b2adf66ce43f08d3f31747d6d7.scope - libcontainer container 5f8f8c8a196d8ab9a665f6a2cf02b2c1fddf57b2adf66ce43f08d3f31747d6d7. Sep 6 09:56:40.853173 containerd[1568]: time="2025-09-06T09:56:40.853109583Z" level=info msg="StartContainer for \"5f8f8c8a196d8ab9a665f6a2cf02b2c1fddf57b2adf66ce43f08d3f31747d6d7\" returns successfully" Sep 6 09:56:41.078454 containerd[1568]: time="2025-09-06T09:56:41.078318201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-7mwbd,Uid:a9ab7d91-f0e5-4f47-8a1b-8777077f8c59,Namespace:tigera-operator,Attempt:0,}" Sep 6 09:56:41.101349 containerd[1568]: time="2025-09-06T09:56:41.101295620Z" level=info msg="connecting to shim 4ce55428f8bc240dd40ee2d48a9a20be17a29542b1b4359e827dde9004d76f9c" address="unix:///run/containerd/s/ee77f36dbc227fefdd9f86cc312205469015aec8dac38e9299a457522c2870c9" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:56:41.129963 systemd[1]: Started cri-containerd-4ce55428f8bc240dd40ee2d48a9a20be17a29542b1b4359e827dde9004d76f9c.scope - libcontainer container 4ce55428f8bc240dd40ee2d48a9a20be17a29542b1b4359e827dde9004d76f9c. 
Sep 6 09:56:41.179083 containerd[1568]: time="2025-09-06T09:56:41.179034021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-7mwbd,Uid:a9ab7d91-f0e5-4f47-8a1b-8777077f8c59,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4ce55428f8bc240dd40ee2d48a9a20be17a29542b1b4359e827dde9004d76f9c\"" Sep 6 09:56:41.180992 containerd[1568]: time="2025-09-06T09:56:41.180940033Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 6 09:56:41.473806 kubelet[2710]: E0906 09:56:41.473665 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:42.688151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952419144.mount: Deactivated successfully. Sep 6 09:56:43.237717 containerd[1568]: time="2025-09-06T09:56:43.237664266Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:56:43.238533 containerd[1568]: time="2025-09-06T09:56:43.238470413Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 6 09:56:43.239596 containerd[1568]: time="2025-09-06T09:56:43.239551935Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:56:43.241594 containerd[1568]: time="2025-09-06T09:56:43.241544852Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:56:43.242072 containerd[1568]: time="2025-09-06T09:56:43.242038257Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id 
\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.061057774s" Sep 6 09:56:43.242072 containerd[1568]: time="2025-09-06T09:56:43.242067424Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 6 09:56:43.243971 containerd[1568]: time="2025-09-06T09:56:43.243930226Z" level=info msg="CreateContainer within sandbox \"4ce55428f8bc240dd40ee2d48a9a20be17a29542b1b4359e827dde9004d76f9c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 6 09:56:43.252472 containerd[1568]: time="2025-09-06T09:56:43.252422998Z" level=info msg="Container d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:56:43.258600 containerd[1568]: time="2025-09-06T09:56:43.258560225Z" level=info msg="CreateContainer within sandbox \"4ce55428f8bc240dd40ee2d48a9a20be17a29542b1b4359e827dde9004d76f9c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146\"" Sep 6 09:56:43.259114 containerd[1568]: time="2025-09-06T09:56:43.259021052Z" level=info msg="StartContainer for \"d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146\"" Sep 6 09:56:43.259873 containerd[1568]: time="2025-09-06T09:56:43.259841929Z" level=info msg="connecting to shim d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146" address="unix:///run/containerd/s/ee77f36dbc227fefdd9f86cc312205469015aec8dac38e9299a457522c2870c9" protocol=ttrpc version=3 Sep 6 09:56:43.310410 systemd[1]: Started cri-containerd-d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146.scope - libcontainer container 
d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146. Sep 6 09:56:43.342719 containerd[1568]: time="2025-09-06T09:56:43.342677270Z" level=info msg="StartContainer for \"d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146\" returns successfully" Sep 6 09:56:43.506658 kubelet[2710]: I0906 09:56:43.506416 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h95wh" podStartSLOduration=3.50638437 podStartE2EDuration="3.50638437s" podCreationTimestamp="2025-09-06 09:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:56:41.481276885 +0000 UTC m=+7.208795585" watchObservedRunningTime="2025-09-06 09:56:43.50638437 +0000 UTC m=+9.233903070" Sep 6 09:56:43.506658 kubelet[2710]: I0906 09:56:43.506504 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-7mwbd" podStartSLOduration=1.443878571 podStartE2EDuration="3.50650074s" podCreationTimestamp="2025-09-06 09:56:40 +0000 UTC" firstStartedPulling="2025-09-06 09:56:41.180234458 +0000 UTC m=+6.907753158" lastFinishedPulling="2025-09-06 09:56:43.242856627 +0000 UTC m=+8.970375327" observedRunningTime="2025-09-06 09:56:43.50628647 +0000 UTC m=+9.233805170" watchObservedRunningTime="2025-09-06 09:56:43.50650074 +0000 UTC m=+9.234019430" Sep 6 09:56:45.064596 kubelet[2710]: E0906 09:56:45.064509 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:45.265467 kubelet[2710]: E0906 09:56:45.264996 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:45.504843 kubelet[2710]: E0906 09:56:45.504574 2710 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:45.505333 kubelet[2710]: E0906 09:56:45.505271 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:45.558044 systemd[1]: cri-containerd-d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146.scope: Deactivated successfully. Sep 6 09:56:45.559617 containerd[1568]: time="2025-09-06T09:56:45.559550132Z" level=info msg="received exit event container_id:\"d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146\" id:\"d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146\" pid:3030 exit_status:1 exited_at:{seconds:1757152605 nanos:558969112}" Sep 6 09:56:45.560516 containerd[1568]: time="2025-09-06T09:56:45.560471873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146\" id:\"d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146\" pid:3030 exit_status:1 exited_at:{seconds:1757152605 nanos:558969112}" Sep 6 09:56:45.597614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146-rootfs.mount: Deactivated successfully. 
Sep 6 09:56:46.507466 kubelet[2710]: I0906 09:56:46.507429 2710 scope.go:117] "RemoveContainer" containerID="d391531c860a2fee256b4f8a52bd469defd9815d1889b4786531315bbbd90146" Sep 6 09:56:46.509876 containerd[1568]: time="2025-09-06T09:56:46.509839152Z" level=info msg="CreateContainer within sandbox \"4ce55428f8bc240dd40ee2d48a9a20be17a29542b1b4359e827dde9004d76f9c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 6 09:56:46.522311 containerd[1568]: time="2025-09-06T09:56:46.522250898Z" level=info msg="Container 10917404ecbaf1641bb1f5b44f8ab4fd46f0e237cf54e511e0ef63586b995893: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:56:46.526089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408872110.mount: Deactivated successfully. Sep 6 09:56:46.530771 containerd[1568]: time="2025-09-06T09:56:46.530735253Z" level=info msg="CreateContainer within sandbox \"4ce55428f8bc240dd40ee2d48a9a20be17a29542b1b4359e827dde9004d76f9c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"10917404ecbaf1641bb1f5b44f8ab4fd46f0e237cf54e511e0ef63586b995893\"" Sep 6 09:56:46.531501 containerd[1568]: time="2025-09-06T09:56:46.531467638Z" level=info msg="StartContainer for \"10917404ecbaf1641bb1f5b44f8ab4fd46f0e237cf54e511e0ef63586b995893\"" Sep 6 09:56:46.532754 containerd[1568]: time="2025-09-06T09:56:46.532647211Z" level=info msg="connecting to shim 10917404ecbaf1641bb1f5b44f8ab4fd46f0e237cf54e511e0ef63586b995893" address="unix:///run/containerd/s/ee77f36dbc227fefdd9f86cc312205469015aec8dac38e9299a457522c2870c9" protocol=ttrpc version=3 Sep 6 09:56:46.560964 systemd[1]: Started cri-containerd-10917404ecbaf1641bb1f5b44f8ab4fd46f0e237cf54e511e0ef63586b995893.scope - libcontainer container 10917404ecbaf1641bb1f5b44f8ab4fd46f0e237cf54e511e0ef63586b995893. 
Sep 6 09:56:46.596752 containerd[1568]: time="2025-09-06T09:56:46.596652909Z" level=info msg="StartContainer for \"10917404ecbaf1641bb1f5b44f8ab4fd46f0e237cf54e511e0ef63586b995893\" returns successfully" Sep 6 09:56:48.954244 sudo[1789]: pam_unix(sudo:session): session closed for user root Sep 6 09:56:48.955964 sshd[1788]: Connection closed by 10.0.0.1 port 51046 Sep 6 09:56:48.956712 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Sep 6 09:56:48.961194 systemd[1]: sshd@6-10.0.0.40:22-10.0.0.1:51046.service: Deactivated successfully. Sep 6 09:56:48.963708 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 09:56:48.963965 systemd[1]: session-7.scope: Consumed 5.737s CPU time, 223.7M memory peak. Sep 6 09:56:48.965556 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. Sep 6 09:56:48.966713 systemd-logind[1550]: Removed session 7. Sep 6 09:56:49.745725 update_engine[1552]: I20250906 09:56:49.744953 1552 update_attempter.cc:509] Updating boot flags... Sep 6 09:56:52.348676 systemd[1]: Created slice kubepods-besteffort-pod1cf4184e_c4ec_4c41_b715_45ce527b1841.slice - libcontainer container kubepods-besteffort-pod1cf4184e_c4ec_4c41_b715_45ce527b1841.slice. 
Sep 6 09:56:52.369401 kubelet[2710]: I0906 09:56:52.369321 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjknd\" (UniqueName: \"kubernetes.io/projected/1cf4184e-c4ec-4c41-b715-45ce527b1841-kube-api-access-jjknd\") pod \"calico-typha-b4b5fb4f8-ph8jk\" (UID: \"1cf4184e-c4ec-4c41-b715-45ce527b1841\") " pod="calico-system/calico-typha-b4b5fb4f8-ph8jk" Sep 6 09:56:52.369401 kubelet[2710]: I0906 09:56:52.369397 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1cf4184e-c4ec-4c41-b715-45ce527b1841-typha-certs\") pod \"calico-typha-b4b5fb4f8-ph8jk\" (UID: \"1cf4184e-c4ec-4c41-b715-45ce527b1841\") " pod="calico-system/calico-typha-b4b5fb4f8-ph8jk" Sep 6 09:56:52.369981 kubelet[2710]: I0906 09:56:52.369428 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cf4184e-c4ec-4c41-b715-45ce527b1841-tigera-ca-bundle\") pod \"calico-typha-b4b5fb4f8-ph8jk\" (UID: \"1cf4184e-c4ec-4c41-b715-45ce527b1841\") " pod="calico-system/calico-typha-b4b5fb4f8-ph8jk" Sep 6 09:56:52.656000 kubelet[2710]: E0906 09:56:52.655776 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:52.658035 containerd[1568]: time="2025-09-06T09:56:52.657971506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b4b5fb4f8-ph8jk,Uid:1cf4184e-c4ec-4c41-b715-45ce527b1841,Namespace:calico-system,Attempt:0,}" Sep 6 09:56:52.664091 systemd[1]: Created slice kubepods-besteffort-poda3b2506f_1b7b_4921_91b1_f1f5aff3c8ed.slice - libcontainer container kubepods-besteffort-poda3b2506f_1b7b_4921_91b1_f1f5aff3c8ed.slice. 
Sep 6 09:56:52.706328 containerd[1568]: time="2025-09-06T09:56:52.706276455Z" level=info msg="connecting to shim 8fa4430c549a67fe9557be116f4239bfd651b4ad2ea422de4b517a2a17002b06" address="unix:///run/containerd/s/16738e33b171e9f06d2778ddf75c45f05ba0419ca2c4838a8a436ccf5addde83" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:56:52.733353 systemd[1]: Started cri-containerd-8fa4430c549a67fe9557be116f4239bfd651b4ad2ea422de4b517a2a17002b06.scope - libcontainer container 8fa4430c549a67fe9557be116f4239bfd651b4ad2ea422de4b517a2a17002b06. Sep 6 09:56:52.773192 kubelet[2710]: I0906 09:56:52.773136 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-node-certs\") pod \"calico-node-n5zz9\" (UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.773192 kubelet[2710]: I0906 09:56:52.773196 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-flexvol-driver-host\") pod \"calico-node-n5zz9\" (UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.773450 kubelet[2710]: I0906 09:56:52.773216 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-tigera-ca-bundle\") pod \"calico-node-n5zz9\" (UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.773450 kubelet[2710]: I0906 09:56:52.773229 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-var-lib-calico\") pod \"calico-node-n5zz9\" 
(UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.773450 kubelet[2710]: I0906 09:56:52.773243 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-cni-net-dir\") pod \"calico-node-n5zz9\" (UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.773450 kubelet[2710]: I0906 09:56:52.773256 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-policysync\") pod \"calico-node-n5zz9\" (UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.773450 kubelet[2710]: I0906 09:56:52.773273 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-cni-bin-dir\") pod \"calico-node-n5zz9\" (UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.773572 kubelet[2710]: I0906 09:56:52.773291 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-lib-modules\") pod \"calico-node-n5zz9\" (UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.773572 kubelet[2710]: I0906 09:56:52.773313 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz5ff\" (UniqueName: \"kubernetes.io/projected/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-kube-api-access-sz5ff\") pod \"calico-node-n5zz9\" (UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " 
pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.773572 kubelet[2710]: I0906 09:56:52.773326 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-xtables-lock\") pod \"calico-node-n5zz9\" (UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.773572 kubelet[2710]: I0906 09:56:52.773343 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-cni-log-dir\") pod \"calico-node-n5zz9\" (UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.773572 kubelet[2710]: I0906 09:56:52.773358 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed-var-run-calico\") pod \"calico-node-n5zz9\" (UID: \"a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed\") " pod="calico-system/calico-node-n5zz9" Sep 6 09:56:52.790302 containerd[1568]: time="2025-09-06T09:56:52.790247423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b4b5fb4f8-ph8jk,Uid:1cf4184e-c4ec-4c41-b715-45ce527b1841,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fa4430c549a67fe9557be116f4239bfd651b4ad2ea422de4b517a2a17002b06\"" Sep 6 09:56:52.790860 kubelet[2710]: E0906 09:56:52.790815 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:52.792422 containerd[1568]: time="2025-09-06T09:56:52.792389030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 6 09:56:52.876609 kubelet[2710]: E0906 09:56:52.876435 2710 driver-call.go:262] Failed to 
unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:52.876609 kubelet[2710]: W0906 09:56:52.876577 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:52.876609 kubelet[2710]: E0906 09:56:52.876617 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:52.880460 kubelet[2710]: E0906 09:56:52.880427 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:52.880460 kubelet[2710]: W0906 09:56:52.880446 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:52.880460 kubelet[2710]: E0906 09:56:52.880466 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:52.883746 kubelet[2710]: E0906 09:56:52.883720 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:52.883746 kubelet[2710]: W0906 09:56:52.883739 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:52.883882 kubelet[2710]: E0906 09:56:52.883755 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:52.949851 kubelet[2710]: E0906 09:56:52.949528 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66kxk" podUID="706810f4-e487-43f2-bf65-01d2ffa5e4cc" Sep 6 09:56:52.968869 containerd[1568]: time="2025-09-06T09:56:52.967604208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n5zz9,Uid:a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed,Namespace:calico-system,Attempt:0,}" Sep 6 09:56:52.993768 containerd[1568]: time="2025-09-06T09:56:52.993473437Z" level=info msg="connecting to shim 491729467e6e40b2407bc74b472996dd0a0da83ddaf59c95da0d78b2d7f85815" address="unix:///run/containerd/s/abf6ed4d8e6eba2d1c9c6514182aa16aaf11fd493c3eb2cb8f18efb08c71fdc9" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:56:53.026023 systemd[1]: Started cri-containerd-491729467e6e40b2407bc74b472996dd0a0da83ddaf59c95da0d78b2d7f85815.scope - libcontainer container 491729467e6e40b2407bc74b472996dd0a0da83ddaf59c95da0d78b2d7f85815. Sep 6 09:56:53.048893 kubelet[2710]: E0906 09:56:53.048846 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.048893 kubelet[2710]: W0906 09:56:53.048875 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.049042 kubelet[2710]: E0906 09:56:53.048901 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.050153 kubelet[2710]: E0906 09:56:53.050121 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.050153 kubelet[2710]: W0906 09:56:53.050137 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.050153 kubelet[2710]: E0906 09:56:53.050148 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.050395 kubelet[2710]: E0906 09:56:53.050379 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.050395 kubelet[2710]: W0906 09:56:53.050388 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.050395 kubelet[2710]: E0906 09:56:53.050397 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.051224 kubelet[2710]: E0906 09:56:53.051148 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.051224 kubelet[2710]: W0906 09:56:53.051162 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.051224 kubelet[2710]: E0906 09:56:53.051171 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.051410 kubelet[2710]: E0906 09:56:53.051385 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.051410 kubelet[2710]: W0906 09:56:53.051396 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.051410 kubelet[2710]: E0906 09:56:53.051405 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.051624 kubelet[2710]: E0906 09:56:53.051608 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.051624 kubelet[2710]: W0906 09:56:53.051619 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.051624 kubelet[2710]: E0906 09:56:53.051627 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.051860 kubelet[2710]: E0906 09:56:53.051843 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.051860 kubelet[2710]: W0906 09:56:53.051855 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.051958 kubelet[2710]: E0906 09:56:53.051867 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.052091 kubelet[2710]: E0906 09:56:53.052055 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.052091 kubelet[2710]: W0906 09:56:53.052066 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.052091 kubelet[2710]: E0906 09:56:53.052074 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.052275 kubelet[2710]: E0906 09:56:53.052259 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.052275 kubelet[2710]: W0906 09:56:53.052270 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.052361 kubelet[2710]: E0906 09:56:53.052279 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.052459 kubelet[2710]: E0906 09:56:53.052441 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.052459 kubelet[2710]: W0906 09:56:53.052451 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.052459 kubelet[2710]: E0906 09:56:53.052459 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.052640 kubelet[2710]: E0906 09:56:53.052616 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.052640 kubelet[2710]: W0906 09:56:53.052628 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.052710 kubelet[2710]: E0906 09:56:53.052660 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.052926 kubelet[2710]: E0906 09:56:53.052910 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.052926 kubelet[2710]: W0906 09:56:53.052922 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.052982 kubelet[2710]: E0906 09:56:53.052932 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.053179 kubelet[2710]: E0906 09:56:53.053161 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.053179 kubelet[2710]: W0906 09:56:53.053174 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.053297 kubelet[2710]: E0906 09:56:53.053182 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.053359 kubelet[2710]: E0906 09:56:53.053343 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.053359 kubelet[2710]: W0906 09:56:53.053354 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.053445 kubelet[2710]: E0906 09:56:53.053362 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.053574 kubelet[2710]: E0906 09:56:53.053537 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.053574 kubelet[2710]: W0906 09:56:53.053548 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.053574 kubelet[2710]: E0906 09:56:53.053556 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.053736 kubelet[2710]: E0906 09:56:53.053722 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.053736 kubelet[2710]: W0906 09:56:53.053734 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.053794 kubelet[2710]: E0906 09:56:53.053744 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.054359 kubelet[2710]: E0906 09:56:53.054341 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.054359 kubelet[2710]: W0906 09:56:53.054354 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.054445 kubelet[2710]: E0906 09:56:53.054378 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.054624 kubelet[2710]: E0906 09:56:53.054608 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.054624 kubelet[2710]: W0906 09:56:53.054619 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.054710 kubelet[2710]: E0906 09:56:53.054629 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.054884 kubelet[2710]: E0906 09:56:53.054863 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.054884 kubelet[2710]: W0906 09:56:53.054877 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.055060 kubelet[2710]: E0906 09:56:53.054886 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.055229 kubelet[2710]: E0906 09:56:53.055206 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.055229 kubelet[2710]: W0906 09:56:53.055225 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.055442 kubelet[2710]: E0906 09:56:53.055270 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.067691 containerd[1568]: time="2025-09-06T09:56:53.067651668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n5zz9,Uid:a3b2506f-1b7b-4921-91b1-f1f5aff3c8ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"491729467e6e40b2407bc74b472996dd0a0da83ddaf59c95da0d78b2d7f85815\"" Sep 6 09:56:53.076138 kubelet[2710]: E0906 09:56:53.076100 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.076138 kubelet[2710]: W0906 09:56:53.076130 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.076247 kubelet[2710]: E0906 09:56:53.076151 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.076247 kubelet[2710]: I0906 09:56:53.076180 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/706810f4-e487-43f2-bf65-01d2ffa5e4cc-socket-dir\") pod \"csi-node-driver-66kxk\" (UID: \"706810f4-e487-43f2-bf65-01d2ffa5e4cc\") " pod="calico-system/csi-node-driver-66kxk" Sep 6 09:56:53.076586 kubelet[2710]: E0906 09:56:53.076538 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.076586 kubelet[2710]: W0906 09:56:53.076553 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.076878 kubelet[2710]: E0906 09:56:53.076587 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.076878 kubelet[2710]: I0906 09:56:53.076605 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/706810f4-e487-43f2-bf65-01d2ffa5e4cc-kubelet-dir\") pod \"csi-node-driver-66kxk\" (UID: \"706810f4-e487-43f2-bf65-01d2ffa5e4cc\") " pod="calico-system/csi-node-driver-66kxk" Sep 6 09:56:53.077364 kubelet[2710]: E0906 09:56:53.076927 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.077364 kubelet[2710]: W0906 09:56:53.076944 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.077364 kubelet[2710]: E0906 09:56:53.076985 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.077364 kubelet[2710]: I0906 09:56:53.077233 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8v6z\" (UniqueName: \"kubernetes.io/projected/706810f4-e487-43f2-bf65-01d2ffa5e4cc-kube-api-access-m8v6z\") pod \"csi-node-driver-66kxk\" (UID: \"706810f4-e487-43f2-bf65-01d2ffa5e4cc\") " pod="calico-system/csi-node-driver-66kxk" Sep 6 09:56:53.077675 kubelet[2710]: E0906 09:56:53.077656 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.077675 kubelet[2710]: W0906 09:56:53.077671 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.077743 kubelet[2710]: E0906 09:56:53.077695 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.078004 kubelet[2710]: E0906 09:56:53.077986 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.078004 kubelet[2710]: W0906 09:56:53.077999 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.078471 kubelet[2710]: E0906 09:56:53.078427 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.078578 kubelet[2710]: E0906 09:56:53.078559 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.078578 kubelet[2710]: W0906 09:56:53.078573 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.078661 kubelet[2710]: E0906 09:56:53.078641 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.078874 kubelet[2710]: E0906 09:56:53.078857 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.078874 kubelet[2710]: W0906 09:56:53.078869 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.078945 kubelet[2710]: E0906 09:56:53.078920 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.079067 kubelet[2710]: I0906 09:56:53.078978 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/706810f4-e487-43f2-bf65-01d2ffa5e4cc-registration-dir\") pod \"csi-node-driver-66kxk\" (UID: \"706810f4-e487-43f2-bf65-01d2ffa5e4cc\") " pod="calico-system/csi-node-driver-66kxk" Sep 6 09:56:53.079242 kubelet[2710]: E0906 09:56:53.079225 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.079294 kubelet[2710]: W0906 09:56:53.079248 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.079317 kubelet[2710]: E0906 09:56:53.079299 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.079506 kubelet[2710]: E0906 09:56:53.079485 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.079541 kubelet[2710]: W0906 09:56:53.079509 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.079541 kubelet[2710]: E0906 09:56:53.079518 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.079828 kubelet[2710]: E0906 09:56:53.079784 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.079861 kubelet[2710]: W0906 09:56:53.079813 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.079886 kubelet[2710]: E0906 09:56:53.079864 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.080384 kubelet[2710]: E0906 09:56:53.080144 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.080384 kubelet[2710]: W0906 09:56:53.080158 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.080384 kubelet[2710]: E0906 09:56:53.080171 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.080384 kubelet[2710]: E0906 09:56:53.080350 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.080384 kubelet[2710]: W0906 09:56:53.080357 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.080384 kubelet[2710]: E0906 09:56:53.080365 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.081076 kubelet[2710]: E0906 09:56:53.081057 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.081076 kubelet[2710]: W0906 09:56:53.081071 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.081148 kubelet[2710]: E0906 09:56:53.081082 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.081148 kubelet[2710]: I0906 09:56:53.081099 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/706810f4-e487-43f2-bf65-01d2ffa5e4cc-varrun\") pod \"csi-node-driver-66kxk\" (UID: \"706810f4-e487-43f2-bf65-01d2ffa5e4cc\") " pod="calico-system/csi-node-driver-66kxk" Sep 6 09:56:53.082059 kubelet[2710]: E0906 09:56:53.082034 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.082059 kubelet[2710]: W0906 09:56:53.082051 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.082059 kubelet[2710]: E0906 09:56:53.082062 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.083901 kubelet[2710]: E0906 09:56:53.083881 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.083901 kubelet[2710]: W0906 09:56:53.083895 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.083901 kubelet[2710]: E0906 09:56:53.083905 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.182518 kubelet[2710]: E0906 09:56:53.182472 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.182518 kubelet[2710]: W0906 09:56:53.182508 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.182712 kubelet[2710]: E0906 09:56:53.182544 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.182961 kubelet[2710]: E0906 09:56:53.182943 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.182961 kubelet[2710]: W0906 09:56:53.182957 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.183034 kubelet[2710]: E0906 09:56:53.182974 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.183232 kubelet[2710]: E0906 09:56:53.183214 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.183232 kubelet[2710]: W0906 09:56:53.183226 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.183290 kubelet[2710]: E0906 09:56:53.183253 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.183528 kubelet[2710]: E0906 09:56:53.183510 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.183528 kubelet[2710]: W0906 09:56:53.183521 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.183591 kubelet[2710]: E0906 09:56:53.183534 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.184228 kubelet[2710]: E0906 09:56:53.184181 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.184228 kubelet[2710]: W0906 09:56:53.184212 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.184425 kubelet[2710]: E0906 09:56:53.184261 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.184471 kubelet[2710]: E0906 09:56:53.184455 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.184471 kubelet[2710]: W0906 09:56:53.184468 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.184637 kubelet[2710]: E0906 09:56:53.184602 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.184637 kubelet[2710]: E0906 09:56:53.184652 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.184858 kubelet[2710]: W0906 09:56:53.184658 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.184858 kubelet[2710]: E0906 09:56:53.184714 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.184858 kubelet[2710]: E0906 09:56:53.184799 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.184858 kubelet[2710]: W0906 09:56:53.184806 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.184994 kubelet[2710]: E0906 09:56:53.184897 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 09:56:53.185061 kubelet[2710]: E0906 09:56:53.185026 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.185061 kubelet[2710]: W0906 09:56:53.185044 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.185139 kubelet[2710]: E0906 09:56:53.185085 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 09:56:53.185345 kubelet[2710]: E0906 09:56:53.185324 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 09:56:53.185345 kubelet[2710]: W0906 09:56:53.185337 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 09:56:53.185424 kubelet[2710]: E0906 09:56:53.185376 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 6 09:56:53.185524 kubelet[2710]: E0906 09:56:53.185508 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.185524 kubelet[2710]: W0906 09:56:53.185519 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.185595 kubelet[2710]: E0906 09:56:53.185531 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.185761 kubelet[2710]: E0906 09:56:53.185730 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.185761 kubelet[2710]: W0906 09:56:53.185741 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.185761 kubelet[2710]: E0906 09:56:53.185753 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.186053 kubelet[2710]: E0906 09:56:53.186022 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.186053 kubelet[2710]: W0906 09:56:53.186039 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.186053 kubelet[2710]: E0906 09:56:53.186062 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.186362 kubelet[2710]: E0906 09:56:53.186211 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.186362 kubelet[2710]: W0906 09:56:53.186218 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.186446 kubelet[2710]: E0906 09:56:53.186422 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.186495 kubelet[2710]: E0906 09:56:53.186420 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.186495 kubelet[2710]: W0906 09:56:53.186477 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.186597 kubelet[2710]: E0906 09:56:53.186576 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.186773 kubelet[2710]: E0906 09:56:53.186758 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.186773 kubelet[2710]: W0906 09:56:53.186768 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.186893 kubelet[2710]: E0906 09:56:53.186866 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.186955 kubelet[2710]: E0906 09:56:53.186939 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.186955 kubelet[2710]: W0906 09:56:53.186952 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.187141 kubelet[2710]: E0906 09:56:53.187114 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.187239 kubelet[2710]: E0906 09:56:53.187189 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.187239 kubelet[2710]: W0906 09:56:53.187196 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.187239 kubelet[2710]: E0906 09:56:53.187213 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.187605 kubelet[2710]: E0906 09:56:53.187388 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.187605 kubelet[2710]: W0906 09:56:53.187400 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.187605 kubelet[2710]: E0906 09:56:53.187426 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.187721 kubelet[2710]: E0906 09:56:53.187695 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.187721 kubelet[2710]: W0906 09:56:53.187703 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.187721 kubelet[2710]: E0906 09:56:53.187715 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.187987 kubelet[2710]: E0906 09:56:53.187930 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.187987 kubelet[2710]: W0906 09:56:53.187944 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.187987 kubelet[2710]: E0906 09:56:53.187959 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.188189 kubelet[2710]: E0906 09:56:53.188112 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.188189 kubelet[2710]: W0906 09:56:53.188121 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.188189 kubelet[2710]: E0906 09:56:53.188132 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.188415 kubelet[2710]: E0906 09:56:53.188320 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.188415 kubelet[2710]: W0906 09:56:53.188327 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.188530 kubelet[2710]: E0906 09:56:53.188457 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.188530 kubelet[2710]: W0906 09:56:53.188464 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.188530 kubelet[2710]: E0906 09:56:53.188472 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.188917 kubelet[2710]: E0906 09:56:53.188895 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.188975 kubelet[2710]: W0906 09:56:53.188907 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.188975 kubelet[2710]: E0906 09:56:53.188942 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.189937 kubelet[2710]: E0906 09:56:53.189893 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:53.194585 kubelet[2710]: E0906 09:56:53.194557 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:53.194644 kubelet[2710]: W0906 09:56:53.194583 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:53.194644 kubelet[2710]: E0906 09:56:53.194610 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:54.109983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3970210752.mount: Deactivated successfully.
Sep 6 09:56:54.447805 kubelet[2710]: E0906 09:56:54.447650 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66kxk" podUID="706810f4-e487-43f2-bf65-01d2ffa5e4cc"
Sep 6 09:56:55.771800 containerd[1568]: time="2025-09-06T09:56:55.771745387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:55.772545 containerd[1568]: time="2025-09-06T09:56:55.772526619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 6 09:56:55.773657 containerd[1568]: time="2025-09-06T09:56:55.773611930Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:55.775653 containerd[1568]: time="2025-09-06T09:56:55.775601075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:56:55.776307 containerd[1568]: time="2025-09-06T09:56:55.776247825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.983821587s"
Sep 6 09:56:55.776307 containerd[1568]: time="2025-09-06T09:56:55.776296828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 6 09:56:55.778359 containerd[1568]: time="2025-09-06T09:56:55.778307733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 6 09:56:55.787790 containerd[1568]: time="2025-09-06T09:56:55.787750025Z" level=info msg="CreateContainer within sandbox \"8fa4430c549a67fe9557be116f4239bfd651b4ad2ea422de4b517a2a17002b06\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 6 09:56:55.795760 containerd[1568]: time="2025-09-06T09:56:55.795709800Z" level=info msg="Container 28c89271c453b81d3326110bc558784bd90b1ec37284047224fa86f2615cd18e: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:56:55.803203 containerd[1568]: time="2025-09-06T09:56:55.803161294Z" level=info msg="CreateContainer within sandbox \"8fa4430c549a67fe9557be116f4239bfd651b4ad2ea422de4b517a2a17002b06\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"28c89271c453b81d3326110bc558784bd90b1ec37284047224fa86f2615cd18e\""
Sep 6 09:56:55.803630 containerd[1568]: time="2025-09-06T09:56:55.803605447Z" level=info msg="StartContainer for \"28c89271c453b81d3326110bc558784bd90b1ec37284047224fa86f2615cd18e\""
Sep 6 09:56:55.804509 containerd[1568]: time="2025-09-06T09:56:55.804486549Z" level=info msg="connecting to shim 28c89271c453b81d3326110bc558784bd90b1ec37284047224fa86f2615cd18e" address="unix:///run/containerd/s/16738e33b171e9f06d2778ddf75c45f05ba0419ca2c4838a8a436ccf5addde83" protocol=ttrpc version=3
Sep 6 09:56:55.827002 systemd[1]: Started cri-containerd-28c89271c453b81d3326110bc558784bd90b1ec37284047224fa86f2615cd18e.scope - libcontainer container 28c89271c453b81d3326110bc558784bd90b1ec37284047224fa86f2615cd18e.
Sep 6 09:56:55.878383 containerd[1568]: time="2025-09-06T09:56:55.878327724Z" level=info msg="StartContainer for \"28c89271c453b81d3326110bc558784bd90b1ec37284047224fa86f2615cd18e\" returns successfully"
Sep 6 09:56:56.447781 kubelet[2710]: E0906 09:56:56.447696 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66kxk" podUID="706810f4-e487-43f2-bf65-01d2ffa5e4cc"
Sep 6 09:56:56.532772 kubelet[2710]: E0906 09:56:56.532734 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:56:56.574132 kubelet[2710]: E0906 09:56:56.574091 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.574132 kubelet[2710]: W0906 09:56:56.574121 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.574325 kubelet[2710]: E0906 09:56:56.574151 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.574463 kubelet[2710]: E0906 09:56:56.574437 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.574463 kubelet[2710]: W0906 09:56:56.574450 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.574509 kubelet[2710]: E0906 09:56:56.574461 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.574918 kubelet[2710]: E0906 09:56:56.574887 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.574918 kubelet[2710]: W0906 09:56:56.574901 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.574918 kubelet[2710]: E0906 09:56:56.574912 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.575167 kubelet[2710]: E0906 09:56:56.575148 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.575167 kubelet[2710]: W0906 09:56:56.575161 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.575216 kubelet[2710]: E0906 09:56:56.575171 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.575384 kubelet[2710]: E0906 09:56:56.575364 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.575384 kubelet[2710]: W0906 09:56:56.575376 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.575440 kubelet[2710]: E0906 09:56:56.575386 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.575660 kubelet[2710]: E0906 09:56:56.575625 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.575714 kubelet[2710]: W0906 09:56:56.575656 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.575714 kubelet[2710]: E0906 09:56:56.575688 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.575956 kubelet[2710]: E0906 09:56:56.575941 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.575956 kubelet[2710]: W0906 09:56:56.575951 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.576027 kubelet[2710]: E0906 09:56:56.575960 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.576149 kubelet[2710]: E0906 09:56:56.576133 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.576149 kubelet[2710]: W0906 09:56:56.576143 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.576212 kubelet[2710]: E0906 09:56:56.576152 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.576415 kubelet[2710]: E0906 09:56:56.576400 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.576442 kubelet[2710]: W0906 09:56:56.576420 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.576442 kubelet[2710]: E0906 09:56:56.576429 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.576629 kubelet[2710]: E0906 09:56:56.576613 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.576629 kubelet[2710]: W0906 09:56:56.576623 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.576714 kubelet[2710]: E0906 09:56:56.576631 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.576792 kubelet[2710]: E0906 09:56:56.576778 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.576792 kubelet[2710]: W0906 09:56:56.576787 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.576867 kubelet[2710]: E0906 09:56:56.576795 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.577014 kubelet[2710]: E0906 09:56:56.576997 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.577014 kubelet[2710]: W0906 09:56:56.577009 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.577074 kubelet[2710]: E0906 09:56:56.577018 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.577877 kubelet[2710]: E0906 09:56:56.577857 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.577877 kubelet[2710]: W0906 09:56:56.577873 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.578044 kubelet[2710]: E0906 09:56:56.577884 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.578925 kubelet[2710]: E0906 09:56:56.578901 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.578925 kubelet[2710]: W0906 09:56:56.578913 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.578925 kubelet[2710]: E0906 09:56:56.578923 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.579130 kubelet[2710]: E0906 09:56:56.579112 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.579130 kubelet[2710]: W0906 09:56:56.579122 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.579130 kubelet[2710]: E0906 09:56:56.579132 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.613290 kubelet[2710]: E0906 09:56:56.613253 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.613290 kubelet[2710]: W0906 09:56:56.613277 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.613290 kubelet[2710]: E0906 09:56:56.613301 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.613631 kubelet[2710]: E0906 09:56:56.613580 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.613667 kubelet[2710]: W0906 09:56:56.613630 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.613693 kubelet[2710]: E0906 09:56:56.613672 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.614013 kubelet[2710]: E0906 09:56:56.613989 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.614013 kubelet[2710]: W0906 09:56:56.614001 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.614082 kubelet[2710]: E0906 09:56:56.614016 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.614316 kubelet[2710]: E0906 09:56:56.614282 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.614316 kubelet[2710]: W0906 09:56:56.614305 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.614359 kubelet[2710]: E0906 09:56:56.614331 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.614546 kubelet[2710]: E0906 09:56:56.614530 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.614546 kubelet[2710]: W0906 09:56:56.614540 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.614616 kubelet[2710]: E0906 09:56:56.614554 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.614750 kubelet[2710]: E0906 09:56:56.614737 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.614750 kubelet[2710]: W0906 09:56:56.614746 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.614800 kubelet[2710]: E0906 09:56:56.614758 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.614967 kubelet[2710]: E0906 09:56:56.614952 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.614996 kubelet[2710]: W0906 09:56:56.614965 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.614996 kubelet[2710]: E0906 09:56:56.614981 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.615181 kubelet[2710]: E0906 09:56:56.615142 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.615181 kubelet[2710]: W0906 09:56:56.615161 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.615181 kubelet[2710]: E0906 09:56:56.615190 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.615434 kubelet[2710]: E0906 09:56:56.615321 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.615434 kubelet[2710]: W0906 09:56:56.615328 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.615434 kubelet[2710]: E0906 09:56:56.615352 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.615547 kubelet[2710]: E0906 09:56:56.615518 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.615547 kubelet[2710]: W0906 09:56:56.615527 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.615547 kubelet[2710]: E0906 09:56:56.615541 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.615736 kubelet[2710]: E0906 09:56:56.615717 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.615736 kubelet[2710]: W0906 09:56:56.615729 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.615806 kubelet[2710]: E0906 09:56:56.615747 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.615963 kubelet[2710]: E0906 09:56:56.615947 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.615963 kubelet[2710]: W0906 09:56:56.615957 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.616019 kubelet[2710]: E0906 09:56:56.615971 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.616180 kubelet[2710]: E0906 09:56:56.616162 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.616180 kubelet[2710]: W0906 09:56:56.616176 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.616240 kubelet[2710]: E0906 09:56:56.616190 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.616382 kubelet[2710]: E0906 09:56:56.616366 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.616382 kubelet[2710]: W0906 09:56:56.616377 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.616435 kubelet[2710]: E0906 09:56:56.616392 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.616586 kubelet[2710]: E0906 09:56:56.616568 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.616586 kubelet[2710]: W0906 09:56:56.616581 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.616658 kubelet[2710]: E0906 09:56:56.616608 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.616812 kubelet[2710]: E0906 09:56:56.616797 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.616812 kubelet[2710]: W0906 09:56:56.616807 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.616877 kubelet[2710]: E0906 09:56:56.616833 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.617025 kubelet[2710]: E0906 09:56:56.617009 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.617025 kubelet[2710]: W0906 09:56:56.617019 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.617081 kubelet[2710]: E0906 09:56:56.617029 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 09:56:56.617603 kubelet[2710]: E0906 09:56:56.617552 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 09:56:56.617603 kubelet[2710]: W0906 09:56:56.617586 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 09:56:56.617680 kubelet[2710]: E0906 09:56:56.617626 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 6 09:56:57.165394 containerd[1568]: time="2025-09-06T09:56:57.165334955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:56:57.166173 containerd[1568]: time="2025-09-06T09:56:57.166114974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 6 09:56:57.167240 containerd[1568]: time="2025-09-06T09:56:57.167201517Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:56:57.169427 containerd[1568]: time="2025-09-06T09:56:57.169373552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:56:57.170110 containerd[1568]: time="2025-09-06T09:56:57.170071155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.391729134s" Sep 6 09:56:57.170158 containerd[1568]: time="2025-09-06T09:56:57.170116608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 6 09:56:57.172675 containerd[1568]: time="2025-09-06T09:56:57.172620224Z" level=info msg="CreateContainer within sandbox \"491729467e6e40b2407bc74b472996dd0a0da83ddaf59c95da0d78b2d7f85815\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 6 09:56:57.182615 containerd[1568]: time="2025-09-06T09:56:57.182568002Z" level=info msg="Container 4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:56:57.191041 containerd[1568]: time="2025-09-06T09:56:57.190989088Z" level=info msg="CreateContainer within sandbox \"491729467e6e40b2407bc74b472996dd0a0da83ddaf59c95da0d78b2d7f85815\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0\"" Sep 6 09:56:57.191640 containerd[1568]: time="2025-09-06T09:56:57.191601997Z" level=info msg="StartContainer for \"4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0\"" Sep 6 09:56:57.193026 containerd[1568]: time="2025-09-06T09:56:57.192998665Z" level=info msg="connecting to shim 4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0" address="unix:///run/containerd/s/abf6ed4d8e6eba2d1c9c6514182aa16aaf11fd493c3eb2cb8f18efb08c71fdc9" protocol=ttrpc version=3 Sep 6 09:56:57.217126 systemd[1]: Started cri-containerd-4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0.scope - libcontainer container 4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0. Sep 6 09:56:57.270305 systemd[1]: cri-containerd-4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0.scope: Deactivated successfully. 
Sep 6 09:56:57.272449 containerd[1568]: time="2025-09-06T09:56:57.272390592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0\" id:\"4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0\" pid:3454 exited_at:{seconds:1757152617 nanos:271859147}" Sep 6 09:56:57.298705 containerd[1568]: time="2025-09-06T09:56:57.298639915Z" level=info msg="received exit event container_id:\"4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0\" id:\"4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0\" pid:3454 exited_at:{seconds:1757152617 nanos:271859147}" Sep 6 09:56:57.300585 containerd[1568]: time="2025-09-06T09:56:57.300435397Z" level=info msg="StartContainer for \"4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0\" returns successfully" Sep 6 09:56:57.322070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b05df8e1aafa8acbd193ba32327385c3c9e04d8fe4aa2e05b10e8fe6754e9c0-rootfs.mount: Deactivated successfully. 
Sep 6 09:56:57.535914 kubelet[2710]: I0906 09:56:57.535795 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 09:56:57.536915 kubelet[2710]: E0906 09:56:57.536578 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:56:57.548615 kubelet[2710]: I0906 09:56:57.548333 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b4b5fb4f8-ph8jk" podStartSLOduration=2.563103753 podStartE2EDuration="5.548313712s" podCreationTimestamp="2025-09-06 09:56:52 +0000 UTC" firstStartedPulling="2025-09-06 09:56:52.792071478 +0000 UTC m=+18.519590178" lastFinishedPulling="2025-09-06 09:56:55.777281427 +0000 UTC m=+21.504800137" observedRunningTime="2025-09-06 09:56:56.543303205 +0000 UTC m=+22.270821905" watchObservedRunningTime="2025-09-06 09:56:57.548313712 +0000 UTC m=+23.275832412" Sep 6 09:56:58.447099 kubelet[2710]: E0906 09:56:58.447022 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66kxk" podUID="706810f4-e487-43f2-bf65-01d2ffa5e4cc" Sep 6 09:56:58.540402 containerd[1568]: time="2025-09-06T09:56:58.540344189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 6 09:57:00.450623 kubelet[2710]: E0906 09:57:00.450577 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66kxk" podUID="706810f4-e487-43f2-bf65-01d2ffa5e4cc" Sep 6 09:57:01.199605 containerd[1568]: time="2025-09-06T09:57:01.199546855Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:01.200984 containerd[1568]: time="2025-09-06T09:57:01.200921638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 6 09:57:01.202140 containerd[1568]: time="2025-09-06T09:57:01.202095088Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:01.204879 containerd[1568]: time="2025-09-06T09:57:01.204851730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:01.205396 containerd[1568]: time="2025-09-06T09:57:01.205360873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.664971121s" Sep 6 09:57:01.205396 containerd[1568]: time="2025-09-06T09:57:01.205386649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 6 09:57:01.208390 containerd[1568]: time="2025-09-06T09:57:01.208357954Z" level=info msg="CreateContainer within sandbox \"491729467e6e40b2407bc74b472996dd0a0da83ddaf59c95da0d78b2d7f85815\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 6 09:57:01.220081 containerd[1568]: time="2025-09-06T09:57:01.220022799Z" level=info msg="Container 41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875: CDI devices from CRI Config.CDIDevices: 
[]" Sep 6 09:57:01.231712 containerd[1568]: time="2025-09-06T09:57:01.231659633Z" level=info msg="CreateContainer within sandbox \"491729467e6e40b2407bc74b472996dd0a0da83ddaf59c95da0d78b2d7f85815\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875\"" Sep 6 09:57:01.232320 containerd[1568]: time="2025-09-06T09:57:01.232294022Z" level=info msg="StartContainer for \"41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875\"" Sep 6 09:57:01.233954 containerd[1568]: time="2025-09-06T09:57:01.233927806Z" level=info msg="connecting to shim 41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875" address="unix:///run/containerd/s/abf6ed4d8e6eba2d1c9c6514182aa16aaf11fd493c3eb2cb8f18efb08c71fdc9" protocol=ttrpc version=3 Sep 6 09:57:01.258009 systemd[1]: Started cri-containerd-41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875.scope - libcontainer container 41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875. Sep 6 09:57:01.311495 containerd[1568]: time="2025-09-06T09:57:01.311442288Z" level=info msg="StartContainer for \"41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875\" returns successfully" Sep 6 09:57:02.156577 systemd[1]: cri-containerd-41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875.scope: Deactivated successfully. 
Sep 6 09:57:02.158375 containerd[1568]: time="2025-09-06T09:57:02.156745519Z" level=info msg="received exit event container_id:\"41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875\" id:\"41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875\" pid:3515 exited_at:{seconds:1757152622 nanos:156505301}" Sep 6 09:57:02.158375 containerd[1568]: time="2025-09-06T09:57:02.156847453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875\" id:\"41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875\" pid:3515 exited_at:{seconds:1757152622 nanos:156505301}" Sep 6 09:57:02.156951 systemd[1]: cri-containerd-41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875.scope: Consumed 669ms CPU time, 176.9M memory peak, 3.7M read from disk, 171.3M written to disk. Sep 6 09:57:02.164365 containerd[1568]: time="2025-09-06T09:57:02.164311232Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Sep 6 09:57:02.183005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41479dbfc6e27a98366f080c0de959d9bec9752d57323de84a94e4939aa3c875-rootfs.mount: Deactivated successfully. Sep 6 09:57:02.216714 kubelet[2710]: I0906 09:57:02.216671 2710 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 6 09:57:02.342371 systemd[1]: Created slice kubepods-burstable-podf073e93d_308a_419d_9aef_53ec14939c7f.slice - libcontainer container kubepods-burstable-podf073e93d_308a_419d_9aef_53ec14939c7f.slice. 
Sep 6 09:57:02.350249 systemd[1]: Created slice kubepods-burstable-pod8bdca78a_1afd_4841_ba10_9be11ff08a73.slice - libcontainer container kubepods-burstable-pod8bdca78a_1afd_4841_ba10_9be11ff08a73.slice. Sep 6 09:57:02.355404 systemd[1]: Created slice kubepods-besteffort-podb52c2275_ca01_47e2_b309_9c143dbd379f.slice - libcontainer container kubepods-besteffort-podb52c2275_ca01_47e2_b309_9c143dbd379f.slice. Sep 6 09:57:02.360909 systemd[1]: Created slice kubepods-besteffort-pod2ab86289_9337_435a_b297_d07fa563f8bc.slice - libcontainer container kubepods-besteffort-pod2ab86289_9337_435a_b297_d07fa563f8bc.slice. Sep 6 09:57:02.365480 systemd[1]: Created slice kubepods-besteffort-pod079cc4e0_2aa0_41d7_9081_c51ef12b400b.slice - libcontainer container kubepods-besteffort-pod079cc4e0_2aa0_41d7_9081_c51ef12b400b.slice. Sep 6 09:57:02.371528 systemd[1]: Created slice kubepods-besteffort-pod3302899f_43a5_4706_984b_0ceb054c80c1.slice - libcontainer container kubepods-besteffort-pod3302899f_43a5_4706_984b_0ceb054c80c1.slice. Sep 6 09:57:02.377168 systemd[1]: Created slice kubepods-besteffort-podfdb51cd1_c225_4687_92fd_8f11d468f91a.slice - libcontainer container kubepods-besteffort-podfdb51cd1_c225_4687_92fd_8f11d468f91a.slice. 
Sep 6 09:57:02.452443 kubelet[2710]: I0906 09:57:02.452271 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/079cc4e0-2aa0-41d7-9081-c51ef12b400b-config\") pod \"goldmane-54d579b49d-tstlj\" (UID: \"079cc4e0-2aa0-41d7-9081-c51ef12b400b\") " pod="calico-system/goldmane-54d579b49d-tstlj" Sep 6 09:57:02.452443 kubelet[2710]: I0906 09:57:02.452318 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bdca78a-1afd-4841-ba10-9be11ff08a73-config-volume\") pod \"coredns-668d6bf9bc-s247t\" (UID: \"8bdca78a-1afd-4841-ba10-9be11ff08a73\") " pod="kube-system/coredns-668d6bf9bc-s247t" Sep 6 09:57:02.452443 kubelet[2710]: I0906 09:57:02.452340 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhk5z\" (UniqueName: \"kubernetes.io/projected/8bdca78a-1afd-4841-ba10-9be11ff08a73-kube-api-access-lhk5z\") pod \"coredns-668d6bf9bc-s247t\" (UID: \"8bdca78a-1afd-4841-ba10-9be11ff08a73\") " pod="kube-system/coredns-668d6bf9bc-s247t" Sep 6 09:57:02.452443 kubelet[2710]: I0906 09:57:02.452360 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3302899f-43a5-4706-984b-0ceb054c80c1-calico-apiserver-certs\") pod \"calico-apiserver-7d4dcd664c-65v6c\" (UID: \"3302899f-43a5-4706-984b-0ceb054c80c1\") " pod="calico-apiserver/calico-apiserver-7d4dcd664c-65v6c" Sep 6 09:57:02.452443 kubelet[2710]: I0906 09:57:02.452376 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdb51cd1-c225-4687-92fd-8f11d468f91a-tigera-ca-bundle\") pod \"calico-kube-controllers-84fc7f6d85-wm7kx\" (UID: 
\"fdb51cd1-c225-4687-92fd-8f11d468f91a\") " pod="calico-system/calico-kube-controllers-84fc7f6d85-wm7kx" Sep 6 09:57:02.452724 kubelet[2710]: I0906 09:57:02.452392 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m466x\" (UniqueName: \"kubernetes.io/projected/fdb51cd1-c225-4687-92fd-8f11d468f91a-kube-api-access-m466x\") pod \"calico-kube-controllers-84fc7f6d85-wm7kx\" (UID: \"fdb51cd1-c225-4687-92fd-8f11d468f91a\") " pod="calico-system/calico-kube-controllers-84fc7f6d85-wm7kx" Sep 6 09:57:02.452724 kubelet[2710]: I0906 09:57:02.452409 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b52c2275-ca01-47e2-b309-9c143dbd379f-calico-apiserver-certs\") pod \"calico-apiserver-7d4dcd664c-mz26f\" (UID: \"b52c2275-ca01-47e2-b309-9c143dbd379f\") " pod="calico-apiserver/calico-apiserver-7d4dcd664c-mz26f" Sep 6 09:57:02.452724 kubelet[2710]: I0906 09:57:02.452424 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bqqr\" (UniqueName: \"kubernetes.io/projected/b52c2275-ca01-47e2-b309-9c143dbd379f-kube-api-access-6bqqr\") pod \"calico-apiserver-7d4dcd664c-mz26f\" (UID: \"b52c2275-ca01-47e2-b309-9c143dbd379f\") " pod="calico-apiserver/calico-apiserver-7d4dcd664c-mz26f" Sep 6 09:57:02.452919 kubelet[2710]: I0906 09:57:02.452899 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ab86289-9337-435a-b297-d07fa563f8bc-whisker-backend-key-pair\") pod \"whisker-68b9c49647-jrbbr\" (UID: \"2ab86289-9337-435a-b297-d07fa563f8bc\") " pod="calico-system/whisker-68b9c49647-jrbbr" Sep 6 09:57:02.452956 kubelet[2710]: I0906 09:57:02.452922 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4lsfs\" (UniqueName: \"kubernetes.io/projected/2ab86289-9337-435a-b297-d07fa563f8bc-kube-api-access-4lsfs\") pod \"whisker-68b9c49647-jrbbr\" (UID: \"2ab86289-9337-435a-b297-d07fa563f8bc\") " pod="calico-system/whisker-68b9c49647-jrbbr" Sep 6 09:57:02.452956 kubelet[2710]: I0906 09:57:02.452937 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q79b\" (UniqueName: \"kubernetes.io/projected/079cc4e0-2aa0-41d7-9081-c51ef12b400b-kube-api-access-4q79b\") pod \"goldmane-54d579b49d-tstlj\" (UID: \"079cc4e0-2aa0-41d7-9081-c51ef12b400b\") " pod="calico-system/goldmane-54d579b49d-tstlj" Sep 6 09:57:02.453012 kubelet[2710]: I0906 09:57:02.452960 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls64p\" (UniqueName: \"kubernetes.io/projected/f073e93d-308a-419d-9aef-53ec14939c7f-kube-api-access-ls64p\") pod \"coredns-668d6bf9bc-h85kl\" (UID: \"f073e93d-308a-419d-9aef-53ec14939c7f\") " pod="kube-system/coredns-668d6bf9bc-h85kl" Sep 6 09:57:02.453012 kubelet[2710]: I0906 09:57:02.452978 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/079cc4e0-2aa0-41d7-9081-c51ef12b400b-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-tstlj\" (UID: \"079cc4e0-2aa0-41d7-9081-c51ef12b400b\") " pod="calico-system/goldmane-54d579b49d-tstlj" Sep 6 09:57:02.453012 kubelet[2710]: I0906 09:57:02.452999 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ab86289-9337-435a-b297-d07fa563f8bc-whisker-ca-bundle\") pod \"whisker-68b9c49647-jrbbr\" (UID: \"2ab86289-9337-435a-b297-d07fa563f8bc\") " pod="calico-system/whisker-68b9c49647-jrbbr" Sep 6 09:57:02.453164 kubelet[2710]: I0906 09:57:02.453112 2710 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcl4x\" (UniqueName: \"kubernetes.io/projected/3302899f-43a5-4706-984b-0ceb054c80c1-kube-api-access-xcl4x\") pod \"calico-apiserver-7d4dcd664c-65v6c\" (UID: \"3302899f-43a5-4706-984b-0ceb054c80c1\") " pod="calico-apiserver/calico-apiserver-7d4dcd664c-65v6c" Sep 6 09:57:02.453281 kubelet[2710]: I0906 09:57:02.453212 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/079cc4e0-2aa0-41d7-9081-c51ef12b400b-goldmane-key-pair\") pod \"goldmane-54d579b49d-tstlj\" (UID: \"079cc4e0-2aa0-41d7-9081-c51ef12b400b\") " pod="calico-system/goldmane-54d579b49d-tstlj" Sep 6 09:57:02.453307 kubelet[2710]: I0906 09:57:02.453240 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f073e93d-308a-419d-9aef-53ec14939c7f-config-volume\") pod \"coredns-668d6bf9bc-h85kl\" (UID: \"f073e93d-308a-419d-9aef-53ec14939c7f\") " pod="kube-system/coredns-668d6bf9bc-h85kl" Sep 6 09:57:02.454778 systemd[1]: Created slice kubepods-besteffort-pod706810f4_e487_43f2_bf65_01d2ffa5e4cc.slice - libcontainer container kubepods-besteffort-pod706810f4_e487_43f2_bf65_01d2ffa5e4cc.slice. 
Sep 6 09:57:02.457735 containerd[1568]: time="2025-09-06T09:57:02.457400485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-66kxk,Uid:706810f4-e487-43f2-bf65-01d2ffa5e4cc,Namespace:calico-system,Attempt:0,}" Sep 6 09:57:02.526905 containerd[1568]: time="2025-09-06T09:57:02.526840589Z" level=error msg="Failed to destroy network for sandbox \"20399a3b728d189eb613a86db49e3bee099f7be66c40a09f4ca708dd6cf92ce3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.528271 containerd[1568]: time="2025-09-06T09:57:02.528229996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-66kxk,Uid:706810f4-e487-43f2-bf65-01d2ffa5e4cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20399a3b728d189eb613a86db49e3bee099f7be66c40a09f4ca708dd6cf92ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.529216 systemd[1]: run-netns-cni\x2dc38cdb63\x2dff16\x2d72eb\x2dbc06\x2dfc87ef359380.mount: Deactivated successfully. 
Sep 6 09:57:02.536226 kubelet[2710]: E0906 09:57:02.536162 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20399a3b728d189eb613a86db49e3bee099f7be66c40a09f4ca708dd6cf92ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.536377 kubelet[2710]: E0906 09:57:02.536253 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20399a3b728d189eb613a86db49e3bee099f7be66c40a09f4ca708dd6cf92ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-66kxk" Sep 6 09:57:02.536377 kubelet[2710]: E0906 09:57:02.536275 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20399a3b728d189eb613a86db49e3bee099f7be66c40a09f4ca708dd6cf92ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-66kxk" Sep 6 09:57:02.536377 kubelet[2710]: E0906 09:57:02.536330 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-66kxk_calico-system(706810f4-e487-43f2-bf65-01d2ffa5e4cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-66kxk_calico-system(706810f4-e487-43f2-bf65-01d2ffa5e4cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20399a3b728d189eb613a86db49e3bee099f7be66c40a09f4ca708dd6cf92ce3\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-66kxk" podUID="706810f4-e487-43f2-bf65-01d2ffa5e4cc" Sep 6 09:57:02.569670 containerd[1568]: time="2025-09-06T09:57:02.565688879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 6 09:57:02.646503 kubelet[2710]: E0906 09:57:02.646449 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:02.647134 containerd[1568]: time="2025-09-06T09:57:02.647070301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h85kl,Uid:f073e93d-308a-419d-9aef-53ec14939c7f,Namespace:kube-system,Attempt:0,}" Sep 6 09:57:02.652974 kubelet[2710]: E0906 09:57:02.652931 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:02.653886 containerd[1568]: time="2025-09-06T09:57:02.653836962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s247t,Uid:8bdca78a-1afd-4841-ba10-9be11ff08a73,Namespace:kube-system,Attempt:0,}" Sep 6 09:57:02.658728 containerd[1568]: time="2025-09-06T09:57:02.658673009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4dcd664c-mz26f,Uid:b52c2275-ca01-47e2-b309-9c143dbd379f,Namespace:calico-apiserver,Attempt:0,}" Sep 6 09:57:02.666141 containerd[1568]: time="2025-09-06T09:57:02.666025052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68b9c49647-jrbbr,Uid:2ab86289-9337-435a-b297-d07fa563f8bc,Namespace:calico-system,Attempt:0,}" Sep 6 09:57:02.669912 containerd[1568]: time="2025-09-06T09:57:02.669874324Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-tstlj,Uid:079cc4e0-2aa0-41d7-9081-c51ef12b400b,Namespace:calico-system,Attempt:0,}" Sep 6 09:57:02.676034 containerd[1568]: time="2025-09-06T09:57:02.675994377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4dcd664c-65v6c,Uid:3302899f-43a5-4706-984b-0ceb054c80c1,Namespace:calico-apiserver,Attempt:0,}" Sep 6 09:57:02.684685 containerd[1568]: time="2025-09-06T09:57:02.684639297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fc7f6d85-wm7kx,Uid:fdb51cd1-c225-4687-92fd-8f11d468f91a,Namespace:calico-system,Attempt:0,}" Sep 6 09:57:02.745228 containerd[1568]: time="2025-09-06T09:57:02.745089083Z" level=error msg="Failed to destroy network for sandbox \"70792d99e10ffb2667d8a60b068eb3c1361694e8b67550c12ec584167335eb70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.745833 containerd[1568]: time="2025-09-06T09:57:02.745721521Z" level=error msg="Failed to destroy network for sandbox \"4679b2c5eef7f019df49cbffc517ce14e4b0a2bdfb9c2772990980f7fd055175\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.746920 containerd[1568]: time="2025-09-06T09:57:02.746890143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h85kl,Uid:f073e93d-308a-419d-9aef-53ec14939c7f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70792d99e10ffb2667d8a60b068eb3c1361694e8b67550c12ec584167335eb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 
09:57:02.747400 kubelet[2710]: E0906 09:57:02.747275 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70792d99e10ffb2667d8a60b068eb3c1361694e8b67550c12ec584167335eb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.747400 kubelet[2710]: E0906 09:57:02.747357 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70792d99e10ffb2667d8a60b068eb3c1361694e8b67550c12ec584167335eb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h85kl" Sep 6 09:57:02.747601 kubelet[2710]: E0906 09:57:02.747381 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70792d99e10ffb2667d8a60b068eb3c1361694e8b67550c12ec584167335eb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h85kl" Sep 6 09:57:02.747848 kubelet[2710]: E0906 09:57:02.747573 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-h85kl_kube-system(f073e93d-308a-419d-9aef-53ec14939c7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-h85kl_kube-system(f073e93d-308a-419d-9aef-53ec14939c7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70792d99e10ffb2667d8a60b068eb3c1361694e8b67550c12ec584167335eb70\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h85kl" podUID="f073e93d-308a-419d-9aef-53ec14939c7f" Sep 6 09:57:02.749094 containerd[1568]: time="2025-09-06T09:57:02.748580400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s247t,Uid:8bdca78a-1afd-4841-ba10-9be11ff08a73,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4679b2c5eef7f019df49cbffc517ce14e4b0a2bdfb9c2772990980f7fd055175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.749175 kubelet[2710]: E0906 09:57:02.748733 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4679b2c5eef7f019df49cbffc517ce14e4b0a2bdfb9c2772990980f7fd055175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.749175 kubelet[2710]: E0906 09:57:02.748765 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4679b2c5eef7f019df49cbffc517ce14e4b0a2bdfb9c2772990980f7fd055175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s247t" Sep 6 09:57:02.749175 kubelet[2710]: E0906 09:57:02.748781 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4679b2c5eef7f019df49cbffc517ce14e4b0a2bdfb9c2772990980f7fd055175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s247t" Sep 6 09:57:02.749253 kubelet[2710]: E0906 09:57:02.748809 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s247t_kube-system(8bdca78a-1afd-4841-ba10-9be11ff08a73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s247t_kube-system(8bdca78a-1afd-4841-ba10-9be11ff08a73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4679b2c5eef7f019df49cbffc517ce14e4b0a2bdfb9c2772990980f7fd055175\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s247t" podUID="8bdca78a-1afd-4841-ba10-9be11ff08a73" Sep 6 09:57:02.765315 containerd[1568]: time="2025-09-06T09:57:02.765202395Z" level=error msg="Failed to destroy network for sandbox \"361cde322d62b06435bb4dd48fffd35146eddea9764f1bfbe76e31c4ac4140be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.766448 containerd[1568]: time="2025-09-06T09:57:02.766415265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4dcd664c-mz26f,Uid:b52c2275-ca01-47e2-b309-9c143dbd379f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"361cde322d62b06435bb4dd48fffd35146eddea9764f1bfbe76e31c4ac4140be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.766722 kubelet[2710]: E0906 09:57:02.766681 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"361cde322d62b06435bb4dd48fffd35146eddea9764f1bfbe76e31c4ac4140be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.766790 kubelet[2710]: E0906 09:57:02.766749 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"361cde322d62b06435bb4dd48fffd35146eddea9764f1bfbe76e31c4ac4140be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4dcd664c-mz26f" Sep 6 09:57:02.766790 kubelet[2710]: E0906 09:57:02.766769 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"361cde322d62b06435bb4dd48fffd35146eddea9764f1bfbe76e31c4ac4140be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4dcd664c-mz26f" Sep 6 09:57:02.766960 kubelet[2710]: E0906 09:57:02.766811 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4dcd664c-mz26f_calico-apiserver(b52c2275-ca01-47e2-b309-9c143dbd379f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4dcd664c-mz26f_calico-apiserver(b52c2275-ca01-47e2-b309-9c143dbd379f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"361cde322d62b06435bb4dd48fffd35146eddea9764f1bfbe76e31c4ac4140be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4dcd664c-mz26f" podUID="b52c2275-ca01-47e2-b309-9c143dbd379f" Sep 6 09:57:02.773515 containerd[1568]: time="2025-09-06T09:57:02.773450725Z" level=error msg="Failed to destroy network for sandbox \"886fb97caf218372526809e8d1c637353c0c78593e5d9f960c19c50c7cbf8c9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.775400 containerd[1568]: time="2025-09-06T09:57:02.775353027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68b9c49647-jrbbr,Uid:2ab86289-9337-435a-b297-d07fa563f8bc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"886fb97caf218372526809e8d1c637353c0c78593e5d9f960c19c50c7cbf8c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.775743 kubelet[2710]: E0906 09:57:02.775697 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886fb97caf218372526809e8d1c637353c0c78593e5d9f960c19c50c7cbf8c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.775814 kubelet[2710]: E0906 09:57:02.775803 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"886fb97caf218372526809e8d1c637353c0c78593e5d9f960c19c50c7cbf8c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68b9c49647-jrbbr" Sep 6 09:57:02.775915 kubelet[2710]: E0906 09:57:02.775855 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886fb97caf218372526809e8d1c637353c0c78593e5d9f960c19c50c7cbf8c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68b9c49647-jrbbr" Sep 6 09:57:02.776097 kubelet[2710]: E0906 09:57:02.775937 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68b9c49647-jrbbr_calico-system(2ab86289-9337-435a-b297-d07fa563f8bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68b9c49647-jrbbr_calico-system(2ab86289-9337-435a-b297-d07fa563f8bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"886fb97caf218372526809e8d1c637353c0c78593e5d9f960c19c50c7cbf8c9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68b9c49647-jrbbr" podUID="2ab86289-9337-435a-b297-d07fa563f8bc" Sep 6 09:57:02.793792 containerd[1568]: time="2025-09-06T09:57:02.793675120Z" level=error msg="Failed to destroy network for sandbox \"ad398203c29d02ad2e63056eabbf91f8e92f75698098d43a684be69f458c56d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.795505 
containerd[1568]: time="2025-09-06T09:57:02.795360357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4dcd664c-65v6c,Uid:3302899f-43a5-4706-984b-0ceb054c80c1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad398203c29d02ad2e63056eabbf91f8e92f75698098d43a684be69f458c56d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.795609 containerd[1568]: time="2025-09-06T09:57:02.795523585Z" level=error msg="Failed to destroy network for sandbox \"a01bc364379fac354c5e079d5f156bbe5fbf99ea743f0492107de20097a02541\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.795778 kubelet[2710]: E0906 09:57:02.795712 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad398203c29d02ad2e63056eabbf91f8e92f75698098d43a684be69f458c56d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.795974 kubelet[2710]: E0906 09:57:02.795806 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad398203c29d02ad2e63056eabbf91f8e92f75698098d43a684be69f458c56d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4dcd664c-65v6c" Sep 6 09:57:02.796031 kubelet[2710]: E0906 09:57:02.795982 2710 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad398203c29d02ad2e63056eabbf91f8e92f75698098d43a684be69f458c56d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4dcd664c-65v6c" Sep 6 09:57:02.796065 kubelet[2710]: E0906 09:57:02.796043 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4dcd664c-65v6c_calico-apiserver(3302899f-43a5-4706-984b-0ceb054c80c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4dcd664c-65v6c_calico-apiserver(3302899f-43a5-4706-984b-0ceb054c80c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad398203c29d02ad2e63056eabbf91f8e92f75698098d43a684be69f458c56d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4dcd664c-65v6c" podUID="3302899f-43a5-4706-984b-0ceb054c80c1" Sep 6 09:57:02.796529 containerd[1568]: time="2025-09-06T09:57:02.796480884Z" level=error msg="Failed to destroy network for sandbox \"97e8e576c3fe8981dd581ed1d1c328730d3ee1f914495d73315af62bfe0e1091\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.797073 containerd[1568]: time="2025-09-06T09:57:02.797009322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fc7f6d85-wm7kx,Uid:fdb51cd1-c225-4687-92fd-8f11d468f91a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a01bc364379fac354c5e079d5f156bbe5fbf99ea743f0492107de20097a02541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.797414 kubelet[2710]: E0906 09:57:02.797184 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a01bc364379fac354c5e079d5f156bbe5fbf99ea743f0492107de20097a02541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.797414 kubelet[2710]: E0906 09:57:02.797257 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a01bc364379fac354c5e079d5f156bbe5fbf99ea743f0492107de20097a02541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84fc7f6d85-wm7kx" Sep 6 09:57:02.797414 kubelet[2710]: E0906 09:57:02.797279 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a01bc364379fac354c5e079d5f156bbe5fbf99ea743f0492107de20097a02541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84fc7f6d85-wm7kx" Sep 6 09:57:02.797525 kubelet[2710]: E0906 09:57:02.797340 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84fc7f6d85-wm7kx_calico-system(fdb51cd1-c225-4687-92fd-8f11d468f91a)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-84fc7f6d85-wm7kx_calico-system(fdb51cd1-c225-4687-92fd-8f11d468f91a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a01bc364379fac354c5e079d5f156bbe5fbf99ea743f0492107de20097a02541\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84fc7f6d85-wm7kx" podUID="fdb51cd1-c225-4687-92fd-8f11d468f91a" Sep 6 09:57:02.799094 containerd[1568]: time="2025-09-06T09:57:02.799049256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-tstlj,Uid:079cc4e0-2aa0-41d7-9081-c51ef12b400b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97e8e576c3fe8981dd581ed1d1c328730d3ee1f914495d73315af62bfe0e1091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.799226 kubelet[2710]: E0906 09:57:02.799177 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97e8e576c3fe8981dd581ed1d1c328730d3ee1f914495d73315af62bfe0e1091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 09:57:02.799261 kubelet[2710]: E0906 09:57:02.799228 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97e8e576c3fe8981dd581ed1d1c328730d3ee1f914495d73315af62bfe0e1091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-tstlj" Sep 6 09:57:02.799261 kubelet[2710]: E0906 09:57:02.799253 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97e8e576c3fe8981dd581ed1d1c328730d3ee1f914495d73315af62bfe0e1091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-tstlj" Sep 6 09:57:02.799313 kubelet[2710]: E0906 09:57:02.799284 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-tstlj_calico-system(079cc4e0-2aa0-41d7-9081-c51ef12b400b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-tstlj_calico-system(079cc4e0-2aa0-41d7-9081-c51ef12b400b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97e8e576c3fe8981dd581ed1d1c328730d3ee1f914495d73315af62bfe0e1091\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-tstlj" podUID="079cc4e0-2aa0-41d7-9081-c51ef12b400b" Sep 6 09:57:07.757810 kubelet[2710]: I0906 09:57:07.757746 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 09:57:07.758436 kubelet[2710]: E0906 09:57:07.758245 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:08.223183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2797272405.mount: Deactivated successfully. 
Sep 6 09:57:08.419443 containerd[1568]: time="2025-09-06T09:57:08.419361575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:08.420480 containerd[1568]: time="2025-09-06T09:57:08.420401731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 6 09:57:08.421588 containerd[1568]: time="2025-09-06T09:57:08.421550979Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:08.423786 containerd[1568]: time="2025-09-06T09:57:08.423672730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:08.424450 containerd[1568]: time="2025-09-06T09:57:08.424409218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 5.858659777s" Sep 6 09:57:08.424450 containerd[1568]: time="2025-09-06T09:57:08.424447690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 6 09:57:08.435771 containerd[1568]: time="2025-09-06T09:57:08.435728583Z" level=info msg="CreateContainer within sandbox \"491729467e6e40b2407bc74b472996dd0a0da83ddaf59c95da0d78b2d7f85815\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 6 09:57:08.444610 containerd[1568]: time="2025-09-06T09:57:08.444572194Z" level=info msg="Container 
2af587bf88a66c3d5461c3c18a5d4c50067f2c5a4a138e3b2e808985bdef803e: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:57:08.479411 containerd[1568]: time="2025-09-06T09:57:08.479289296Z" level=info msg="CreateContainer within sandbox \"491729467e6e40b2407bc74b472996dd0a0da83ddaf59c95da0d78b2d7f85815\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2af587bf88a66c3d5461c3c18a5d4c50067f2c5a4a138e3b2e808985bdef803e\"" Sep 6 09:57:08.480090 containerd[1568]: time="2025-09-06T09:57:08.480055919Z" level=info msg="StartContainer for \"2af587bf88a66c3d5461c3c18a5d4c50067f2c5a4a138e3b2e808985bdef803e\"" Sep 6 09:57:08.482478 containerd[1568]: time="2025-09-06T09:57:08.482439998Z" level=info msg="connecting to shim 2af587bf88a66c3d5461c3c18a5d4c50067f2c5a4a138e3b2e808985bdef803e" address="unix:///run/containerd/s/abf6ed4d8e6eba2d1c9c6514182aa16aaf11fd493c3eb2cb8f18efb08c71fdc9" protocol=ttrpc version=3 Sep 6 09:57:08.525016 systemd[1]: Started cri-containerd-2af587bf88a66c3d5461c3c18a5d4c50067f2c5a4a138e3b2e808985bdef803e.scope - libcontainer container 2af587bf88a66c3d5461c3c18a5d4c50067f2c5a4a138e3b2e808985bdef803e. 
Sep 6 09:57:08.572402 containerd[1568]: time="2025-09-06T09:57:08.572350745Z" level=info msg="StartContainer for \"2af587bf88a66c3d5461c3c18a5d4c50067f2c5a4a138e3b2e808985bdef803e\" returns successfully" Sep 6 09:57:08.579741 kubelet[2710]: E0906 09:57:08.578887 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:08.593754 kubelet[2710]: I0906 09:57:08.593680 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n5zz9" podStartSLOduration=1.237588875 podStartE2EDuration="16.593660544s" podCreationTimestamp="2025-09-06 09:56:52 +0000 UTC" firstStartedPulling="2025-09-06 09:56:53.068988707 +0000 UTC m=+18.796507397" lastFinishedPulling="2025-09-06 09:57:08.425060366 +0000 UTC m=+34.152579066" observedRunningTime="2025-09-06 09:57:08.592757941 +0000 UTC m=+34.320276661" watchObservedRunningTime="2025-09-06 09:57:08.593660544 +0000 UTC m=+34.321179244" Sep 6 09:57:08.645022 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 6 09:57:08.646021 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 6 09:57:08.785417 kubelet[2710]: I0906 09:57:08.785021 2710 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ab86289-9337-435a-b297-d07fa563f8bc-whisker-backend-key-pair\") pod \"2ab86289-9337-435a-b297-d07fa563f8bc\" (UID: \"2ab86289-9337-435a-b297-d07fa563f8bc\") " Sep 6 09:57:08.785992 kubelet[2710]: I0906 09:57:08.785961 2710 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ab86289-9337-435a-b297-d07fa563f8bc-whisker-ca-bundle\") pod \"2ab86289-9337-435a-b297-d07fa563f8bc\" (UID: \"2ab86289-9337-435a-b297-d07fa563f8bc\") " Sep 6 09:57:08.786513 kubelet[2710]: I0906 09:57:08.786494 2710 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lsfs\" (UniqueName: \"kubernetes.io/projected/2ab86289-9337-435a-b297-d07fa563f8bc-kube-api-access-4lsfs\") pod \"2ab86289-9337-435a-b297-d07fa563f8bc\" (UID: \"2ab86289-9337-435a-b297-d07fa563f8bc\") " Sep 6 09:57:08.786595 kubelet[2710]: I0906 09:57:08.786553 2710 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab86289-9337-435a-b297-d07fa563f8bc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2ab86289-9337-435a-b297-d07fa563f8bc" (UID: "2ab86289-9337-435a-b297-d07fa563f8bc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 09:57:08.790154 kubelet[2710]: I0906 09:57:08.790086 2710 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab86289-9337-435a-b297-d07fa563f8bc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2ab86289-9337-435a-b297-d07fa563f8bc" (UID: "2ab86289-9337-435a-b297-d07fa563f8bc"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 09:57:08.792088 kubelet[2710]: I0906 09:57:08.792016 2710 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab86289-9337-435a-b297-d07fa563f8bc-kube-api-access-4lsfs" (OuterVolumeSpecName: "kube-api-access-4lsfs") pod "2ab86289-9337-435a-b297-d07fa563f8bc" (UID: "2ab86289-9337-435a-b297-d07fa563f8bc"). InnerVolumeSpecName "kube-api-access-4lsfs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 09:57:08.887659 kubelet[2710]: I0906 09:57:08.887617 2710 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4lsfs\" (UniqueName: \"kubernetes.io/projected/2ab86289-9337-435a-b297-d07fa563f8bc-kube-api-access-4lsfs\") on node \"localhost\" DevicePath \"\"" Sep 6 09:57:08.887659 kubelet[2710]: I0906 09:57:08.887649 2710 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ab86289-9337-435a-b297-d07fa563f8bc-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 6 09:57:08.887659 kubelet[2710]: I0906 09:57:08.887658 2710 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ab86289-9337-435a-b297-d07fa563f8bc-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 6 09:57:09.224229 systemd[1]: var-lib-kubelet-pods-2ab86289\x2d9337\x2d435a\x2db297\x2dd07fa563f8bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4lsfs.mount: Deactivated successfully. Sep 6 09:57:09.224351 systemd[1]: var-lib-kubelet-pods-2ab86289\x2d9337\x2d435a\x2db297\x2dd07fa563f8bc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 6 09:57:09.590737 systemd[1]: Removed slice kubepods-besteffort-pod2ab86289_9337_435a_b297_d07fa563f8bc.slice - libcontainer container kubepods-besteffort-pod2ab86289_9337_435a_b297_d07fa563f8bc.slice. 
Sep 6 09:57:09.641060 systemd[1]: Created slice kubepods-besteffort-pode6ed8074_c42f_4cbd_8542_c0f6ec9d63e4.slice - libcontainer container kubepods-besteffort-pode6ed8074_c42f_4cbd_8542_c0f6ec9d63e4.slice. Sep 6 09:57:09.693576 kubelet[2710]: I0906 09:57:09.693507 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkkzs\" (UniqueName: \"kubernetes.io/projected/e6ed8074-c42f-4cbd-8542-c0f6ec9d63e4-kube-api-access-vkkzs\") pod \"whisker-c865679d-65x9w\" (UID: \"e6ed8074-c42f-4cbd-8542-c0f6ec9d63e4\") " pod="calico-system/whisker-c865679d-65x9w" Sep 6 09:57:09.693576 kubelet[2710]: I0906 09:57:09.693571 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e6ed8074-c42f-4cbd-8542-c0f6ec9d63e4-whisker-backend-key-pair\") pod \"whisker-c865679d-65x9w\" (UID: \"e6ed8074-c42f-4cbd-8542-c0f6ec9d63e4\") " pod="calico-system/whisker-c865679d-65x9w" Sep 6 09:57:09.693576 kubelet[2710]: I0906 09:57:09.693590 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6ed8074-c42f-4cbd-8542-c0f6ec9d63e4-whisker-ca-bundle\") pod \"whisker-c865679d-65x9w\" (UID: \"e6ed8074-c42f-4cbd-8542-c0f6ec9d63e4\") " pod="calico-system/whisker-c865679d-65x9w" Sep 6 09:57:09.725858 containerd[1568]: time="2025-09-06T09:57:09.725001402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2af587bf88a66c3d5461c3c18a5d4c50067f2c5a4a138e3b2e808985bdef803e\" id:\"6a87327f1d687a6023305853c41bc3c6973272b9c854b617cc257da45a286b4c\" pid:3907 exit_status:1 exited_at:{seconds:1757152629 nanos:701933920}" Sep 6 09:57:09.946162 containerd[1568]: time="2025-09-06T09:57:09.946110893Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-c865679d-65x9w,Uid:e6ed8074-c42f-4cbd-8542-c0f6ec9d63e4,Namespace:calico-system,Attempt:0,}" Sep 6 09:57:10.408993 systemd-networkd[1482]: cali65ca0eeadf9: Link UP Sep 6 09:57:10.409679 systemd-networkd[1482]: cali65ca0eeadf9: Gained carrier Sep 6 09:57:10.416655 systemd-networkd[1482]: vxlan.calico: Link UP Sep 6 09:57:10.416662 systemd-networkd[1482]: vxlan.calico: Gained carrier Sep 6 09:57:10.429150 containerd[1568]: 2025-09-06 09:57:10.179 [INFO][4022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--c865679d--65x9w-eth0 whisker-c865679d- calico-system e6ed8074-c42f-4cbd-8542-c0f6ec9d63e4 895 0 2025-09-06 09:57:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c865679d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-c865679d-65x9w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali65ca0eeadf9 [] [] }} ContainerID="2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" Namespace="calico-system" Pod="whisker-c865679d-65x9w" WorkloadEndpoint="localhost-k8s-whisker--c865679d--65x9w-" Sep 6 09:57:10.429150 containerd[1568]: 2025-09-06 09:57:10.180 [INFO][4022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" Namespace="calico-system" Pod="whisker-c865679d-65x9w" WorkloadEndpoint="localhost-k8s-whisker--c865679d--65x9w-eth0" Sep 6 09:57:10.429150 containerd[1568]: 2025-09-06 09:57:10.348 [INFO][4066] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" HandleID="k8s-pod-network.2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" Workload="localhost-k8s-whisker--c865679d--65x9w-eth0" Sep 6 
09:57:10.429366 containerd[1568]: 2025-09-06 09:57:10.349 [INFO][4066] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" HandleID="k8s-pod-network.2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" Workload="localhost-k8s-whisker--c865679d--65x9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fa2f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-c865679d-65x9w", "timestamp":"2025-09-06 09:57:10.34828414 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 09:57:10.429366 containerd[1568]: 2025-09-06 09:57:10.349 [INFO][4066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 09:57:10.429366 containerd[1568]: 2025-09-06 09:57:10.351 [INFO][4066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 09:57:10.429366 containerd[1568]: 2025-09-06 09:57:10.351 [INFO][4066] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 09:57:10.429366 containerd[1568]: 2025-09-06 09:57:10.363 [INFO][4066] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" host="localhost" Sep 6 09:57:10.429366 containerd[1568]: 2025-09-06 09:57:10.370 [INFO][4066] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 09:57:10.429366 containerd[1568]: 2025-09-06 09:57:10.374 [INFO][4066] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 09:57:10.429366 containerd[1568]: 2025-09-06 09:57:10.377 [INFO][4066] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:10.429366 containerd[1568]: 2025-09-06 09:57:10.380 [INFO][4066] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:10.429366 containerd[1568]: 2025-09-06 09:57:10.381 [INFO][4066] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" host="localhost" Sep 6 09:57:10.429628 containerd[1568]: 2025-09-06 09:57:10.383 [INFO][4066] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53 Sep 6 09:57:10.429628 containerd[1568]: 2025-09-06 09:57:10.388 [INFO][4066] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" host="localhost" Sep 6 09:57:10.429628 containerd[1568]: 2025-09-06 09:57:10.394 [INFO][4066] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" host="localhost" Sep 6 09:57:10.429628 containerd[1568]: 2025-09-06 09:57:10.394 [INFO][4066] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" host="localhost" Sep 6 09:57:10.429628 containerd[1568]: 2025-09-06 09:57:10.394 [INFO][4066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 09:57:10.429628 containerd[1568]: 2025-09-06 09:57:10.394 [INFO][4066] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" HandleID="k8s-pod-network.2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" Workload="localhost-k8s-whisker--c865679d--65x9w-eth0" Sep 6 09:57:10.429779 containerd[1568]: 2025-09-06 09:57:10.399 [INFO][4022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" Namespace="calico-system" Pod="whisker-c865679d-65x9w" WorkloadEndpoint="localhost-k8s-whisker--c865679d--65x9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c865679d--65x9w-eth0", GenerateName:"whisker-c865679d-", Namespace:"calico-system", SelfLink:"", UID:"e6ed8074-c42f-4cbd-8542-c0f6ec9d63e4", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c865679d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-c865679d-65x9w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali65ca0eeadf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:10.429779 containerd[1568]: 2025-09-06 09:57:10.399 [INFO][4022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" Namespace="calico-system" Pod="whisker-c865679d-65x9w" WorkloadEndpoint="localhost-k8s-whisker--c865679d--65x9w-eth0" Sep 6 09:57:10.431914 containerd[1568]: 2025-09-06 09:57:10.399 [INFO][4022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65ca0eeadf9 ContainerID="2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" Namespace="calico-system" Pod="whisker-c865679d-65x9w" WorkloadEndpoint="localhost-k8s-whisker--c865679d--65x9w-eth0" Sep 6 09:57:10.431914 containerd[1568]: 2025-09-06 09:57:10.410 [INFO][4022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" Namespace="calico-system" Pod="whisker-c865679d-65x9w" WorkloadEndpoint="localhost-k8s-whisker--c865679d--65x9w-eth0" Sep 6 09:57:10.431969 containerd[1568]: 2025-09-06 09:57:10.410 [INFO][4022] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" Namespace="calico-system" Pod="whisker-c865679d-65x9w" 
WorkloadEndpoint="localhost-k8s-whisker--c865679d--65x9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c865679d--65x9w-eth0", GenerateName:"whisker-c865679d-", Namespace:"calico-system", SelfLink:"", UID:"e6ed8074-c42f-4cbd-8542-c0f6ec9d63e4", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c865679d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53", Pod:"whisker-c865679d-65x9w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali65ca0eeadf9", MAC:"46:85:c6:56:09:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:10.432023 containerd[1568]: 2025-09-06 09:57:10.425 [INFO][4022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" Namespace="calico-system" Pod="whisker-c865679d-65x9w" WorkloadEndpoint="localhost-k8s-whisker--c865679d--65x9w-eth0" Sep 6 09:57:10.451844 kubelet[2710]: I0906 09:57:10.451787 2710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2ab86289-9337-435a-b297-d07fa563f8bc" path="/var/lib/kubelet/pods/2ab86289-9337-435a-b297-d07fa563f8bc/volumes" Sep 6 09:57:10.683452 containerd[1568]: time="2025-09-06T09:57:10.682778827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2af587bf88a66c3d5461c3c18a5d4c50067f2c5a4a138e3b2e808985bdef803e\" id:\"8b52624f8a9738ccaf1f73aef395b2389392c0a401392badda7d40149fb363b1\" pid:4132 exit_status:1 exited_at:{seconds:1757152630 nanos:682430421}" Sep 6 09:57:10.746098 containerd[1568]: time="2025-09-06T09:57:10.746037177Z" level=info msg="connecting to shim 2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53" address="unix:///run/containerd/s/3ca9e994257c4bf6afa093071f143e98848df4dc1656b5a7a65693c2e2ea3582" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:57:10.779973 systemd[1]: Started cri-containerd-2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53.scope - libcontainer container 2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53. 
Sep 6 09:57:10.793152 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 09:57:10.833266 containerd[1568]: time="2025-09-06T09:57:10.833219110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c865679d-65x9w,Uid:e6ed8074-c42f-4cbd-8542-c0f6ec9d63e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53\"" Sep 6 09:57:10.835129 containerd[1568]: time="2025-09-06T09:57:10.835074743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 6 09:57:12.004029 systemd-networkd[1482]: vxlan.calico: Gained IPv6LL Sep 6 09:57:12.067983 systemd-networkd[1482]: cali65ca0eeadf9: Gained IPv6LL Sep 6 09:57:12.542667 containerd[1568]: time="2025-09-06T09:57:12.542605482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:12.543366 containerd[1568]: time="2025-09-06T09:57:12.543306696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 6 09:57:12.544618 containerd[1568]: time="2025-09-06T09:57:12.544584088Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:12.546899 containerd[1568]: time="2025-09-06T09:57:12.546868205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:12.547423 containerd[1568]: time="2025-09-06T09:57:12.547369448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag 
\"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.712239288s" Sep 6 09:57:12.547423 containerd[1568]: time="2025-09-06T09:57:12.547414604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 6 09:57:12.549936 containerd[1568]: time="2025-09-06T09:57:12.549889842Z" level=info msg="CreateContainer within sandbox \"2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 6 09:57:12.559582 containerd[1568]: time="2025-09-06T09:57:12.559054099Z" level=info msg="Container 3a148e44d6d19c9fc1f0a5519bcc5168ff24a1f0f1d4869cc59a3485620a0fea: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:57:12.567992 containerd[1568]: time="2025-09-06T09:57:12.567943478Z" level=info msg="CreateContainer within sandbox \"2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3a148e44d6d19c9fc1f0a5519bcc5168ff24a1f0f1d4869cc59a3485620a0fea\"" Sep 6 09:57:12.568577 containerd[1568]: time="2025-09-06T09:57:12.568535842Z" level=info msg="StartContainer for \"3a148e44d6d19c9fc1f0a5519bcc5168ff24a1f0f1d4869cc59a3485620a0fea\"" Sep 6 09:57:12.569598 containerd[1568]: time="2025-09-06T09:57:12.569572219Z" level=info msg="connecting to shim 3a148e44d6d19c9fc1f0a5519bcc5168ff24a1f0f1d4869cc59a3485620a0fea" address="unix:///run/containerd/s/3ca9e994257c4bf6afa093071f143e98848df4dc1656b5a7a65693c2e2ea3582" protocol=ttrpc version=3 Sep 6 09:57:12.590952 systemd[1]: Started cri-containerd-3a148e44d6d19c9fc1f0a5519bcc5168ff24a1f0f1d4869cc59a3485620a0fea.scope - libcontainer container 3a148e44d6d19c9fc1f0a5519bcc5168ff24a1f0f1d4869cc59a3485620a0fea. 
Sep 6 09:57:12.638606 containerd[1568]: time="2025-09-06T09:57:12.638551671Z" level=info msg="StartContainer for \"3a148e44d6d19c9fc1f0a5519bcc5168ff24a1f0f1d4869cc59a3485620a0fea\" returns successfully" Sep 6 09:57:12.639982 containerd[1568]: time="2025-09-06T09:57:12.639931018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 6 09:57:13.447598 kubelet[2710]: E0906 09:57:13.447553 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:13.448062 containerd[1568]: time="2025-09-06T09:57:13.447971215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h85kl,Uid:f073e93d-308a-419d-9aef-53ec14939c7f,Namespace:kube-system,Attempt:0,}" Sep 6 09:57:13.549038 systemd-networkd[1482]: cali6552be8cace: Link UP Sep 6 09:57:13.549454 systemd-networkd[1482]: cali6552be8cace: Gained carrier Sep 6 09:57:13.638535 containerd[1568]: 2025-09-06 09:57:13.488 [INFO][4271] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--h85kl-eth0 coredns-668d6bf9bc- kube-system f073e93d-308a-419d-9aef-53ec14939c7f 810 0 2025-09-06 09:56:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-h85kl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6552be8cace [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-h85kl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h85kl-" Sep 6 09:57:13.638535 containerd[1568]: 2025-09-06 09:57:13.488 [INFO][4271] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-h85kl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h85kl-eth0" Sep 6 09:57:13.638535 containerd[1568]: 2025-09-06 09:57:13.514 [INFO][4285] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" HandleID="k8s-pod-network.4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" Workload="localhost-k8s-coredns--668d6bf9bc--h85kl-eth0" Sep 6 09:57:13.639114 containerd[1568]: 2025-09-06 09:57:13.514 [INFO][4285] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" HandleID="k8s-pod-network.4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" Workload="localhost-k8s-coredns--668d6bf9bc--h85kl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000510a20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-h85kl", "timestamp":"2025-09-06 09:57:13.514192831 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 09:57:13.639114 containerd[1568]: 2025-09-06 09:57:13.514 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 09:57:13.639114 containerd[1568]: 2025-09-06 09:57:13.514 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 09:57:13.639114 containerd[1568]: 2025-09-06 09:57:13.514 [INFO][4285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 09:57:13.639114 containerd[1568]: 2025-09-06 09:57:13.521 [INFO][4285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" host="localhost" Sep 6 09:57:13.639114 containerd[1568]: 2025-09-06 09:57:13.525 [INFO][4285] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 09:57:13.639114 containerd[1568]: 2025-09-06 09:57:13.529 [INFO][4285] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 09:57:13.639114 containerd[1568]: 2025-09-06 09:57:13.530 [INFO][4285] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:13.639114 containerd[1568]: 2025-09-06 09:57:13.532 [INFO][4285] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:13.639114 containerd[1568]: 2025-09-06 09:57:13.532 [INFO][4285] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" host="localhost" Sep 6 09:57:13.639399 containerd[1568]: 2025-09-06 09:57:13.534 [INFO][4285] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6 Sep 6 09:57:13.639399 containerd[1568]: 2025-09-06 09:57:13.537 [INFO][4285] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" host="localhost" Sep 6 09:57:13.639399 containerd[1568]: 2025-09-06 09:57:13.542 [INFO][4285] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" host="localhost" Sep 6 09:57:13.639399 containerd[1568]: 2025-09-06 09:57:13.542 [INFO][4285] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" host="localhost" Sep 6 09:57:13.639399 containerd[1568]: 2025-09-06 09:57:13.542 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 09:57:13.639399 containerd[1568]: 2025-09-06 09:57:13.542 [INFO][4285] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" HandleID="k8s-pod-network.4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" Workload="localhost-k8s-coredns--668d6bf9bc--h85kl-eth0" Sep 6 09:57:13.639619 containerd[1568]: 2025-09-06 09:57:13.546 [INFO][4271] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-h85kl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h85kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h85kl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f073e93d-308a-419d-9aef-53ec14939c7f", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-h85kl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6552be8cace", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:13.639732 containerd[1568]: 2025-09-06 09:57:13.546 [INFO][4271] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-h85kl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h85kl-eth0" Sep 6 09:57:13.639732 containerd[1568]: 2025-09-06 09:57:13.546 [INFO][4271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6552be8cace ContainerID="4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-h85kl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h85kl-eth0" Sep 6 09:57:13.639732 containerd[1568]: 2025-09-06 09:57:13.549 [INFO][4271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-h85kl" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h85kl-eth0" Sep 6 09:57:13.639802 containerd[1568]: 2025-09-06 09:57:13.551 [INFO][4271] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-h85kl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h85kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h85kl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f073e93d-308a-419d-9aef-53ec14939c7f", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6", Pod:"coredns-668d6bf9bc-h85kl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6552be8cace", MAC:"46:47:32:49:ac:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:13.639802 containerd[1568]: 2025-09-06 09:57:13.633 [INFO][4271] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-h85kl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h85kl-eth0" Sep 6 09:57:13.661078 containerd[1568]: time="2025-09-06T09:57:13.661026103Z" level=info msg="connecting to shim 4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6" address="unix:///run/containerd/s/1083c5efde8ace68f9c638a6e3448077668d01048e22cc1773b4673ed8ec00bd" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:57:13.694142 systemd[1]: Started cri-containerd-4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6.scope - libcontainer container 4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6. 
Sep 6 09:57:13.706259 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 09:57:13.745038 containerd[1568]: time="2025-09-06T09:57:13.744974836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h85kl,Uid:f073e93d-308a-419d-9aef-53ec14939c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6\"" Sep 6 09:57:13.745808 kubelet[2710]: E0906 09:57:13.745784 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:13.747938 containerd[1568]: time="2025-09-06T09:57:13.747899499Z" level=info msg="CreateContainer within sandbox \"4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 09:57:13.767864 containerd[1568]: time="2025-09-06T09:57:13.767415332Z" level=info msg="Container 7b84fb9b15740f38ef6d74e58192eb9590a1686785e9205a43108b5d537028c1: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:57:13.769342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623418885.mount: Deactivated successfully. 
Sep 6 09:57:13.785631 containerd[1568]: time="2025-09-06T09:57:13.785576849Z" level=info msg="CreateContainer within sandbox \"4d817720417a629bd30a548dfa576b60c5a72ddc17a2fc4a620169a1826526e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b84fb9b15740f38ef6d74e58192eb9590a1686785e9205a43108b5d537028c1\"" Sep 6 09:57:13.786139 containerd[1568]: time="2025-09-06T09:57:13.786103571Z" level=info msg="StartContainer for \"7b84fb9b15740f38ef6d74e58192eb9590a1686785e9205a43108b5d537028c1\"" Sep 6 09:57:13.787183 containerd[1568]: time="2025-09-06T09:57:13.787147077Z" level=info msg="connecting to shim 7b84fb9b15740f38ef6d74e58192eb9590a1686785e9205a43108b5d537028c1" address="unix:///run/containerd/s/1083c5efde8ace68f9c638a6e3448077668d01048e22cc1773b4673ed8ec00bd" protocol=ttrpc version=3 Sep 6 09:57:13.811036 systemd[1]: Started cri-containerd-7b84fb9b15740f38ef6d74e58192eb9590a1686785e9205a43108b5d537028c1.scope - libcontainer container 7b84fb9b15740f38ef6d74e58192eb9590a1686785e9205a43108b5d537028c1. 
Sep 6 09:57:14.351388 containerd[1568]: time="2025-09-06T09:57:14.351343303Z" level=info msg="StartContainer for \"7b84fb9b15740f38ef6d74e58192eb9590a1686785e9205a43108b5d537028c1\" returns successfully" Sep 6 09:57:14.451154 containerd[1568]: time="2025-09-06T09:57:14.450956146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-66kxk,Uid:706810f4-e487-43f2-bf65-01d2ffa5e4cc,Namespace:calico-system,Attempt:0,}" Sep 6 09:57:14.451154 containerd[1568]: time="2025-09-06T09:57:14.451003013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4dcd664c-65v6c,Uid:3302899f-43a5-4706-984b-0ceb054c80c1,Namespace:calico-apiserver,Attempt:0,}" Sep 6 09:57:14.615678 kubelet[2710]: E0906 09:57:14.615641 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:14.624581 systemd-networkd[1482]: cali724d2cecb8a: Link UP Sep 6 09:57:14.628445 systemd-networkd[1482]: cali724d2cecb8a: Gained carrier Sep 6 09:57:14.655995 kubelet[2710]: I0906 09:57:14.655901 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h85kl" podStartSLOduration=34.65588133 podStartE2EDuration="34.65588133s" podCreationTimestamp="2025-09-06 09:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:57:14.652830962 +0000 UTC m=+40.380349652" watchObservedRunningTime="2025-09-06 09:57:14.65588133 +0000 UTC m=+40.383400020" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.489 [INFO][4384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--66kxk-eth0 csi-node-driver- calico-system 706810f4-e487-43f2-bf65-01d2ffa5e4cc 700 0 2025-09-06 09:56:52 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-66kxk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali724d2cecb8a [] [] }} ContainerID="a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" Namespace="calico-system" Pod="csi-node-driver-66kxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--66kxk-" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.490 [INFO][4384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" Namespace="calico-system" Pod="csi-node-driver-66kxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--66kxk-eth0" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.527 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" HandleID="k8s-pod-network.a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" Workload="localhost-k8s-csi--node--driver--66kxk-eth0" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.530 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" HandleID="k8s-pod-network.a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" Workload="localhost-k8s-csi--node--driver--66kxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-66kxk", "timestamp":"2025-09-06 09:57:14.527159393 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.530 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.531 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.531 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.539 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" host="localhost" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.544 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.551 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.554 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.560 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.561 [INFO][4410] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" host="localhost" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.566 [INFO][4410] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3 Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.578 
[INFO][4410] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" host="localhost" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.592 [INFO][4410] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" host="localhost" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.593 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" host="localhost" Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.593 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 09:57:14.666009 containerd[1568]: 2025-09-06 09:57:14.594 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" HandleID="k8s-pod-network.a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" Workload="localhost-k8s-csi--node--driver--66kxk-eth0" Sep 6 09:57:14.666814 containerd[1568]: 2025-09-06 09:57:14.607 [INFO][4384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" Namespace="calico-system" Pod="csi-node-driver-66kxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--66kxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--66kxk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"706810f4-e487-43f2-bf65-01d2ffa5e4cc", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 52, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-66kxk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724d2cecb8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:14.666814 containerd[1568]: 2025-09-06 09:57:14.608 [INFO][4384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" Namespace="calico-system" Pod="csi-node-driver-66kxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--66kxk-eth0" Sep 6 09:57:14.666814 containerd[1568]: 2025-09-06 09:57:14.609 [INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali724d2cecb8a ContainerID="a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" Namespace="calico-system" Pod="csi-node-driver-66kxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--66kxk-eth0" Sep 6 09:57:14.666814 containerd[1568]: 2025-09-06 09:57:14.638 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" 
Namespace="calico-system" Pod="csi-node-driver-66kxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--66kxk-eth0" Sep 6 09:57:14.666814 containerd[1568]: 2025-09-06 09:57:14.640 [INFO][4384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" Namespace="calico-system" Pod="csi-node-driver-66kxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--66kxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--66kxk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"706810f4-e487-43f2-bf65-01d2ffa5e4cc", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3", Pod:"csi-node-driver-66kxk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724d2cecb8a", MAC:"46:e3:0f:fa:56:08", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:14.666814 containerd[1568]: 2025-09-06 09:57:14.661 [INFO][4384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" Namespace="calico-system" Pod="csi-node-driver-66kxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--66kxk-eth0" Sep 6 09:57:14.700979 containerd[1568]: time="2025-09-06T09:57:14.700927598Z" level=info msg="connecting to shim a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3" address="unix:///run/containerd/s/b778a12c2805223d56bd3f98758581d6a61a94094512a49a748515c5adf0c12a" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:57:14.725367 systemd-networkd[1482]: cali9c9419d0820: Link UP Sep 6 09:57:14.726073 systemd-networkd[1482]: cali9c9419d0820: Gained carrier Sep 6 09:57:14.740952 systemd[1]: Started sshd@7-10.0.0.40:22-10.0.0.1:40044.service - OpenSSH per-connection server daemon (10.0.0.1:40044). 
Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.511 [INFO][4388] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0 calico-apiserver-7d4dcd664c- calico-apiserver 3302899f-43a5-4706-984b-0ceb054c80c1 820 0 2025-09-06 09:56:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4dcd664c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d4dcd664c-65v6c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9c9419d0820 [] [] }} ContainerID="da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-65v6c" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.511 [INFO][4388] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-65v6c" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.579 [INFO][4420] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" HandleID="k8s-pod-network.da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" Workload="localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.580 [INFO][4420] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" 
HandleID="k8s-pod-network.da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" Workload="localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d4dcd664c-65v6c", "timestamp":"2025-09-06 09:57:14.574370972 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.581 [INFO][4420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.593 [INFO][4420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.593 [INFO][4420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.646 [INFO][4420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" host="localhost" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.662 [INFO][4420] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.677 [INFO][4420] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.680 [INFO][4420] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.686 [INFO][4420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 
09:57:14.686 [INFO][4420] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" host="localhost" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.688 [INFO][4420] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3 Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.692 [INFO][4420] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" host="localhost" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.700 [INFO][4420] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" host="localhost" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.700 [INFO][4420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" host="localhost" Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.700 [INFO][4420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 09:57:14.748524 containerd[1568]: 2025-09-06 09:57:14.700 [INFO][4420] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" HandleID="k8s-pod-network.da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" Workload="localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0" Sep 6 09:57:14.750302 containerd[1568]: 2025-09-06 09:57:14.705 [INFO][4388] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-65v6c" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0", GenerateName:"calico-apiserver-7d4dcd664c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3302899f-43a5-4706-984b-0ceb054c80c1", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4dcd664c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d4dcd664c-65v6c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c9419d0820", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:14.750302 containerd[1568]: 2025-09-06 09:57:14.707 [INFO][4388] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-65v6c" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0" Sep 6 09:57:14.750302 containerd[1568]: 2025-09-06 09:57:14.707 [INFO][4388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c9419d0820 ContainerID="da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-65v6c" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0" Sep 6 09:57:14.750302 containerd[1568]: 2025-09-06 09:57:14.725 [INFO][4388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-65v6c" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0" Sep 6 09:57:14.750302 containerd[1568]: 2025-09-06 09:57:14.726 [INFO][4388] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-65v6c" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0", 
GenerateName:"calico-apiserver-7d4dcd664c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3302899f-43a5-4706-984b-0ceb054c80c1", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4dcd664c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3", Pod:"calico-apiserver-7d4dcd664c-65v6c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c9419d0820", MAC:"f2:46:52:2d:ce:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:14.750302 containerd[1568]: 2025-09-06 09:57:14.739 [INFO][4388] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-65v6c" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--65v6c-eth0" Sep 6 09:57:14.755773 systemd[1]: Started cri-containerd-a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3.scope - libcontainer container a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3. 
Sep 6 09:57:14.789338 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 09:57:14.799898 containerd[1568]: time="2025-09-06T09:57:14.799855822Z" level=info msg="connecting to shim da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3" address="unix:///run/containerd/s/512ec95e64a76ebb7236b109e3bf30e1901aea3be43d8c158ff95004bc7c0134" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:57:14.818698 containerd[1568]: time="2025-09-06T09:57:14.818571927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-66kxk,Uid:706810f4-e487-43f2-bf65-01d2ffa5e4cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3\"" Sep 6 09:57:14.847962 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 40044 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM Sep 6 09:57:14.850273 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 09:57:14.854239 systemd[1]: Started cri-containerd-da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3.scope - libcontainer container da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3. Sep 6 09:57:14.861372 systemd-logind[1550]: New session 8 of user core. Sep 6 09:57:14.862549 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 6 09:57:14.875629 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 09:57:15.097281 containerd[1568]: time="2025-09-06T09:57:15.097237114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4dcd664c-65v6c,Uid:3302899f-43a5-4706-984b-0ceb054c80c1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3\"" Sep 6 09:57:15.152017 sshd[4548]: Connection closed by 10.0.0.1 port 40044 Sep 6 09:57:15.152336 sshd-session[4486]: pam_unix(sshd:session): session closed for user core Sep 6 09:57:15.157262 systemd[1]: sshd@7-10.0.0.40:22-10.0.0.1:40044.service: Deactivated successfully. Sep 6 09:57:15.159686 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 09:57:15.161497 systemd-logind[1550]: Session 8 logged out. Waiting for processes to exit. Sep 6 09:57:15.162644 systemd-logind[1550]: Removed session 8. Sep 6 09:57:15.179947 containerd[1568]: time="2025-09-06T09:57:15.179893889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:15.180658 containerd[1568]: time="2025-09-06T09:57:15.180607335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 6 09:57:15.181918 containerd[1568]: time="2025-09-06T09:57:15.181879323Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:15.184216 containerd[1568]: time="2025-09-06T09:57:15.184175164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:15.184843 
containerd[1568]: time="2025-09-06T09:57:15.184778390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.544810084s" Sep 6 09:57:15.184887 containerd[1568]: time="2025-09-06T09:57:15.184845489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 6 09:57:15.187054 containerd[1568]: time="2025-09-06T09:57:15.186987810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 6 09:57:15.187463 containerd[1568]: time="2025-09-06T09:57:15.187426374Z" level=info msg="CreateContainer within sandbox \"2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 6 09:57:15.203015 containerd[1568]: time="2025-09-06T09:57:15.202964947Z" level=info msg="Container 83d405aa0b1441a6eecdd20728d26e25fb324f6be32ae7b7f61c8877c2750115: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:57:15.211435 containerd[1568]: time="2025-09-06T09:57:15.211388156Z" level=info msg="CreateContainer within sandbox \"2da5bc2c6a778ca9b8c8c8126917b96816564cce9ce810ac294bb6f2ae946b53\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"83d405aa0b1441a6eecdd20728d26e25fb324f6be32ae7b7f61c8877c2750115\"" Sep 6 09:57:15.211850 containerd[1568]: time="2025-09-06T09:57:15.211790003Z" level=info msg="StartContainer for \"83d405aa0b1441a6eecdd20728d26e25fb324f6be32ae7b7f61c8877c2750115\"" Sep 6 09:57:15.212916 containerd[1568]: time="2025-09-06T09:57:15.212889362Z" level=info msg="connecting to shim 
83d405aa0b1441a6eecdd20728d26e25fb324f6be32ae7b7f61c8877c2750115" address="unix:///run/containerd/s/3ca9e994257c4bf6afa093071f143e98848df4dc1656b5a7a65693c2e2ea3582" protocol=ttrpc version=3 Sep 6 09:57:15.234995 systemd[1]: Started cri-containerd-83d405aa0b1441a6eecdd20728d26e25fb324f6be32ae7b7f61c8877c2750115.scope - libcontainer container 83d405aa0b1441a6eecdd20728d26e25fb324f6be32ae7b7f61c8877c2750115. Sep 6 09:57:15.268196 systemd-networkd[1482]: cali6552be8cace: Gained IPv6LL Sep 6 09:57:15.289856 containerd[1568]: time="2025-09-06T09:57:15.289794592Z" level=info msg="StartContainer for \"83d405aa0b1441a6eecdd20728d26e25fb324f6be32ae7b7f61c8877c2750115\" returns successfully" Sep 6 09:57:15.621848 kubelet[2710]: E0906 09:57:15.621489 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:15.631802 kubelet[2710]: I0906 09:57:15.631741 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-c865679d-65x9w" podStartSLOduration=2.280835787 podStartE2EDuration="6.63171956s" podCreationTimestamp="2025-09-06 09:57:09 +0000 UTC" firstStartedPulling="2025-09-06 09:57:10.834799311 +0000 UTC m=+36.562318011" lastFinishedPulling="2025-09-06 09:57:15.185683084 +0000 UTC m=+40.913201784" observedRunningTime="2025-09-06 09:57:15.629967894 +0000 UTC m=+41.357486614" watchObservedRunningTime="2025-09-06 09:57:15.63171956 +0000 UTC m=+41.359238260" Sep 6 09:57:15.657949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount455423709.mount: Deactivated successfully. 
Sep 6 09:57:15.780027 systemd-networkd[1482]: cali9c9419d0820: Gained IPv6LL Sep 6 09:57:16.447869 kubelet[2710]: E0906 09:57:16.447618 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:16.448063 containerd[1568]: time="2025-09-06T09:57:16.448032383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s247t,Uid:8bdca78a-1afd-4841-ba10-9be11ff08a73,Namespace:kube-system,Attempt:0,}" Sep 6 09:57:16.448486 containerd[1568]: time="2025-09-06T09:57:16.448416622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-tstlj,Uid:079cc4e0-2aa0-41d7-9081-c51ef12b400b,Namespace:calico-system,Attempt:0,}" Sep 6 09:57:16.567485 systemd-networkd[1482]: califed9498bc66: Link UP Sep 6 09:57:16.568909 systemd-networkd[1482]: califed9498bc66: Gained carrier Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.491 [INFO][4624] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--s247t-eth0 coredns-668d6bf9bc- kube-system 8bdca78a-1afd-4841-ba10-9be11ff08a73 817 0 2025-09-06 09:56:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-s247t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califed9498bc66 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" Namespace="kube-system" Pod="coredns-668d6bf9bc-s247t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s247t-" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.491 [INFO][4624] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" Namespace="kube-system" Pod="coredns-668d6bf9bc-s247t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s247t-eth0" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.519 [INFO][4654] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" HandleID="k8s-pod-network.84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" Workload="localhost-k8s-coredns--668d6bf9bc--s247t-eth0" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.520 [INFO][4654] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" HandleID="k8s-pod-network.84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" Workload="localhost-k8s-coredns--668d6bf9bc--s247t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026d600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-s247t", "timestamp":"2025-09-06 09:57:16.519930028 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.520 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.520 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.520 [INFO][4654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.526 [INFO][4654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" host="localhost" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.531 [INFO][4654] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.542 [INFO][4654] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.544 [INFO][4654] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.547 [INFO][4654] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.547 [INFO][4654] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" host="localhost" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.548 [INFO][4654] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6 Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.555 [INFO][4654] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" host="localhost" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.559 [INFO][4654] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" host="localhost" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.559 [INFO][4654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" host="localhost" Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.559 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 09:57:16.582334 containerd[1568]: 2025-09-06 09:57:16.559 [INFO][4654] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" HandleID="k8s-pod-network.84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" Workload="localhost-k8s-coredns--668d6bf9bc--s247t-eth0" Sep 6 09:57:16.582946 containerd[1568]: 2025-09-06 09:57:16.564 [INFO][4624] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" Namespace="kube-system" Pod="coredns-668d6bf9bc-s247t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s247t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s247t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8bdca78a-1afd-4841-ba10-9be11ff08a73", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-s247t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califed9498bc66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:16.582946 containerd[1568]: 2025-09-06 09:57:16.564 [INFO][4624] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" Namespace="kube-system" Pod="coredns-668d6bf9bc-s247t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s247t-eth0" Sep 6 09:57:16.582946 containerd[1568]: 2025-09-06 09:57:16.564 [INFO][4624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califed9498bc66 ContainerID="84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" Namespace="kube-system" Pod="coredns-668d6bf9bc-s247t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s247t-eth0" Sep 6 09:57:16.582946 containerd[1568]: 2025-09-06 09:57:16.568 [INFO][4624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" Namespace="kube-system" Pod="coredns-668d6bf9bc-s247t" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s247t-eth0" Sep 6 09:57:16.582946 containerd[1568]: 2025-09-06 09:57:16.569 [INFO][4624] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" Namespace="kube-system" Pod="coredns-668d6bf9bc-s247t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s247t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s247t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8bdca78a-1afd-4841-ba10-9be11ff08a73", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6", Pod:"coredns-668d6bf9bc-s247t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califed9498bc66", MAC:"26:38:dd:b5:2f:16", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:16.582946 containerd[1568]: 2025-09-06 09:57:16.579 [INFO][4624] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" Namespace="kube-system" Pod="coredns-668d6bf9bc-s247t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s247t-eth0" Sep 6 09:57:16.607407 containerd[1568]: time="2025-09-06T09:57:16.607361344Z" level=info msg="connecting to shim 84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6" address="unix:///run/containerd/s/69c3f19c1d04d74ddeb0786f89c11f79be9724d7c8a68e998b0cc6db87aef47b" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:57:16.611979 systemd-networkd[1482]: cali724d2cecb8a: Gained IPv6LL Sep 6 09:57:16.623589 kubelet[2710]: E0906 09:57:16.623560 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:16.635965 systemd[1]: Started cri-containerd-84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6.scope - libcontainer container 84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6. 
Sep 6 09:57:16.651980 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 09:57:16.677928 systemd-networkd[1482]: cali48c148c0d96: Link UP Sep 6 09:57:16.678683 systemd-networkd[1482]: cali48c148c0d96: Gained carrier Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.504 [INFO][4630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--tstlj-eth0 goldmane-54d579b49d- calico-system 079cc4e0-2aa0-41d7-9081-c51ef12b400b 818 0 2025-09-06 09:56:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-tstlj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali48c148c0d96 [] [] }} ContainerID="1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" Namespace="calico-system" Pod="goldmane-54d579b49d-tstlj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--tstlj-" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.504 [INFO][4630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" Namespace="calico-system" Pod="goldmane-54d579b49d-tstlj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--tstlj-eth0" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.535 [INFO][4660] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" HandleID="k8s-pod-network.1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" Workload="localhost-k8s-goldmane--54d579b49d--tstlj-eth0" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.538 [INFO][4660] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" HandleID="k8s-pod-network.1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" Workload="localhost-k8s-goldmane--54d579b49d--tstlj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001356d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-tstlj", "timestamp":"2025-09-06 09:57:16.535650176 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.538 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.559 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.560 [INFO][4660] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.628 [INFO][4660] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" host="localhost" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.636 [INFO][4660] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.643 [INFO][4660] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.645 [INFO][4660] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.649 [INFO][4660] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.649 [INFO][4660] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" host="localhost" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.651 [INFO][4660] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157 Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.657 [INFO][4660] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" host="localhost" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.665 [INFO][4660] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" host="localhost" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.665 [INFO][4660] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" host="localhost" Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.665 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 09:57:16.696152 containerd[1568]: 2025-09-06 09:57:16.665 [INFO][4660] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" HandleID="k8s-pod-network.1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" Workload="localhost-k8s-goldmane--54d579b49d--tstlj-eth0" Sep 6 09:57:16.696769 containerd[1568]: 2025-09-06 09:57:16.672 [INFO][4630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" Namespace="calico-system" Pod="goldmane-54d579b49d-tstlj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--tstlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--tstlj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"079cc4e0-2aa0-41d7-9081-c51ef12b400b", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-tstlj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali48c148c0d96", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:16.696769 containerd[1568]: 2025-09-06 09:57:16.672 [INFO][4630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" Namespace="calico-system" Pod="goldmane-54d579b49d-tstlj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--tstlj-eth0" Sep 6 09:57:16.696769 containerd[1568]: 2025-09-06 09:57:16.672 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48c148c0d96 ContainerID="1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" Namespace="calico-system" Pod="goldmane-54d579b49d-tstlj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--tstlj-eth0" Sep 6 09:57:16.696769 containerd[1568]: 2025-09-06 09:57:16.678 [INFO][4630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" Namespace="calico-system" Pod="goldmane-54d579b49d-tstlj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--tstlj-eth0" Sep 6 09:57:16.696769 containerd[1568]: 2025-09-06 09:57:16.678 [INFO][4630] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" Namespace="calico-system" Pod="goldmane-54d579b49d-tstlj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--tstlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--tstlj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"079cc4e0-2aa0-41d7-9081-c51ef12b400b", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 52, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157", Pod:"goldmane-54d579b49d-tstlj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali48c148c0d96", MAC:"4e:93:fe:bf:0d:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:16.696769 containerd[1568]: 2025-09-06 09:57:16.691 [INFO][4630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" Namespace="calico-system" Pod="goldmane-54d579b49d-tstlj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--tstlj-eth0" Sep 6 09:57:16.707946 containerd[1568]: time="2025-09-06T09:57:16.707811208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s247t,Uid:8bdca78a-1afd-4841-ba10-9be11ff08a73,Namespace:kube-system,Attempt:0,} returns sandbox id \"84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6\"" Sep 6 09:57:16.710420 kubelet[2710]: E0906 09:57:16.710386 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:16.716533 containerd[1568]: 
time="2025-09-06T09:57:16.716471193Z" level=info msg="CreateContainer within sandbox \"84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 09:57:16.735530 containerd[1568]: time="2025-09-06T09:57:16.735483981Z" level=info msg="Container 60494fcce63bacbbb22693e13f48e465ce44b3efce6b23ead84985243c43c94d: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:57:16.737051 containerd[1568]: time="2025-09-06T09:57:16.736972603Z" level=info msg="connecting to shim 1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157" address="unix:///run/containerd/s/778af35f5b2dd8651a82698897d289027e1d986c307b87c0b0a708ce2af65465" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:57:16.746361 containerd[1568]: time="2025-09-06T09:57:16.746319385Z" level=info msg="CreateContainer within sandbox \"84629301256694782178cdeb81ce025747a0ecacee907a9b3ce55e26601b5bd6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60494fcce63bacbbb22693e13f48e465ce44b3efce6b23ead84985243c43c94d\"" Sep 6 09:57:16.747864 containerd[1568]: time="2025-09-06T09:57:16.747815071Z" level=info msg="StartContainer for \"60494fcce63bacbbb22693e13f48e465ce44b3efce6b23ead84985243c43c94d\"" Sep 6 09:57:16.748796 containerd[1568]: time="2025-09-06T09:57:16.748587576Z" level=info msg="connecting to shim 60494fcce63bacbbb22693e13f48e465ce44b3efce6b23ead84985243c43c94d" address="unix:///run/containerd/s/69c3f19c1d04d74ddeb0786f89c11f79be9724d7c8a68e998b0cc6db87aef47b" protocol=ttrpc version=3 Sep 6 09:57:16.758602 containerd[1568]: time="2025-09-06T09:57:16.758562112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:16.759568 containerd[1568]: time="2025-09-06T09:57:16.759538861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 6 09:57:16.760990 
containerd[1568]: time="2025-09-06T09:57:16.760954971Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:16.763989 containerd[1568]: time="2025-09-06T09:57:16.763960764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:16.764914 containerd[1568]: time="2025-09-06T09:57:16.764888602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.577827309s" Sep 6 09:57:16.764914 containerd[1568]: time="2025-09-06T09:57:16.764913082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 6 09:57:16.766210 containerd[1568]: time="2025-09-06T09:57:16.766165143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 09:57:16.768069 containerd[1568]: time="2025-09-06T09:57:16.768043353Z" level=info msg="CreateContainer within sandbox \"a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 6 09:57:16.774028 systemd[1]: Started cri-containerd-1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157.scope - libcontainer container 1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157. 
Sep 6 09:57:16.780860 systemd[1]: Started cri-containerd-60494fcce63bacbbb22693e13f48e465ce44b3efce6b23ead84985243c43c94d.scope - libcontainer container 60494fcce63bacbbb22693e13f48e465ce44b3efce6b23ead84985243c43c94d. Sep 6 09:57:16.783126 containerd[1568]: time="2025-09-06T09:57:16.783091818Z" level=info msg="Container a5670440edc296628419c148d909434f1744ea3c71afd5fdc350adb1587eefe5: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:57:16.793539 containerd[1568]: time="2025-09-06T09:57:16.793491176Z" level=info msg="CreateContainer within sandbox \"a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a5670440edc296628419c148d909434f1744ea3c71afd5fdc350adb1587eefe5\"" Sep 6 09:57:16.796650 containerd[1568]: time="2025-09-06T09:57:16.796523825Z" level=info msg="StartContainer for \"a5670440edc296628419c148d909434f1744ea3c71afd5fdc350adb1587eefe5\"" Sep 6 09:57:16.799861 containerd[1568]: time="2025-09-06T09:57:16.799801762Z" level=info msg="connecting to shim a5670440edc296628419c148d909434f1744ea3c71afd5fdc350adb1587eefe5" address="unix:///run/containerd/s/b778a12c2805223d56bd3f98758581d6a61a94094512a49a748515c5adf0c12a" protocol=ttrpc version=3 Sep 6 09:57:16.803587 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 09:57:16.820027 containerd[1568]: time="2025-09-06T09:57:16.819961883Z" level=info msg="StartContainer for \"60494fcce63bacbbb22693e13f48e465ce44b3efce6b23ead84985243c43c94d\" returns successfully" Sep 6 09:57:16.822443 systemd[1]: Started cri-containerd-a5670440edc296628419c148d909434f1744ea3c71afd5fdc350adb1587eefe5.scope - libcontainer container a5670440edc296628419c148d909434f1744ea3c71afd5fdc350adb1587eefe5. 
Sep 6 09:57:16.856520 containerd[1568]: time="2025-09-06T09:57:16.856468513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-tstlj,Uid:079cc4e0-2aa0-41d7-9081-c51ef12b400b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157\"" Sep 6 09:57:16.896969 containerd[1568]: time="2025-09-06T09:57:16.896914666Z" level=info msg="StartContainer for \"a5670440edc296628419c148d909434f1744ea3c71afd5fdc350adb1587eefe5\" returns successfully" Sep 6 09:57:17.448507 containerd[1568]: time="2025-09-06T09:57:17.448455200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4dcd664c-mz26f,Uid:b52c2275-ca01-47e2-b309-9c143dbd379f,Namespace:calico-apiserver,Attempt:0,}" Sep 6 09:57:17.552868 systemd-networkd[1482]: cali83bb6fbe857: Link UP Sep 6 09:57:17.553607 systemd-networkd[1482]: cali83bb6fbe857: Gained carrier Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.490 [INFO][4856] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0 calico-apiserver-7d4dcd664c- calico-apiserver b52c2275-ca01-47e2-b309-9c143dbd379f 814 0 2025-09-06 09:56:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4dcd664c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d4dcd664c-mz26f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali83bb6fbe857 [] [] }} ContainerID="2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-mz26f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.491 
[INFO][4856] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-mz26f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.515 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" HandleID="k8s-pod-network.2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" Workload="localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.515 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" HandleID="k8s-pod-network.2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" Workload="localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d4dcd664c-mz26f", "timestamp":"2025-09-06 09:57:17.515532249 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.515 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.515 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.515 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.522 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" host="localhost" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.529 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.533 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.535 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.537 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.537 [INFO][4871] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" host="localhost" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.538 [INFO][4871] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4 Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.541 [INFO][4871] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" host="localhost" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.547 [INFO][4871] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" host="localhost" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.547 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" host="localhost" Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.547 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 09:57:17.567966 containerd[1568]: 2025-09-06 09:57:17.547 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" HandleID="k8s-pod-network.2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" Workload="localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0" Sep 6 09:57:17.568651 containerd[1568]: 2025-09-06 09:57:17.550 [INFO][4856] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-mz26f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0", GenerateName:"calico-apiserver-7d4dcd664c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b52c2275-ca01-47e2-b309-9c143dbd379f", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4dcd664c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d4dcd664c-mz26f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83bb6fbe857", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:17.568651 containerd[1568]: 2025-09-06 09:57:17.550 [INFO][4856] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-mz26f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0" Sep 6 09:57:17.568651 containerd[1568]: 2025-09-06 09:57:17.550 [INFO][4856] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83bb6fbe857 ContainerID="2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-mz26f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0" Sep 6 09:57:17.568651 containerd[1568]: 2025-09-06 09:57:17.554 [INFO][4856] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-mz26f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0" Sep 6 09:57:17.568651 containerd[1568]: 2025-09-06 09:57:17.555 [INFO][4856] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-mz26f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0", GenerateName:"calico-apiserver-7d4dcd664c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b52c2275-ca01-47e2-b309-9c143dbd379f", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4dcd664c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4", Pod:"calico-apiserver-7d4dcd664c-mz26f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83bb6fbe857", MAC:"66:45:1a:de:63:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:17.568651 containerd[1568]: 2025-09-06 09:57:17.564 [INFO][4856] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" Namespace="calico-apiserver" Pod="calico-apiserver-7d4dcd664c-mz26f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4dcd664c--mz26f-eth0" Sep 6 09:57:17.591304 containerd[1568]: time="2025-09-06T09:57:17.591244722Z" level=info msg="connecting to shim 2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4" address="unix:///run/containerd/s/0c43ecbdf6d7bf9b1ff10aa36448f423b3b673cad29f17c1b8459eaff8a63f5f" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:57:17.621006 systemd[1]: Started cri-containerd-2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4.scope - libcontainer container 2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4. Sep 6 09:57:17.631436 kubelet[2710]: E0906 09:57:17.631400 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:17.635215 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 09:57:17.662855 kubelet[2710]: I0906 09:57:17.662706 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s247t" podStartSLOduration=37.662685393 podStartE2EDuration="37.662685393s" podCreationTimestamp="2025-09-06 09:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 09:57:17.647990587 +0000 UTC m=+43.375509287" watchObservedRunningTime="2025-09-06 09:57:17.662685393 +0000 UTC m=+43.390204093" Sep 6 09:57:17.682003 containerd[1568]: time="2025-09-06T09:57:17.681943314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4dcd664c-mz26f,Uid:b52c2275-ca01-47e2-b309-9c143dbd379f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4\"" Sep 6 09:57:18.212097 systemd-networkd[1482]: califed9498bc66: Gained IPv6LL Sep 6 09:57:18.451070 containerd[1568]: time="2025-09-06T09:57:18.451018750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fc7f6d85-wm7kx,Uid:fdb51cd1-c225-4687-92fd-8f11d468f91a,Namespace:calico-system,Attempt:0,}" Sep 6 09:57:18.559481 systemd-networkd[1482]: cali9d34b7f405b: Link UP Sep 6 09:57:18.560284 systemd-networkd[1482]: cali9d34b7f405b: Gained carrier Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.487 [INFO][4939] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0 calico-kube-controllers-84fc7f6d85- calico-system fdb51cd1-c225-4687-92fd-8f11d468f91a 819 0 2025-09-06 09:56:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84fc7f6d85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84fc7f6d85-wm7kx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9d34b7f405b [] [] }} ContainerID="6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" Namespace="calico-system" Pod="calico-kube-controllers-84fc7f6d85-wm7kx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-" Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.487 [INFO][4939] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" Namespace="calico-system" Pod="calico-kube-controllers-84fc7f6d85-wm7kx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0" Sep 6 09:57:18.577747 
containerd[1568]: 2025-09-06 09:57:18.516 [INFO][4955] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" HandleID="k8s-pod-network.6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" Workload="localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0" Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.516 [INFO][4955] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" HandleID="k8s-pod-network.6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" Workload="localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84fc7f6d85-wm7kx", "timestamp":"2025-09-06 09:57:18.516232237 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.516 [INFO][4955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.516 [INFO][4955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.516 [INFO][4955] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.523 [INFO][4955] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" host="localhost" Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.528 [INFO][4955] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.532 [INFO][4955] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.534 [INFO][4955] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.537 [INFO][4955] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.537 [INFO][4955] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" host="localhost" Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.539 [INFO][4955] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.543 [INFO][4955] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" host="localhost" Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.549 [INFO][4955] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" host="localhost" Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.549 [INFO][4955] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" host="localhost" Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.549 [INFO][4955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 09:57:18.577747 containerd[1568]: 2025-09-06 09:57:18.549 [INFO][4955] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" HandleID="k8s-pod-network.6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" Workload="localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0" Sep 6 09:57:18.578679 containerd[1568]: 2025-09-06 09:57:18.556 [INFO][4939] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" Namespace="calico-system" Pod="calico-kube-controllers-84fc7f6d85-wm7kx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0", GenerateName:"calico-kube-controllers-84fc7f6d85-", Namespace:"calico-system", SelfLink:"", UID:"fdb51cd1-c225-4687-92fd-8f11d468f91a", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fc7f6d85", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84fc7f6d85-wm7kx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9d34b7f405b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:18.578679 containerd[1568]: 2025-09-06 09:57:18.556 [INFO][4939] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" Namespace="calico-system" Pod="calico-kube-controllers-84fc7f6d85-wm7kx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0" Sep 6 09:57:18.578679 containerd[1568]: 2025-09-06 09:57:18.556 [INFO][4939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d34b7f405b ContainerID="6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" Namespace="calico-system" Pod="calico-kube-controllers-84fc7f6d85-wm7kx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0" Sep 6 09:57:18.578679 containerd[1568]: 2025-09-06 09:57:18.560 [INFO][4939] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" Namespace="calico-system" Pod="calico-kube-controllers-84fc7f6d85-wm7kx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0" Sep 6 09:57:18.578679 containerd[1568]: 2025-09-06 
09:57:18.561 [INFO][4939] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" Namespace="calico-system" Pod="calico-kube-controllers-84fc7f6d85-wm7kx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0", GenerateName:"calico-kube-controllers-84fc7f6d85-", Namespace:"calico-system", SelfLink:"", UID:"fdb51cd1-c225-4687-92fd-8f11d468f91a", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 9, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fc7f6d85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d", Pod:"calico-kube-controllers-84fc7f6d85-wm7kx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9d34b7f405b", MAC:"8e:b3:21:2a:41:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 09:57:18.578679 containerd[1568]: 2025-09-06 
09:57:18.573 [INFO][4939] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" Namespace="calico-system" Pod="calico-kube-controllers-84fc7f6d85-wm7kx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fc7f6d85--wm7kx-eth0" Sep 6 09:57:18.596056 systemd-networkd[1482]: cali48c148c0d96: Gained IPv6LL Sep 6 09:57:18.610041 containerd[1568]: time="2025-09-06T09:57:18.609982777Z" level=info msg="connecting to shim 6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d" address="unix:///run/containerd/s/52d47ad8d0b8834aba5dd1b5046b6a04a6df70b57a8b9280686912f7469250ca" namespace=k8s.io protocol=ttrpc version=3 Sep 6 09:57:18.644986 systemd[1]: Started cri-containerd-6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d.scope - libcontainer container 6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d. Sep 6 09:57:18.660472 kubelet[2710]: E0906 09:57:18.660443 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:18.664541 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 09:57:18.702470 containerd[1568]: time="2025-09-06T09:57:18.702406316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fc7f6d85-wm7kx,Uid:fdb51cd1-c225-4687-92fd-8f11d468f91a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d\"" Sep 6 09:57:19.461184 containerd[1568]: time="2025-09-06T09:57:19.461129688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:19.461974 containerd[1568]: time="2025-09-06T09:57:19.461937784Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 6 09:57:19.463172 containerd[1568]: time="2025-09-06T09:57:19.463110764Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:19.465086 containerd[1568]: time="2025-09-06T09:57:19.465053271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:19.465689 containerd[1568]: time="2025-09-06T09:57:19.465656666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.699448033s" Sep 6 09:57:19.465721 containerd[1568]: time="2025-09-06T09:57:19.465687901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 6 09:57:19.466682 containerd[1568]: time="2025-09-06T09:57:19.466653743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 6 09:57:19.468456 containerd[1568]: time="2025-09-06T09:57:19.468248884Z" level=info msg="CreateContainer within sandbox \"da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 09:57:19.476804 containerd[1568]: time="2025-09-06T09:57:19.476759982Z" level=info msg="Container 55691449c1ba5961489e3cecf1f6f75af3051c5b334f0a279a8020253c799c2c: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:57:19.518618 containerd[1568]: 
time="2025-09-06T09:57:19.518564983Z" level=info msg="CreateContainer within sandbox \"da640258d7d352da7d48a6db3b9562ecdb19ea4c15207cac3025a06f0273a8a3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"55691449c1ba5961489e3cecf1f6f75af3051c5b334f0a279a8020253c799c2c\"" Sep 6 09:57:19.519388 containerd[1568]: time="2025-09-06T09:57:19.519327106Z" level=info msg="StartContainer for \"55691449c1ba5961489e3cecf1f6f75af3051c5b334f0a279a8020253c799c2c\"" Sep 6 09:57:19.520972 containerd[1568]: time="2025-09-06T09:57:19.520931396Z" level=info msg="connecting to shim 55691449c1ba5961489e3cecf1f6f75af3051c5b334f0a279a8020253c799c2c" address="unix:///run/containerd/s/512ec95e64a76ebb7236b109e3bf30e1901aea3be43d8c158ff95004bc7c0134" protocol=ttrpc version=3 Sep 6 09:57:19.557043 systemd-networkd[1482]: cali83bb6fbe857: Gained IPv6LL Sep 6 09:57:19.587204 systemd[1]: Started cri-containerd-55691449c1ba5961489e3cecf1f6f75af3051c5b334f0a279a8020253c799c2c.scope - libcontainer container 55691449c1ba5961489e3cecf1f6f75af3051c5b334f0a279a8020253c799c2c. Sep 6 09:57:19.715870 containerd[1568]: time="2025-09-06T09:57:19.715655953Z" level=info msg="StartContainer for \"55691449c1ba5961489e3cecf1f6f75af3051c5b334f0a279a8020253c799c2c\" returns successfully" Sep 6 09:57:19.720851 kubelet[2710]: E0906 09:57:19.720804 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 09:57:19.748098 systemd-networkd[1482]: cali9d34b7f405b: Gained IPv6LL Sep 6 09:57:20.169061 systemd[1]: Started sshd@8-10.0.0.40:22-10.0.0.1:46628.service - OpenSSH per-connection server daemon (10.0.0.1:46628). 
Sep 6 09:57:20.243017 sshd[5063]: Accepted publickey for core from 10.0.0.1 port 46628 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM Sep 6 09:57:20.244748 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 09:57:20.249195 systemd-logind[1550]: New session 9 of user core. Sep 6 09:57:20.256956 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 6 09:57:20.392082 sshd[5066]: Connection closed by 10.0.0.1 port 46628 Sep 6 09:57:20.392437 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Sep 6 09:57:20.396320 systemd[1]: sshd@8-10.0.0.40:22-10.0.0.1:46628.service: Deactivated successfully. Sep 6 09:57:20.398487 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 09:57:20.400205 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit. Sep 6 09:57:20.401522 systemd-logind[1550]: Removed session 9. Sep 6 09:57:20.733626 kubelet[2710]: I0906 09:57:20.733530 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4dcd664c-65v6c" podStartSLOduration=26.367819763 podStartE2EDuration="30.733501508s" podCreationTimestamp="2025-09-06 09:56:50 +0000 UTC" firstStartedPulling="2025-09-06 09:57:15.1008564 +0000 UTC m=+40.828375090" lastFinishedPulling="2025-09-06 09:57:19.466538135 +0000 UTC m=+45.194056835" observedRunningTime="2025-09-06 09:57:20.732768268 +0000 UTC m=+46.460286968" watchObservedRunningTime="2025-09-06 09:57:20.733501508 +0000 UTC m=+46.461020208" Sep 6 09:57:21.463723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231680377.mount: Deactivated successfully. 
Sep 6 09:57:23.291214 containerd[1568]: time="2025-09-06T09:57:23.291149638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:23.291868 containerd[1568]: time="2025-09-06T09:57:23.291810621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 6 09:57:23.293035 containerd[1568]: time="2025-09-06T09:57:23.292999837Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:23.295038 containerd[1568]: time="2025-09-06T09:57:23.295010184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:23.295723 containerd[1568]: time="2025-09-06T09:57:23.295684535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.828924614s" Sep 6 09:57:23.295723 containerd[1568]: time="2025-09-06T09:57:23.295717233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 6 09:57:23.299374 containerd[1568]: time="2025-09-06T09:57:23.299347208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 6 09:57:23.302014 containerd[1568]: time="2025-09-06T09:57:23.301966261Z" level=info msg="CreateContainer within sandbox 
\"1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 6 09:57:23.314144 containerd[1568]: time="2025-09-06T09:57:23.314094080Z" level=info msg="Container 931b8a3a6d13ba9963e1a980ae60c7be172525861266b0b8687157c8ac814888: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:57:23.324002 containerd[1568]: time="2025-09-06T09:57:23.323956821Z" level=info msg="CreateContainer within sandbox \"1b0b7a47441e840c9b0e3b30e77c6547d79945a1e072f5c8a6f9be540bd41157\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"931b8a3a6d13ba9963e1a980ae60c7be172525861266b0b8687157c8ac814888\"" Sep 6 09:57:23.324525 containerd[1568]: time="2025-09-06T09:57:23.324483198Z" level=info msg="StartContainer for \"931b8a3a6d13ba9963e1a980ae60c7be172525861266b0b8687157c8ac814888\"" Sep 6 09:57:23.325566 containerd[1568]: time="2025-09-06T09:57:23.325526716Z" level=info msg="connecting to shim 931b8a3a6d13ba9963e1a980ae60c7be172525861266b0b8687157c8ac814888" address="unix:///run/containerd/s/778af35f5b2dd8651a82698897d289027e1d986c307b87c0b0a708ce2af65465" protocol=ttrpc version=3 Sep 6 09:57:23.350200 systemd[1]: Started cri-containerd-931b8a3a6d13ba9963e1a980ae60c7be172525861266b0b8687157c8ac814888.scope - libcontainer container 931b8a3a6d13ba9963e1a980ae60c7be172525861266b0b8687157c8ac814888. 
Sep 6 09:57:23.411759 containerd[1568]: time="2025-09-06T09:57:23.411710509Z" level=info msg="StartContainer for \"931b8a3a6d13ba9963e1a980ae60c7be172525861266b0b8687157c8ac814888\" returns successfully" Sep 6 09:57:23.842764 containerd[1568]: time="2025-09-06T09:57:23.842696667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"931b8a3a6d13ba9963e1a980ae60c7be172525861266b0b8687157c8ac814888\" id:\"90e6180363c323c7f927257638ee925693597bc58184e7cce0bda761f72ac33d\" pid:5153 exit_status:1 exited_at:{seconds:1757152643 nanos:842111117}" Sep 6 09:57:24.844245 containerd[1568]: time="2025-09-06T09:57:24.844150768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"931b8a3a6d13ba9963e1a980ae60c7be172525861266b0b8687157c8ac814888\" id:\"39ac6c6c53320db12469f230aa76de3b3141ab9a3e647feffd5fb8568e361b8f\" pid:5178 exit_status:1 exited_at:{seconds:1757152644 nanos:843423350}" Sep 6 09:57:25.329777 containerd[1568]: time="2025-09-06T09:57:25.329708625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:25.330453 containerd[1568]: time="2025-09-06T09:57:25.330397291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 6 09:57:25.331569 containerd[1568]: time="2025-09-06T09:57:25.331524552Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:25.333596 containerd[1568]: time="2025-09-06T09:57:25.333557955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 09:57:25.334327 containerd[1568]: time="2025-09-06T09:57:25.334293467Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.034912551s" Sep 6 09:57:25.334392 containerd[1568]: time="2025-09-06T09:57:25.334328960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 6 09:57:25.335470 containerd[1568]: time="2025-09-06T09:57:25.335427442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 09:57:25.336952 containerd[1568]: time="2025-09-06T09:57:25.336918758Z" level=info msg="CreateContainer within sandbox \"a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 6 09:57:25.347411 containerd[1568]: time="2025-09-06T09:57:25.347370801Z" level=info msg="Container f71abadb120aef74cc5aa961c1e61c1574487340fc4f03bed8c9d569b0436ba8: CDI devices from CRI Config.CDIDevices: []" Sep 6 09:57:25.358641 containerd[1568]: time="2025-09-06T09:57:25.358606134Z" level=info msg="CreateContainer within sandbox \"a94f7c580c2f5718a520702a3cef0766131a4589cff9ca329a265a6ab1dc8cb3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f71abadb120aef74cc5aa961c1e61c1574487340fc4f03bed8c9d569b0436ba8\"" Sep 6 09:57:25.359195 containerd[1568]: time="2025-09-06T09:57:25.359171648Z" level=info msg="StartContainer for \"f71abadb120aef74cc5aa961c1e61c1574487340fc4f03bed8c9d569b0436ba8\"" Sep 6 09:57:25.360595 containerd[1568]: time="2025-09-06T09:57:25.360572700Z" level=info msg="connecting to shim 
f71abadb120aef74cc5aa961c1e61c1574487340fc4f03bed8c9d569b0436ba8" address="unix:///run/containerd/s/b778a12c2805223d56bd3f98758581d6a61a94094512a49a748515c5adf0c12a" protocol=ttrpc version=3 Sep 6 09:57:25.383997 systemd[1]: Started cri-containerd-f71abadb120aef74cc5aa961c1e61c1574487340fc4f03bed8c9d569b0436ba8.scope - libcontainer container f71abadb120aef74cc5aa961c1e61c1574487340fc4f03bed8c9d569b0436ba8. Sep 6 09:57:25.411034 systemd[1]: Started sshd@9-10.0.0.40:22-10.0.0.1:46634.service - OpenSSH per-connection server daemon (10.0.0.1:46634). Sep 6 09:57:25.480034 containerd[1568]: time="2025-09-06T09:57:25.479917792Z" level=info msg="StartContainer for \"f71abadb120aef74cc5aa961c1e61c1574487340fc4f03bed8c9d569b0436ba8\" returns successfully" Sep 6 09:57:25.518239 kubelet[2710]: I0906 09:57:25.518162 2710 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 6 09:57:25.518239 kubelet[2710]: I0906 09:57:25.518259 2710 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 6 09:57:25.532269 sshd[5215]: Accepted publickey for core from 10.0.0.1 port 46634 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM Sep 6 09:57:25.537286 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 09:57:25.561683 systemd-logind[1550]: New session 10 of user core. Sep 6 09:57:25.568100 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 6 09:57:25.789374 containerd[1568]: time="2025-09-06T09:57:25.789233126Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:57:25.790046 containerd[1568]: time="2025-09-06T09:57:25.789981384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 6 09:57:25.800326 containerd[1568]: time="2025-09-06T09:57:25.800103232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 464.623483ms"
Sep 6 09:57:25.800326 containerd[1568]: time="2025-09-06T09:57:25.800224511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 6 09:57:25.806947 containerd[1568]: time="2025-09-06T09:57:25.806895212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 6 09:57:25.812749 containerd[1568]: time="2025-09-06T09:57:25.812618760Z" level=info msg="CreateContainer within sandbox \"2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 6 09:57:25.840841 containerd[1568]: time="2025-09-06T09:57:25.840067460Z" level=info msg="Container 8758997cac5d22d78ece13ff28bc585e077f00f16e58b5894db2721f65a4358e: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:57:25.852099 kubelet[2710]: I0906 09:57:25.849713 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-66kxk" podStartSLOduration=23.335323345 podStartE2EDuration="33.849682654s" podCreationTimestamp="2025-09-06 09:56:52 +0000 UTC" firstStartedPulling="2025-09-06 09:57:14.820739192 +0000 UTC m=+40.548257892" lastFinishedPulling="2025-09-06 09:57:25.335098501 +0000 UTC m=+51.062617201" observedRunningTime="2025-09-06 09:57:25.849275673 +0000 UTC m=+51.576794373" watchObservedRunningTime="2025-09-06 09:57:25.849682654 +0000 UTC m=+51.577201354"
Sep 6 09:57:25.851865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413787648.mount: Deactivated successfully.
Sep 6 09:57:25.858849 kubelet[2710]: I0906 09:57:25.857502 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-tstlj" podStartSLOduration=27.417004661 podStartE2EDuration="33.857473544s" podCreationTimestamp="2025-09-06 09:56:52 +0000 UTC" firstStartedPulling="2025-09-06 09:57:16.858740933 +0000 UTC m=+42.586259623" lastFinishedPulling="2025-09-06 09:57:23.299209806 +0000 UTC m=+49.026728506" observedRunningTime="2025-09-06 09:57:23.74983189 +0000 UTC m=+49.477350620" watchObservedRunningTime="2025-09-06 09:57:25.857473544 +0000 UTC m=+51.584992244"
Sep 6 09:57:26.098961 sshd[5232]: Connection closed by 10.0.0.1 port 46634
Sep 6 09:57:26.100118 sshd-session[5215]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:26.104361 systemd[1]: sshd@9-10.0.0.40:22-10.0.0.1:46634.service: Deactivated successfully.
Sep 6 09:57:26.107258 systemd[1]: session-10.scope: Deactivated successfully.
Sep 6 09:57:26.107805 containerd[1568]: time="2025-09-06T09:57:26.107759944Z" level=info msg="CreateContainer within sandbox \"2973b79a5d0d7ac7f79d9df4bfdd697abd746b7867b49d7ed9ce863544d4efe4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8758997cac5d22d78ece13ff28bc585e077f00f16e58b5894db2721f65a4358e\""
Sep 6 09:57:26.109630 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit.
Sep 6 09:57:26.110244 containerd[1568]: time="2025-09-06T09:57:26.110041466Z" level=info msg="StartContainer for \"8758997cac5d22d78ece13ff28bc585e077f00f16e58b5894db2721f65a4358e\""
Sep 6 09:57:26.111661 containerd[1568]: time="2025-09-06T09:57:26.111640987Z" level=info msg="connecting to shim 8758997cac5d22d78ece13ff28bc585e077f00f16e58b5894db2721f65a4358e" address="unix:///run/containerd/s/0c43ecbdf6d7bf9b1ff10aa36448f423b3b673cad29f17c1b8459eaff8a63f5f" protocol=ttrpc version=3
Sep 6 09:57:26.112767 systemd-logind[1550]: Removed session 10.
Sep 6 09:57:26.173140 systemd[1]: Started cri-containerd-8758997cac5d22d78ece13ff28bc585e077f00f16e58b5894db2721f65a4358e.scope - libcontainer container 8758997cac5d22d78ece13ff28bc585e077f00f16e58b5894db2721f65a4358e.
Sep 6 09:57:26.234220 containerd[1568]: time="2025-09-06T09:57:26.234157769Z" level=info msg="StartContainer for \"8758997cac5d22d78ece13ff28bc585e077f00f16e58b5894db2721f65a4358e\" returns successfully"
Sep 6 09:57:26.809979 kubelet[2710]: I0906 09:57:26.808793 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4dcd664c-mz26f" podStartSLOduration=28.687085491 podStartE2EDuration="36.808770851s" podCreationTimestamp="2025-09-06 09:56:50 +0000 UTC" firstStartedPulling="2025-09-06 09:57:17.683184246 +0000 UTC m=+43.410702946" lastFinishedPulling="2025-09-06 09:57:25.804869616 +0000 UTC m=+51.532388306" observedRunningTime="2025-09-06 09:57:26.808624372 +0000 UTC m=+52.536143072" watchObservedRunningTime="2025-09-06 09:57:26.808770851 +0000 UTC m=+52.536289541"
Sep 6 09:57:30.235525 containerd[1568]: time="2025-09-06T09:57:30.235452551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:57:30.236652 containerd[1568]: time="2025-09-06T09:57:30.236600741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 6 09:57:30.237909 containerd[1568]: time="2025-09-06T09:57:30.237871651Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:57:30.239868 containerd[1568]: time="2025-09-06T09:57:30.239842972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 09:57:30.240417 containerd[1568]: time="2025-09-06T09:57:30.240376375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.432487885s"
Sep 6 09:57:30.240477 containerd[1568]: time="2025-09-06T09:57:30.240422067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 6 09:57:30.255590 containerd[1568]: time="2025-09-06T09:57:30.255548982Z" level=info msg="CreateContainer within sandbox \"6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 6 09:57:30.270313 containerd[1568]: time="2025-09-06T09:57:30.270269001Z" level=info msg="Container eb4cb2a87c55082c8f9de62362a42e1685b6cc1df617d93f83b4a54f2e2d8f16: CDI devices from CRI Config.CDIDevices: []"
Sep 6 09:57:30.279534 containerd[1568]: time="2025-09-06T09:57:30.279498474Z" level=info msg="CreateContainer within sandbox \"6d841c0ee3b074b58bb160957a6844647393b1228f2804e1e85feb27dca1aa6d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"eb4cb2a87c55082c8f9de62362a42e1685b6cc1df617d93f83b4a54f2e2d8f16\""
Sep 6 09:57:30.280060 containerd[1568]: time="2025-09-06T09:57:30.280021536Z" level=info msg="StartContainer for \"eb4cb2a87c55082c8f9de62362a42e1685b6cc1df617d93f83b4a54f2e2d8f16\""
Sep 6 09:57:30.281298 containerd[1568]: time="2025-09-06T09:57:30.281266673Z" level=info msg="connecting to shim eb4cb2a87c55082c8f9de62362a42e1685b6cc1df617d93f83b4a54f2e2d8f16" address="unix:///run/containerd/s/52d47ad8d0b8834aba5dd1b5046b6a04a6df70b57a8b9280686912f7469250ca" protocol=ttrpc version=3
Sep 6 09:57:30.308964 systemd[1]: Started cri-containerd-eb4cb2a87c55082c8f9de62362a42e1685b6cc1df617d93f83b4a54f2e2d8f16.scope - libcontainer container eb4cb2a87c55082c8f9de62362a42e1685b6cc1df617d93f83b4a54f2e2d8f16.
Sep 6 09:57:30.362104 containerd[1568]: time="2025-09-06T09:57:30.361974132Z" level=info msg="StartContainer for \"eb4cb2a87c55082c8f9de62362a42e1685b6cc1df617d93f83b4a54f2e2d8f16\" returns successfully"
Sep 6 09:57:30.785591 kubelet[2710]: I0906 09:57:30.785527 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84fc7f6d85-wm7kx" podStartSLOduration=26.248660404 podStartE2EDuration="37.785502995s" podCreationTimestamp="2025-09-06 09:56:53 +0000 UTC" firstStartedPulling="2025-09-06 09:57:18.704256951 +0000 UTC m=+44.431775651" lastFinishedPulling="2025-09-06 09:57:30.241099542 +0000 UTC m=+55.968618242" observedRunningTime="2025-09-06 09:57:30.783518547 +0000 UTC m=+56.511037247" watchObservedRunningTime="2025-09-06 09:57:30.785502995 +0000 UTC m=+56.513021696"
Sep 6 09:57:30.825383 containerd[1568]: time="2025-09-06T09:57:30.825332971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb4cb2a87c55082c8f9de62362a42e1685b6cc1df617d93f83b4a54f2e2d8f16\" id:\"75216dacafae552c907102b736fb484eeb8c386896422cc85320091ce8578b7c\" pid:5356 exited_at:{seconds:1757152650 nanos:824847807}"
Sep 6 09:57:31.126241 systemd[1]: Started sshd@10-10.0.0.40:22-10.0.0.1:40998.service - OpenSSH per-connection server daemon (10.0.0.1:40998).
Sep 6 09:57:31.185486 sshd[5367]: Accepted publickey for core from 10.0.0.1 port 40998 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:31.188210 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:31.195114 systemd-logind[1550]: New session 11 of user core.
Sep 6 09:57:31.205026 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 6 09:57:31.441541 sshd[5370]: Connection closed by 10.0.0.1 port 40998
Sep 6 09:57:31.441853 sshd-session[5367]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:31.454810 systemd[1]: sshd@10-10.0.0.40:22-10.0.0.1:40998.service: Deactivated successfully.
Sep 6 09:57:31.456914 systemd[1]: session-11.scope: Deactivated successfully.
Sep 6 09:57:31.457655 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit.
Sep 6 09:57:31.461381 systemd[1]: Started sshd@11-10.0.0.40:22-10.0.0.1:41000.service - OpenSSH per-connection server daemon (10.0.0.1:41000).
Sep 6 09:57:31.462062 systemd-logind[1550]: Removed session 11.
Sep 6 09:57:31.519680 sshd[5385]: Accepted publickey for core from 10.0.0.1 port 41000 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:31.521870 sshd-session[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:31.526514 systemd-logind[1550]: New session 12 of user core.
Sep 6 09:57:31.541977 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 6 09:57:31.744487 sshd[5388]: Connection closed by 10.0.0.1 port 41000
Sep 6 09:57:31.746562 sshd-session[5385]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:31.755711 systemd[1]: sshd@11-10.0.0.40:22-10.0.0.1:41000.service: Deactivated successfully.
Sep 6 09:57:31.757939 systemd[1]: session-12.scope: Deactivated successfully.
Sep 6 09:57:31.758841 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit.
Sep 6 09:57:31.762941 systemd[1]: Started sshd@12-10.0.0.40:22-10.0.0.1:41008.service - OpenSSH per-connection server daemon (10.0.0.1:41008).
Sep 6 09:57:31.763919 systemd-logind[1550]: Removed session 12.
Sep 6 09:57:31.821526 sshd[5400]: Accepted publickey for core from 10.0.0.1 port 41008 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:31.823457 sshd-session[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:31.828124 systemd-logind[1550]: New session 13 of user core.
Sep 6 09:57:31.839970 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 6 09:57:31.957187 sshd[5403]: Connection closed by 10.0.0.1 port 41008
Sep 6 09:57:31.957511 sshd-session[5400]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:31.962197 systemd[1]: sshd@12-10.0.0.40:22-10.0.0.1:41008.service: Deactivated successfully.
Sep 6 09:57:31.964175 systemd[1]: session-13.scope: Deactivated successfully.
Sep 6 09:57:31.964899 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit.
Sep 6 09:57:31.965956 systemd-logind[1550]: Removed session 13.
Sep 6 09:57:36.975090 systemd[1]: Started sshd@13-10.0.0.40:22-10.0.0.1:41022.service - OpenSSH per-connection server daemon (10.0.0.1:41022).
Sep 6 09:57:37.033491 sshd[5423]: Accepted publickey for core from 10.0.0.1 port 41022 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:37.035386 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:37.040141 systemd-logind[1550]: New session 14 of user core.
Sep 6 09:57:37.049997 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 6 09:57:37.166297 sshd[5426]: Connection closed by 10.0.0.1 port 41022
Sep 6 09:57:37.166651 sshd-session[5423]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:37.172060 systemd[1]: sshd@13-10.0.0.40:22-10.0.0.1:41022.service: Deactivated successfully.
Sep 6 09:57:37.174332 systemd[1]: session-14.scope: Deactivated successfully.
Sep 6 09:57:37.175235 systemd-logind[1550]: Session 14 logged out. Waiting for processes to exit.
Sep 6 09:57:37.176535 systemd-logind[1550]: Removed session 14.
Sep 6 09:57:40.715620 containerd[1568]: time="2025-09-06T09:57:40.715565745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2af587bf88a66c3d5461c3c18a5d4c50067f2c5a4a138e3b2e808985bdef803e\" id:\"3a05e04341c3a289ac8d1f7fad49e684290048a0e832b248c8cf523bba53cc00\" pid:5453 exit_status:1 exited_at:{seconds:1757152660 nanos:715171371}"
Sep 6 09:57:42.179606 systemd[1]: Started sshd@14-10.0.0.40:22-10.0.0.1:56004.service - OpenSSH per-connection server daemon (10.0.0.1:56004).
Sep 6 09:57:42.252845 sshd[5470]: Accepted publickey for core from 10.0.0.1 port 56004 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:42.254407 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:42.258867 systemd-logind[1550]: New session 15 of user core.
Sep 6 09:57:42.268998 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 6 09:57:42.423033 sshd[5473]: Connection closed by 10.0.0.1 port 56004
Sep 6 09:57:42.423406 sshd-session[5470]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:42.427976 systemd[1]: sshd@14-10.0.0.40:22-10.0.0.1:56004.service: Deactivated successfully.
Sep 6 09:57:42.429991 systemd[1]: session-15.scope: Deactivated successfully.
Sep 6 09:57:42.430977 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit.
Sep 6 09:57:42.432253 systemd-logind[1550]: Removed session 15.
Sep 6 09:57:47.438002 systemd[1]: Started sshd@15-10.0.0.40:22-10.0.0.1:56008.service - OpenSSH per-connection server daemon (10.0.0.1:56008).
Sep 6 09:57:47.502169 sshd[5488]: Accepted publickey for core from 10.0.0.1 port 56008 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:47.503609 sshd-session[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:47.509579 systemd-logind[1550]: New session 16 of user core.
Sep 6 09:57:47.516972 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 6 09:57:47.628155 sshd[5491]: Connection closed by 10.0.0.1 port 56008
Sep 6 09:57:47.628526 sshd-session[5488]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:47.632905 systemd[1]: sshd@15-10.0.0.40:22-10.0.0.1:56008.service: Deactivated successfully.
Sep 6 09:57:47.634967 systemd[1]: session-16.scope: Deactivated successfully.
Sep 6 09:57:47.635963 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit.
Sep 6 09:57:47.637072 systemd-logind[1550]: Removed session 16.
Sep 6 09:57:51.448222 kubelet[2710]: E0906 09:57:51.448167 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:57:52.645069 systemd[1]: Started sshd@16-10.0.0.40:22-10.0.0.1:42126.service - OpenSSH per-connection server daemon (10.0.0.1:42126).
Sep 6 09:57:52.703351 sshd[5512]: Accepted publickey for core from 10.0.0.1 port 42126 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:52.705422 sshd-session[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:52.710322 systemd-logind[1550]: New session 17 of user core.
Sep 6 09:57:52.717012 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 6 09:57:52.843544 sshd[5515]: Connection closed by 10.0.0.1 port 42126
Sep 6 09:57:52.844005 sshd-session[5512]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:52.849767 systemd[1]: sshd@16-10.0.0.40:22-10.0.0.1:42126.service: Deactivated successfully.
Sep 6 09:57:52.852599 systemd[1]: session-17.scope: Deactivated successfully.
Sep 6 09:57:52.853533 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit.
Sep 6 09:57:52.855210 systemd-logind[1550]: Removed session 17.
Sep 6 09:57:54.843494 containerd[1568]: time="2025-09-06T09:57:54.843435342Z" level=info msg="TaskExit event in podsandbox handler container_id:\"931b8a3a6d13ba9963e1a980ae60c7be172525861266b0b8687157c8ac814888\" id:\"034f7db69ffe8aff6627ae5f69ea2cd6eb454f310d2e5d3fe3ab335599dd1387\" pid:5539 exited_at:{seconds:1757152674 nanos:843010334}"
Sep 6 09:57:56.448856 kubelet[2710]: E0906 09:57:56.448172 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:57:57.861688 systemd[1]: Started sshd@17-10.0.0.40:22-10.0.0.1:42142.service - OpenSSH per-connection server daemon (10.0.0.1:42142).
Sep 6 09:57:57.938494 sshd[5554]: Accepted publickey for core from 10.0.0.1 port 42142 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:57.940313 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:57.946439 systemd-logind[1550]: New session 18 of user core.
Sep 6 09:57:57.956952 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 6 09:57:58.131422 sshd[5557]: Connection closed by 10.0.0.1 port 42142
Sep 6 09:57:58.131935 sshd-session[5554]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:58.142537 systemd[1]: sshd@17-10.0.0.40:22-10.0.0.1:42142.service: Deactivated successfully.
Sep 6 09:57:58.144408 systemd[1]: session-18.scope: Deactivated successfully.
Sep 6 09:57:58.145178 systemd-logind[1550]: Session 18 logged out. Waiting for processes to exit.
Sep 6 09:57:58.147697 systemd[1]: Started sshd@18-10.0.0.40:22-10.0.0.1:42144.service - OpenSSH per-connection server daemon (10.0.0.1:42144).
Sep 6 09:57:58.148533 systemd-logind[1550]: Removed session 18.
Sep 6 09:57:58.200249 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 42144 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:58.201605 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:58.205840 systemd-logind[1550]: New session 19 of user core.
Sep 6 09:57:58.217996 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 6 09:57:58.368491 containerd[1568]: time="2025-09-06T09:57:58.368451777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"931b8a3a6d13ba9963e1a980ae60c7be172525861266b0b8687157c8ac814888\" id:\"a698fe9dcbba8e3c3b107024b9954a85969d0362f7504385e6a3ff22505c2348\" pid:5587 exited_at:{seconds:1757152678 nanos:368155178}"
Sep 6 09:57:58.447816 kubelet[2710]: E0906 09:57:58.447698 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:57:58.520171 sshd[5573]: Connection closed by 10.0.0.1 port 42144
Sep 6 09:57:58.520885 sshd-session[5570]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:58.529356 systemd[1]: sshd@18-10.0.0.40:22-10.0.0.1:42144.service: Deactivated successfully.
Sep 6 09:57:58.531180 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 09:57:58.531880 systemd-logind[1550]: Session 19 logged out. Waiting for processes to exit.
Sep 6 09:57:58.534326 systemd[1]: Started sshd@19-10.0.0.40:22-10.0.0.1:42152.service - OpenSSH per-connection server daemon (10.0.0.1:42152).
Sep 6 09:57:58.535198 systemd-logind[1550]: Removed session 19.
Sep 6 09:57:58.599025 sshd[5608]: Accepted publickey for core from 10.0.0.1 port 42152 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:58.600636 sshd-session[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:58.606971 systemd-logind[1550]: New session 20 of user core.
Sep 6 09:57:58.617120 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 6 09:57:59.212327 sshd[5613]: Connection closed by 10.0.0.1 port 42152
Sep 6 09:57:59.213061 sshd-session[5608]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:59.223276 systemd[1]: sshd@19-10.0.0.40:22-10.0.0.1:42152.service: Deactivated successfully.
Sep 6 09:57:59.226065 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 09:57:59.227398 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit.
Sep 6 09:57:59.236500 systemd[1]: Started sshd@20-10.0.0.40:22-10.0.0.1:42154.service - OpenSSH per-connection server daemon (10.0.0.1:42154).
Sep 6 09:57:59.237468 systemd-logind[1550]: Removed session 20.
Sep 6 09:57:59.296442 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 42154 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:59.297678 sshd-session[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:59.302469 systemd-logind[1550]: New session 21 of user core.
Sep 6 09:57:59.311985 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 6 09:57:59.738536 sshd[5649]: Connection closed by 10.0.0.1 port 42154
Sep 6 09:57:59.739033 sshd-session[5646]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:59.749620 systemd[1]: sshd@20-10.0.0.40:22-10.0.0.1:42154.service: Deactivated successfully.
Sep 6 09:57:59.751949 systemd[1]: session-21.scope: Deactivated successfully.
Sep 6 09:57:59.754046 systemd-logind[1550]: Session 21 logged out. Waiting for processes to exit.
Sep 6 09:57:59.757574 systemd[1]: Started sshd@21-10.0.0.40:22-10.0.0.1:42168.service - OpenSSH per-connection server daemon (10.0.0.1:42168).
Sep 6 09:57:59.760516 systemd-logind[1550]: Removed session 21.
Sep 6 09:57:59.819336 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 42168 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:57:59.820875 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:57:59.825605 systemd-logind[1550]: New session 22 of user core.
Sep 6 09:57:59.835021 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 6 09:57:59.951881 sshd[5664]: Connection closed by 10.0.0.1 port 42168
Sep 6 09:57:59.952171 sshd-session[5661]: pam_unix(sshd:session): session closed for user core
Sep 6 09:57:59.956951 systemd[1]: sshd@21-10.0.0.40:22-10.0.0.1:42168.service: Deactivated successfully.
Sep 6 09:57:59.959089 systemd[1]: session-22.scope: Deactivated successfully.
Sep 6 09:57:59.959948 systemd-logind[1550]: Session 22 logged out. Waiting for processes to exit.
Sep 6 09:57:59.961195 systemd-logind[1550]: Removed session 22.
Sep 6 09:58:00.844581 containerd[1568]: time="2025-09-06T09:58:00.844538969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb4cb2a87c55082c8f9de62362a42e1685b6cc1df617d93f83b4a54f2e2d8f16\" id:\"5f39c28b5889a3e30b5699946c1a546c07029a0758551d2a91968eb36d739b42\" pid:5688 exited_at:{seconds:1757152680 nanos:844356030}"
Sep 6 09:58:04.965813 systemd[1]: Started sshd@22-10.0.0.40:22-10.0.0.1:45582.service - OpenSSH per-connection server daemon (10.0.0.1:45582).
Sep 6 09:58:05.014028 sshd[5702]: Accepted publickey for core from 10.0.0.1 port 45582 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:58:05.015297 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:58:05.019542 systemd-logind[1550]: New session 23 of user core.
Sep 6 09:58:05.033945 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 6 09:58:05.144982 sshd[5705]: Connection closed by 10.0.0.1 port 45582
Sep 6 09:58:05.145360 sshd-session[5702]: pam_unix(sshd:session): session closed for user core
Sep 6 09:58:05.149881 systemd[1]: sshd@22-10.0.0.40:22-10.0.0.1:45582.service: Deactivated successfully.
Sep 6 09:58:05.151966 systemd[1]: session-23.scope: Deactivated successfully.
Sep 6 09:58:05.152662 systemd-logind[1550]: Session 23 logged out. Waiting for processes to exit.
Sep 6 09:58:05.153772 systemd-logind[1550]: Removed session 23.
Sep 6 09:58:05.448024 kubelet[2710]: E0906 09:58:05.447963 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 09:58:10.163538 systemd[1]: Started sshd@23-10.0.0.40:22-10.0.0.1:52010.service - OpenSSH per-connection server daemon (10.0.0.1:52010).
Sep 6 09:58:10.214379 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 52010 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:58:10.216045 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:58:10.220367 systemd-logind[1550]: New session 24 of user core.
Sep 6 09:58:10.231028 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 6 09:58:10.353833 sshd[5721]: Connection closed by 10.0.0.1 port 52010
Sep 6 09:58:10.355106 sshd-session[5718]: pam_unix(sshd:session): session closed for user core
Sep 6 09:58:10.359798 systemd[1]: sshd@23-10.0.0.40:22-10.0.0.1:52010.service: Deactivated successfully.
Sep 6 09:58:10.362271 systemd[1]: session-24.scope: Deactivated successfully.
Sep 6 09:58:10.363201 systemd-logind[1550]: Session 24 logged out. Waiting for processes to exit.
Sep 6 09:58:10.364647 systemd-logind[1550]: Removed session 24.
Sep 6 09:58:10.663943 containerd[1568]: time="2025-09-06T09:58:10.663895453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2af587bf88a66c3d5461c3c18a5d4c50067f2c5a4a138e3b2e808985bdef803e\" id:\"75d6295e60fdce9473f4539de14f273154bcfa27c17347bb483a272d066a7d5d\" pid:5746 exited_at:{seconds:1757152690 nanos:663480589}"
Sep 6 09:58:12.586888 containerd[1568]: time="2025-09-06T09:58:12.586844460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb4cb2a87c55082c8f9de62362a42e1685b6cc1df617d93f83b4a54f2e2d8f16\" id:\"73386d4fe5967dc5c7267c4f8c21d02345b272f4579c55e8350df8f19378f290\" pid:5773 exited_at:{seconds:1757152692 nanos:586462546}"
Sep 6 09:58:15.376115 systemd[1]: Started sshd@24-10.0.0.40:22-10.0.0.1:52012.service - OpenSSH per-connection server daemon (10.0.0.1:52012).
Sep 6 09:58:15.449748 sshd[5785]: Accepted publickey for core from 10.0.0.1 port 52012 ssh2: RSA SHA256:RP0lRXGXzH090yOxzY7O6pG5YAyAkwYHkc0+TZI0kSM
Sep 6 09:58:15.451549 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 09:58:15.455994 systemd-logind[1550]: New session 25 of user core.
Sep 6 09:58:15.466008 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 6 09:58:15.788907 sshd[5788]: Connection closed by 10.0.0.1 port 52012
Sep 6 09:58:15.789364 sshd-session[5785]: pam_unix(sshd:session): session closed for user core
Sep 6 09:58:15.794165 systemd[1]: sshd@24-10.0.0.40:22-10.0.0.1:52012.service: Deactivated successfully.
Sep 6 09:58:15.796276 systemd[1]: session-25.scope: Deactivated successfully.
Sep 6 09:58:15.797921 systemd-logind[1550]: Session 25 logged out. Waiting for processes to exit.
Sep 6 09:58:15.799828 systemd-logind[1550]: Removed session 25.