Sep 9 00:37:27.856173 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:13:49 -00 2025
Sep 9 00:37:27.856201 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a
Sep 9 00:37:27.856212 kernel: BIOS-provided physical RAM map:
Sep 9 00:37:27.856221 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 9 00:37:27.856229 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 9 00:37:27.856238 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 9 00:37:27.856248 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 9 00:37:27.856257 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 9 00:37:27.856271 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 9 00:37:27.856280 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 9 00:37:27.856289 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:37:27.856297 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 9 00:37:27.856306 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:37:27.856315 kernel: NX (Execute Disable) protection: active
Sep 9 00:37:27.856327 kernel: APIC: Static calls initialized
Sep 9 00:37:27.856337 kernel: SMBIOS 2.8 present.
Sep 9 00:37:27.856349 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 9 00:37:27.856359 kernel: DMI: Memory slots populated: 1/1
Sep 9 00:37:27.856368 kernel: Hypervisor detected: KVM
Sep 9 00:37:27.856377 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 00:37:27.856386 kernel: kvm-clock: using sched offset of 6114632989 cycles
Sep 9 00:37:27.856396 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 00:37:27.856405 kernel: tsc: Detected 2794.748 MHz processor
Sep 9 00:37:27.856415 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 00:37:27.856428 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 00:37:27.856437 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 9 00:37:27.856447 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 9 00:37:27.856456 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 00:37:27.856466 kernel: Using GB pages for direct mapping
Sep 9 00:37:27.856475 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:37:27.856485 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 9 00:37:27.856495 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:37:27.856507 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:37:27.856517 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:37:27.856526 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 9 00:37:27.856536 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:37:27.856545 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:37:27.856555 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:37:27.856566 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:37:27.856577 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 9 00:37:27.856604 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 9 00:37:27.856613 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 9 00:37:27.856623 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 9 00:37:27.856633 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 9 00:37:27.856643 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 9 00:37:27.856653 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 9 00:37:27.856665 kernel: No NUMA configuration found
Sep 9 00:37:27.856675 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 9 00:37:27.856685 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 9 00:37:27.856694 kernel: Zone ranges:
Sep 9 00:37:27.856704 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 00:37:27.856714 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 9 00:37:27.856724 kernel: Normal empty
Sep 9 00:37:27.856734 kernel: Device empty
Sep 9 00:37:27.856743 kernel: Movable zone start for each node
Sep 9 00:37:27.856753 kernel: Early memory node ranges
Sep 9 00:37:27.856780 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 9 00:37:27.856790 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 9 00:37:27.856800 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 9 00:37:27.856810 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:37:27.856820 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 9 00:37:27.856830 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 9 00:37:27.856839 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 00:37:27.856852 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 00:37:27.856862 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 00:37:27.856875 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 00:37:27.856885 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 00:37:27.856897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 00:37:27.856907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 00:37:27.856916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 00:37:27.856926 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 00:37:27.856936 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 00:37:27.856945 kernel: TSC deadline timer available
Sep 9 00:37:27.856955 kernel: CPU topo: Max. logical packages: 1
Sep 9 00:37:27.856967 kernel: CPU topo: Max. logical dies: 1
Sep 9 00:37:27.856977 kernel: CPU topo: Max. dies per package: 1
Sep 9 00:37:27.856987 kernel: CPU topo: Max. threads per core: 1
Sep 9 00:37:27.856996 kernel: CPU topo: Num. cores per package: 4
Sep 9 00:37:27.857006 kernel: CPU topo: Num. threads per package: 4
Sep 9 00:37:27.857016 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 00:37:27.857026 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 00:37:27.857036 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 00:37:27.857046 kernel: kvm-guest: setup PV sched yield
Sep 9 00:37:27.857055 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 9 00:37:27.857068 kernel: Booting paravirtualized kernel on KVM
Sep 9 00:37:27.857078 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 00:37:27.857088 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 00:37:27.857098 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 00:37:27.857108 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 00:37:27.857118 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 00:37:27.857128 kernel: kvm-guest: PV spinlocks enabled
Sep 9 00:37:27.857138 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 00:37:27.857149 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a
Sep 9 00:37:27.857162 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:37:27.857172 kernel: random: crng init done
Sep 9 00:37:27.857182 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:37:27.857192 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:37:27.857212 kernel: Fallback order for Node 0: 0
Sep 9 00:37:27.857223 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 9 00:37:27.857242 kernel: Policy zone: DMA32
Sep 9 00:37:27.857261 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:37:27.857290 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:37:27.857310 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 00:37:27.857320 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 00:37:27.857331 kernel: Dynamic Preempt: voluntary
Sep 9 00:37:27.857341 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:37:27.857357 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:37:27.857376 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:37:27.857403 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:37:27.857418 kernel: Rude variant of Tasks RCU enabled.
Sep 9 00:37:27.857432 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:37:27.857443 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:37:27.857454 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:37:27.857465 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:37:27.857476 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:37:27.857486 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:37:27.857497 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 00:37:27.857508 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:37:27.857529 kernel: Console: colour VGA+ 80x25
Sep 9 00:37:27.857540 kernel: printk: legacy console [ttyS0] enabled
Sep 9 00:37:27.857551 kernel: ACPI: Core revision 20240827
Sep 9 00:37:27.857562 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 00:37:27.857576 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 00:37:27.857596 kernel: x2apic enabled
Sep 9 00:37:27.857607 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 00:37:27.857623 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 00:37:27.857635 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 00:37:27.857651 kernel: kvm-guest: setup PV IPIs
Sep 9 00:37:27.857663 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 00:37:27.857676 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:37:27.857687 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 9 00:37:27.857699 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 00:37:27.857710 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 00:37:27.857721 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 00:37:27.857732 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 00:37:27.857743 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 00:37:27.857774 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 00:37:27.857786 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 00:37:27.857797 kernel: active return thunk: retbleed_return_thunk
Sep 9 00:37:27.857808 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 00:37:27.857819 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 00:37:27.857830 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 00:37:27.857841 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 00:37:27.857853 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 00:37:27.857868 kernel: active return thunk: srso_return_thunk
Sep 9 00:37:27.857879 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 00:37:27.857891 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 00:37:27.857902 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 00:37:27.857913 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 00:37:27.857924 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 00:37:27.857935 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 00:37:27.857946 kernel: Freeing SMP alternatives memory: 32K
Sep 9 00:37:27.857957 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:37:27.857970 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 00:37:27.857982 kernel: landlock: Up and running.
Sep 9 00:37:27.857992 kernel: SELinux: Initializing.
Sep 9 00:37:27.858007 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:37:27.858018 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:37:27.858029 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 00:37:27.858041 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 00:37:27.858052 kernel: ... version: 0
Sep 9 00:37:27.858063 kernel: ... bit width: 48
Sep 9 00:37:27.858076 kernel: ... generic registers: 6
Sep 9 00:37:27.858087 kernel: ... value mask: 0000ffffffffffff
Sep 9 00:37:27.858098 kernel: ... max period: 00007fffffffffff
Sep 9 00:37:27.858109 kernel: ... fixed-purpose events: 0
Sep 9 00:37:27.858120 kernel: ... event mask: 000000000000003f
Sep 9 00:37:27.858131 kernel: signal: max sigframe size: 1776
Sep 9 00:37:27.858142 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:37:27.858153 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:37:27.858164 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 00:37:27.858178 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:37:27.858189 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 00:37:27.858200 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 00:37:27.858211 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:37:27.858222 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 9 00:37:27.858234 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2428K rwdata, 9960K rodata, 54036K init, 2932K bss, 136904K reserved, 0K cma-reserved)
Sep 9 00:37:27.858245 kernel: devtmpfs: initialized
Sep 9 00:37:27.858256 kernel: x86/mm: Memory block size: 128MB
Sep 9 00:37:27.858268 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:37:27.858281 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:37:27.858292 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:37:27.858306 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:37:27.858317 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:37:27.858329 kernel: audit: type=2000 audit(1757378244.273:1): state=initialized audit_enabled=0 res=1
Sep 9 00:37:27.858340 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:37:27.858351 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 00:37:27.858362 kernel: cpuidle: using governor menu
Sep 9 00:37:27.858373 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:37:27.858387 kernel: dca service started, version 1.12.1
Sep 9 00:37:27.858398 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 9 00:37:27.858409 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 9 00:37:27.858420 kernel: PCI: Using configuration type 1 for base access
Sep 9 00:37:27.858432 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
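The memory accounting reported above can be cross-checked against the e820 map: the two usable node 0 ranges account exactly for the "Total pages: 642938" and the 2571752K total in the "Memory:" line. A minimal sketch in Python (the ranges are copied from the "Early memory node ranges" entries above; 4 KiB is the x86-64 base page size):

# Verify "Total pages: 642938" and "Memory: 2428916K/2571752K" from the
# usable ranges reported in the e820 / early memory node output above.
PAGE_KIB = 4  # x86-64 base page size is 4 KiB

ranges = [
    (0x0000000000001000, 0x000000000009efff),  # node 0, low usable range
    (0x0000000000100000, 0x000000009cfdbfff),  # node 0, main usable range
]

pages = sum((end + 1 - start) // 4096 for start, end in ranges)
print(pages)             # 642938, matching "Total pages: 642938"
print(pages * PAGE_KIB)  # 2571752, matching the "Memory: .../2571752K" line

The smaller first figure in the "Memory:" line (2428916K) is what remains after the kernel's own code, data, and reserved regions listed in that entry are subtracted.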
Sep 9 00:37:27.858443 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:37:27.858454 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:37:27.858465 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:37:27.858476 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:37:27.858489 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:37:27.858500 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:37:27.858511 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:37:27.858522 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:37:27.858534 kernel: ACPI: Interpreter enabled
Sep 9 00:37:27.858545 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 00:37:27.858556 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 00:37:27.858567 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 00:37:27.858587 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 00:37:27.858601 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 00:37:27.858613 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:37:27.858918 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:37:27.859131 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 00:37:27.859321 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 00:37:27.859338 kernel: PCI host bridge to bus 0000:00
Sep 9 00:37:27.859509 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 00:37:27.859673 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 00:37:27.859840 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 00:37:27.859982 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 9 00:37:27.860120 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 9 00:37:27.860255 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 9 00:37:27.860402 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:37:27.860621 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 00:37:27.860855 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 00:37:27.861048 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 9 00:37:27.861252 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 9 00:37:27.861411 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 9 00:37:27.861564 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 00:37:27.861752 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 00:37:27.861947 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 9 00:37:27.862103 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 9 00:37:27.862253 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 9 00:37:27.862468 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 00:37:27.862644 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 9 00:37:27.862821 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 9 00:37:27.862980 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 9 00:37:27.863165 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 00:37:27.863320 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 9 00:37:27.863473 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 9 00:37:27.863637 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 9 00:37:27.863894 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 9 00:37:27.864081 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 00:37:27.864235 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 00:37:27.864418 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 00:37:27.864572 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 9 00:37:27.864741 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 9 00:37:27.864947 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 00:37:27.865104 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 9 00:37:27.865120 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 00:37:27.865136 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 00:37:27.865148 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 00:37:27.865159 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 00:37:27.865170 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 00:37:27.865181 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 00:37:27.865192 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 00:37:27.865203 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 00:37:27.865215 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 00:37:27.865226 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 00:37:27.865239 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 00:37:27.865251 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 00:37:27.865261 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 00:37:27.865272 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 00:37:27.865284 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 00:37:27.865295 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 00:37:27.865306 kernel: iommu: Default domain type: Translated
Sep 9 00:37:27.865317 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 00:37:27.865328 kernel: PCI: Using ACPI for IRQ routing
Sep 9 00:37:27.865342 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 00:37:27.865353 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 9 00:37:27.865364 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 9 00:37:27.865519 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 00:37:27.865686 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 00:37:27.865890 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 00:37:27.865908 kernel: vgaarb: loaded
Sep 9 00:37:27.865919 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 00:37:27.865931 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 00:37:27.865947 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 00:37:27.865958 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:37:27.865969 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:37:27.865981 kernel: pnp: PnP ACPI init
Sep 9 00:37:27.866160 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 9 00:37:27.866177 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 00:37:27.866189 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 00:37:27.866200 kernel: NET: Registered PF_INET protocol family
Sep 9 00:37:27.866215 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:37:27.866226 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:37:27.866237 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:37:27.866249 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:37:27.866260 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:37:27.866271 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:37:27.866283 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:37:27.866294 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:37:27.866309 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:37:27.866320 kernel: NET: Registered PF_XDP protocol family
Sep 9 00:37:27.866473 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 00:37:27.866631 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 00:37:27.866797 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 00:37:27.866955 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 9 00:37:27.867099 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 9 00:37:27.867241 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 9 00:37:27.867257 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:37:27.867274 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:37:27.867285 kernel: Initialise system trusted keyrings
Sep 9 00:37:27.867296 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:37:27.867307 kernel: Key type asymmetric registered
Sep 9 00:37:27.867317 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:37:27.867328 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 00:37:27.867339 kernel: io scheduler mq-deadline registered
Sep 9 00:37:27.867349 kernel: io scheduler kyber registered
Sep 9 00:37:27.867360 kernel: io scheduler bfq registered
Sep 9 00:37:27.867375 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 00:37:27.867386 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 00:37:27.867397 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 00:37:27.867408 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 00:37:27.867418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:37:27.867429 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 00:37:27.867440 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 00:37:27.867450 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 00:37:27.867461 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 00:37:27.867475 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 00:37:27.867660 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 00:37:27.867832 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 00:37:27.867989 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:37:27 UTC (1757378247)
Sep 9 00:37:27.868143 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 9 00:37:27.868159 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 9 00:37:27.868170 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:37:27.868181 kernel: Segment Routing with IPv6
Sep 9 00:37:27.868198 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:37:27.868209 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:37:27.868220 kernel: Key type dns_resolver registered
Sep 9 00:37:27.868231 kernel: IPI shorthand broadcast: enabled
Sep 9 00:37:27.868241 kernel: sched_clock: Marking stable (2919001974, 110616272)->(3201373608, -171755362)
Sep 9 00:37:27.868252 kernel: registered taskstats version 1
Sep 9 00:37:27.868263 kernel: Loading compiled-in X.509 certificates
Sep 9 00:37:27.868274 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: f610abecf8d2943295243a86f7aa958542b6f677'
Sep 9 00:37:27.868285 kernel: Demotion targets for Node 0: null
Sep 9 00:37:27.868299 kernel: Key type .fscrypt registered
Sep 9 00:37:27.868309 kernel: Key type fscrypt-provisioning registered
Sep 9 00:37:27.868320 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:37:27.868332 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:37:27.868343 kernel: ima: No architecture policies found
Sep 9 00:37:27.868353 kernel: clk: Disabling unused clocks
Sep 9 00:37:27.868364 kernel: Warning: unable to open an initial console.
Sep 9 00:37:27.868376 kernel: Freeing unused kernel image (initmem) memory: 54036K
Sep 9 00:37:27.868390 kernel: Write protecting the kernel read-only data: 24576k
Sep 9 00:37:27.868400 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 9 00:37:27.868411 kernel: Run /init as init process
Sep 9 00:37:27.868421 kernel: with arguments:
Sep 9 00:37:27.868432 kernel: /init
Sep 9 00:37:27.868442 kernel: with environment:
Sep 9 00:37:27.868453 kernel: HOME=/
Sep 9 00:37:27.868463 kernel: TERM=linux
Sep 9 00:37:27.868474 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:37:27.868487 systemd[1]: Successfully made /usr/ read-only.
Sep 9 00:37:27.868517 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 00:37:27.868533 systemd[1]: Detected virtualization kvm.
Sep 9 00:37:27.868545 systemd[1]: Detected architecture x86-64.
Sep 9 00:37:27.868556 systemd[1]: Running in initrd.
Sep 9 00:37:27.868568 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:37:27.868594 systemd[1]: Hostname set to .
Sep 9 00:37:27.868606 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:37:27.868618 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:37:27.868631 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:37:27.868642 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:37:27.868655 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:37:27.868668 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:37:27.868680 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:37:27.868697 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:37:27.868710 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:37:27.868725 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:37:27.868737 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:37:27.868748 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:37:27.868784 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:37:27.868800 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:37:27.868812 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:37:27.868824 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:37:27.868836 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:37:27.868848 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:37:27.868860 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:37:27.868871 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 00:37:27.868884 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:37:27.868896 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:37:27.868912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:37:27.868924 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:37:27.868936 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 00:37:27.868949 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:37:27.868964 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 00:37:27.868979 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 00:37:27.868991 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:37:27.869004 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:37:27.869016 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:37:27.869028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:37:27.869040 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 00:37:27.869056 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:37:27.869068 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:37:27.869081 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:37:27.869129 systemd-journald[221]: Collecting audit messages is disabled.
Sep 9 00:37:27.869162 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:37:27.869175 systemd-journald[221]: Journal started
Sep 9 00:37:27.869203 systemd-journald[221]: Runtime Journal (/run/log/journal/b463e67dfb0f4a0f881f190255ad6c0e) is 6M, max 48.6M, 42.5M free.
Sep 9 00:37:27.859262 systemd-modules-load[222]: Inserted module 'overlay'
Sep 9 00:37:27.871795 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:37:27.886789 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:37:27.888606 systemd-modules-load[222]: Inserted module 'br_netfilter'
Sep 9 00:37:27.916323 kernel: Bridge firewalling registered
Sep 9 00:37:27.917776 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:37:27.918041 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:37:27.918455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:37:27.922435 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:37:27.923537 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:37:27.926889 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:37:27.946888 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 00:37:27.948808 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:37:27.949996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:37:27.953163 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:37:27.955251 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:37:27.966926 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:37:27.969091 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 00:37:27.995078 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a
Sep 9 00:37:28.006692 systemd-resolved[254]: Positive Trust Anchors:
Sep 9 00:37:28.006705 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:37:28.006743 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:37:28.009685 systemd-resolved[254]: Defaulting to hostname 'linux'.
Sep 9 00:37:28.010984 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:37:28.017162 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:37:28.185848 kernel: SCSI subsystem initialized
Sep 9 00:37:28.226399 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:37:28.257617 kernel: iscsi: registered transport (tcp)
Sep 9 00:37:28.293834 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:37:28.293928 kernel: QLogic iSCSI HBA Driver
Sep 9 00:37:28.332138 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 00:37:28.437957 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:37:28.440015 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 00:37:28.561079 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:37:28.563580 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:37:28.654821 kernel: raid6: avx2x4 gen() 20912 MB/s
Sep 9 00:37:28.672592 kernel: raid6: avx2x2 gen() 19952 MB/s
Sep 9 00:37:28.689812 kernel: raid6: avx2x1 gen() 16037 MB/s
Sep 9 00:37:28.689901 kernel: raid6: using algorithm avx2x4 gen() 20912 MB/s
Sep 9 00:37:28.708351 kernel: raid6: .... xor() 5414 MB/s, rmw enabled
Sep 9 00:37:28.708438 kernel: raid6: using avx2x2 recovery algorithm
Sep 9 00:37:28.799838 kernel: xor: automatically using best checksumming function avx
Sep 9 00:37:29.054839 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 00:37:29.068983 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:37:29.085262 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:37:29.147279 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 9 00:37:29.156908 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:37:29.167350 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 00:37:29.228983 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Sep 9 00:37:29.360905 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:37:29.425589 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:37:29.557057 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:37:29.566483 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 00:37:29.627886 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 9 00:37:29.632648 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 00:37:29.632867 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:37:29.641794 kernel: AES CTR mode by8 optimization enabled
Sep 9 00:37:29.654298 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 00:37:29.654377 kernel: GPT:9289727 != 19775487
Sep 9 00:37:29.654393 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 00:37:29.654416 kernel: GPT:9289727 != 19775487
Sep 9 00:37:29.655412 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 00:37:29.655470 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:37:29.680808 kernel: libata version 3.00 loaded.
Sep 9 00:37:29.688804 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 9 00:37:29.720364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
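The GPT complaints above are the usual signature of a disk image that was grown after creation: the primary header still points at a backup header at LBA 9289727, while the device now ends at LBA 19775487 (and the disk-uuid output further down suggests the boot flow rewrites the headers itself). A minimal Python sketch of the same check the kernel performed, reading the primary GPT header at LBA 1 (the device path is an assumption taken from the log; tools such as sgdisk or GNU Parted would do the actual repair):

# Reproduce the kernel's GPT sanity check that printed "GPT:9289727 != 19775487":
# compare the primary header's backup-LBA pointer with the real end of the device.
# Needs read access to the block device.
import os, struct

DEV = "/dev/vda"   # assumed device, as named in the log
SECTOR = 512

with open(DEV, "rb") as f:
    f.seek(1 * SECTOR)                 # primary GPT header lives at LBA 1
    hdr = f.read(92)
    assert hdr[:8] == b"EFI PART", "no GPT signature at LBA 1"
    backup_lba = struct.unpack_from("<Q", hdr, 32)[0]   # offset 32: backup header LBA
    last_lba = os.lseek(f.fileno(), 0, os.SEEK_END) // SECTOR - 1

if backup_lba != last_lba:
    print(f"GPT:{backup_lba} != {last_lba}")  # matches the kernel's complaint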
Sep 9 00:37:29.727317 kernel: ahci 0000:00:1f.2: version 3.0
Sep 9 00:37:29.728715 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 9 00:37:29.720597 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:37:29.741042 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 9 00:37:29.741329 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 9 00:37:29.741498 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 9 00:37:29.742153 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:37:29.747290 kernel: scsi host0: ahci
Sep 9 00:37:29.755593 kernel: scsi host1: ahci
Sep 9 00:37:29.755973 kernel: scsi host2: ahci
Sep 9 00:37:29.756473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:37:29.775917 kernel: scsi host3: ahci
Sep 9 00:37:29.776208 kernel: scsi host4: ahci
Sep 9 00:37:29.776464 kernel: scsi host5: ahci
Sep 9 00:37:29.776659 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 9 00:37:29.776674 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 9 00:37:29.776687 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 9 00:37:29.776700 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 9 00:37:29.776712 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 9 00:37:29.776730 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 9 00:37:29.777836 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:37:29.823265 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 00:37:29.858211 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 00:37:29.905942 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 00:37:29.907555 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 00:37:29.943179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:37:29.961354 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:37:29.967188 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 00:37:30.118282 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 9 00:37:30.118372 kernel: ata3.00: LPM support broken, forcing max_power
Sep 9 00:37:30.121084 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 9 00:37:30.121114 kernel: ata3.00: applying bridge limits
Sep 9 00:37:30.129869 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 9 00:37:30.129950 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 9 00:37:30.130953 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 9 00:37:30.136558 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 9 00:37:30.136610 kernel: ata3.00: LPM support broken, forcing max_power
Sep 9 00:37:30.136626 kernel: ata3.00: configured for UDMA/100
Sep 9 00:37:30.137823 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 9 00:37:30.155125 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 9 00:37:30.205011 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 9 00:37:30.206319 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 00:37:30.228931 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 9 00:37:30.473982 disk-uuid[633]: Primary Header is updated.
Sep 9 00:37:30.473982 disk-uuid[633]: Secondary Entries is updated.
Sep 9 00:37:30.473982 disk-uuid[633]: Secondary Header is updated.
Sep 9 00:37:30.478796 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:37:30.483811 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:37:30.594892 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:37:30.617361 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:37:30.617441 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:37:30.619735 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:37:30.624010 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 00:37:30.658703 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:37:31.542799 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:37:31.542894 disk-uuid[638]: The operation has completed successfully.
Sep 9 00:37:31.575861 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:37:31.576011 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 00:37:31.647997 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 00:37:31.676326 sh[663]: Success
Sep 9 00:37:31.694192 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:37:31.694224 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:37:31.695368 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 00:37:31.703885 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 9 00:37:31.735713 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 00:37:31.738163 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 00:37:31.750883 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
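The verity setup above ties together the verity.usr and verity.usrhash arguments from the kernel command line: the USR-A partition (identified by PARTUUID) is opened as the dm-verity device /dev/mapper/usr, with the usrhash value as the root hash of the Merkle tree. A rough Python sketch of that mapping, assuming (as Flatcar images do) that the hash tree lives in the same partition as the data; the exact hash offset is image-specific and omitted, so the printed command is illustrative rather than a working recipe:

# Sketch: derive the equivalent veritysetup invocation from /proc/cmdline.
# "veritysetup open <data_device> <name> <hash_device> <root_hash>" is the
# general form; offset options for the embedded hash tree are omitted here.
cmdline = open("/proc/cmdline").read().split()
args = dict(a.split("=", 1) for a in cmdline if "=" in a)

part = args["verity.usr"].removeprefix("PARTUUID=")
data_dev = f"/dev/disk/by-partuuid/{part}"
root_hash = args["verity.usrhash"]

print(f"veritysetup open {data_dev} usr {data_dev} {root_hash}")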
Sep 9 00:37:31.775778 kernel: BTRFS: device fsid eee400a1-88b9-480b-9c0c-54d171140f9a devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (675)
Sep 9 00:37:31.775810 kernel: BTRFS info (device dm-0): first mount of filesystem eee400a1-88b9-480b-9c0c-54d171140f9a
Sep 9 00:37:31.777793 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:37:31.782786 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 00:37:31.782814 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 00:37:31.783990 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 00:37:31.784560 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 00:37:31.785992 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 00:37:31.788214 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 00:37:31.792112 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 00:37:31.811829 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (704)
Sep 9 00:37:31.811887 kernel: BTRFS info (device vda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5
Sep 9 00:37:31.813891 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:37:31.816778 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 00:37:31.816803 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 00:37:31.821786 kernel: BTRFS info (device vda6): last unmount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5
Sep 9 00:37:31.822687 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 00:37:31.826537 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 00:37:31.959237 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:37:31.962569 ignition[749]: Ignition 2.21.0
Sep 9 00:37:31.962583 ignition[749]: Stage: fetch-offline
Sep 9 00:37:31.962618 ignition[749]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:37:31.962628 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:37:31.962717 ignition[749]: parsed url from cmdline: ""
Sep 9 00:37:31.962721 ignition[749]: no config URL provided
Sep 9 00:37:31.962726 ignition[749]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 00:37:31.962735 ignition[749]: no config at "/usr/lib/ignition/user.ign"
Sep 9 00:37:31.966830 systemd[1]: Starting systemd-networkd.service - Network Configuration...
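The fetch-offline messages above show Ignition's search order on QEMU: no config URL on the kernel command line, no /usr/lib/ignition/user.ign, and finally (in the entries that follow) a user config delivered through QEMU's fw_cfg firmware interface. For reference, the kind of minimal provisioning config it is looking for, generated here as a Python sketch (spec version and the SSH key are illustrative placeholders, not values from this log):

# Sketch: a minimal Ignition config of the kind the fetch-offline stage reads.
# Ignition 2.21.0 accepts spec 3.x configs; 3.4.0 and the key are assumptions.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {
                "name": "core",  # matches the "core" user provisioned later in this log
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"],  # placeholder
            }
        ]
    },
}
print(json.dumps(config, indent=2))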
Sep 9 00:37:31.962775 ignition[749]: op(1): [started] loading QEMU firmware config module
Sep 9 00:37:31.962781 ignition[749]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 00:37:31.974167 ignition[749]: op(1): [finished] loading QEMU firmware config module
Sep 9 00:37:32.013223 ignition[749]: parsing config with SHA512: e510c38bbeee69c56bee56ef9b91e1dc2d48da2ed2951c86d2facc0c5623266d2eb06bb74425da59f5c54bf7171336d1f5b6c3f85431f5bdce8ebef5d91f5906
Sep 9 00:37:32.016581 systemd-networkd[853]: lo: Link UP
Sep 9 00:37:32.016590 systemd-networkd[853]: lo: Gained carrier
Sep 9 00:37:32.023130 ignition[749]: fetch-offline: fetch-offline passed
Sep 9 00:37:32.018263 systemd-networkd[853]: Enumeration completed
Sep 9 00:37:32.023201 ignition[749]: Ignition finished successfully
Sep 9 00:37:32.018635 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:37:32.018639 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:37:32.019008 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:37:32.022572 unknown[749]: fetched base config from "system"
Sep 9 00:37:32.022587 unknown[749]: fetched user config from "qemu"
Sep 9 00:37:32.022926 systemd-networkd[853]: eth0: Link UP
Sep 9 00:37:32.022954 systemd[1]: Reached target network.target - Network.
Sep 9 00:37:32.023120 systemd-networkd[853]: eth0: Gained carrier
Sep 9 00:37:32.023132 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:37:32.027267 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:37:32.029054 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 00:37:32.030070 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 00:37:32.037820 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:37:32.090240 ignition[857]: Ignition 2.21.0
Sep 9 00:37:32.090256 ignition[857]: Stage: kargs
Sep 9 00:37:32.090497 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:37:32.090509 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:37:32.094070 ignition[857]: kargs: kargs passed
Sep 9 00:37:32.094190 ignition[857]: Ignition finished successfully
Sep 9 00:37:32.099072 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 00:37:32.101265 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 00:37:32.142525 ignition[866]: Ignition 2.21.0
Sep 9 00:37:32.142538 ignition[866]: Stage: disks
Sep 9 00:37:32.142668 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:37:32.142678 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:37:32.146911 ignition[866]: disks: disks passed
Sep 9 00:37:32.147008 ignition[866]: Ignition finished successfully
Sep 9 00:37:32.151200 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 00:37:32.152523 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 00:37:32.154248 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 00:37:32.155463 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:37:32.155529 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:37:32.157564 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:37:32.162343 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 00:37:32.188555 systemd-resolved[254]: Detected conflict on linux IN A 10.0.0.118
Sep 9 00:37:32.188569 systemd-resolved[254]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Sep 9 00:37:32.195122 systemd-fsck[876]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 00:37:32.637887 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 00:37:32.640686 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 00:37:32.768818 kernel: EXT4-fs (vda9): mounted filesystem 91c315eb-0fc3-4e95-bf9b-06acc06be6bc r/w with ordered data mode. Quota mode: none.
Sep 9 00:37:32.769663 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 00:37:32.770503 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:37:32.773345 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:37:32.777552 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 00:37:32.778750 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 00:37:32.778814 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 00:37:32.778840 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:37:32.794008 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 00:37:32.796938 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 00:37:32.801451 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884)
Sep 9 00:37:32.801474 kernel: BTRFS info (device vda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5
Sep 9 00:37:32.801494 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:37:32.803891 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 00:37:32.803913 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 00:37:32.805333 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:37:32.847068 initrd-setup-root[908]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 00:37:32.852538 initrd-setup-root[915]: cut: /sysroot/etc/group: No such file or directory
Sep 9 00:37:32.857664 initrd-setup-root[922]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 00:37:32.863292 initrd-setup-root[929]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 00:37:32.977117 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 00:37:32.979486 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 00:37:32.981454 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 00:37:33.002776 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 00:37:33.004033 kernel: BTRFS info (device vda6): last unmount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5
Sep 9 00:37:33.018734 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 00:37:33.040979 ignition[998]: INFO : Ignition 2.21.0 Sep 9 00:37:33.040979 ignition[998]: INFO : Stage: mount Sep 9 00:37:33.042902 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:37:33.042902 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:37:33.045135 ignition[998]: INFO : mount: mount passed Sep 9 00:37:33.045135 ignition[998]: INFO : Ignition finished successfully Sep 9 00:37:33.046954 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:37:33.050338 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:37:33.082175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:37:33.109687 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1010) Sep 9 00:37:33.109736 kernel: BTRFS info (device vda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5 Sep 9 00:37:33.109751 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:37:33.115329 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:37:33.115418 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:37:33.118578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:37:33.171987 ignition[1027]: INFO : Ignition 2.21.0 Sep 9 00:37:33.171987 ignition[1027]: INFO : Stage: files Sep 9 00:37:33.173829 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:37:33.173829 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:37:33.177897 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:37:33.179310 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:37:33.179310 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:37:33.182371 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:37:33.182371 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:37:33.185482 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:37:33.185482 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:37:33.185482 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 9 00:37:33.182900 unknown[1027]: wrote ssh authorized keys file for user: core Sep 9 00:37:33.226930 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:37:33.347837 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:37:33.347837 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:37:33.352209 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:37:33.352209 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:37:33.352209 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Sep 9 00:37:33.352209 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:37:33.352209 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:37:33.352209 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:37:33.352209 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:37:33.366973 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:37:33.369441 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:37:33.369441 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:37:33.374439 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:37:33.374439 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:37:33.374439 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 9 00:37:33.610955 systemd-networkd[853]: eth0: Gained IPv6LL Sep 9 00:37:33.861891 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 00:37:34.719193 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:37:34.719193 ignition[1027]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 00:37:34.723831 ignition[1027]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:37:34.966020 ignition[1027]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:37:34.966020 ignition[1027]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 00:37:34.966020 ignition[1027]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 9 00:37:34.970445 ignition[1027]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:37:34.970445 ignition[1027]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:37:34.970445 ignition[1027]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 9 00:37:34.970445 ignition[1027]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:37:34.992778 ignition[1027]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:37:34.997269 ignition[1027]: 
INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:37:35.058718 ignition[1027]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:37:35.058718 ignition[1027]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:37:35.058718 ignition[1027]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:37:35.058718 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:37:35.058718 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:37:35.058718 ignition[1027]: INFO : files: files passed Sep 9 00:37:35.058718 ignition[1027]: INFO : Ignition finished successfully Sep 9 00:37:35.060449 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:37:35.064982 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:37:35.071330 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:37:35.080472 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:37:35.080602 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:37:35.088604 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 00:37:35.093690 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:37:35.093690 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:37:35.097802 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:37:35.101779 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:37:35.103458 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:37:35.107229 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:37:35.174967 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:37:35.175131 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:37:35.176446 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:37:35.178499 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:37:35.182254 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:37:35.183994 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:37:35.224030 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:37:35.225682 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:37:35.259562 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:37:35.260981 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:37:35.263507 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:37:35.266594 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
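Ops (f)/(10) and (11) above apply systemd presets by hand: disabling a unit means removing its enablement symlinks from the *.wants/ directories under the sysroot, and enabling means creating one. A simplified sketch of the disable side, assuming the usual systemd layout:

```python
import glob
import os

def disable(unit: str, etc: str = "/sysroot/etc/systemd/system") -> None:
    """Remove enablement symlinks, as op(f)/op(10) does for coreos-metadata.service."""
    for link in glob.glob(f"{etc}/*.wants/{unit}"):
        os.unlink(link)

disable("coreos-metadata.service")
```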
Sep 9 00:37:35.266843 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:37:35.270290 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:37:35.270489 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:37:35.282030 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:37:35.283635 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:37:35.284189 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:37:35.284577 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:37:35.285231 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:37:35.285607 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:37:35.286416 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:37:35.287075 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:37:35.287460 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:37:35.287993 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:37:35.288137 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:37:35.307134 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:37:35.307316 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:37:35.307670 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:37:35.313491 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:37:35.316503 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:37:35.316653 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:37:35.317981 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:37:35.318091 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:37:35.321218 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:37:35.322239 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:37:35.327958 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:37:35.331590 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:37:35.332817 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:37:35.333262 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:37:35.333403 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:37:35.336707 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:37:35.336807 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:37:35.338561 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:37:35.338678 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:37:35.340689 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:37:35.340810 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:37:35.344899 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:37:35.345125 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:37:35.345233 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 9 00:37:35.348920 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:37:35.355458 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:37:35.356612 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:37:35.357962 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:37:35.358106 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:37:35.367117 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:37:35.367276 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:37:35.375732 ignition[1082]: INFO : Ignition 2.21.0 Sep 9 00:37:35.375732 ignition[1082]: INFO : Stage: umount Sep 9 00:37:35.377560 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:37:35.377560 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:37:35.379820 ignition[1082]: INFO : umount: umount passed Sep 9 00:37:35.379820 ignition[1082]: INFO : Ignition finished successfully Sep 9 00:37:35.381044 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:37:35.381227 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:37:35.383473 systemd[1]: Stopped target network.target - Network. Sep 9 00:37:35.384481 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:37:35.384582 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:37:35.387240 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:37:35.387324 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:37:35.388112 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:37:35.388165 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:37:35.388445 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:37:35.388489 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:37:35.388908 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:37:35.389381 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:37:35.459242 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:37:35.459406 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:37:35.463976 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 00:37:35.464642 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 00:37:35.465871 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:37:35.465925 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:37:35.467305 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:37:35.469631 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:37:35.469689 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:37:35.470265 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:37:35.478410 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:37:35.480985 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:37:35.486070 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Sep 9 00:37:35.487919 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:37:35.488042 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:37:35.490912 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:37:35.490976 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:37:35.491978 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:37:35.492038 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:37:35.497666 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:37:35.497785 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:37:35.498143 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:37:35.506046 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:37:35.509211 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:37:35.509398 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:37:35.512755 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:37:35.512914 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:37:35.515959 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:37:35.516035 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:37:35.519037 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:37:35.519138 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:37:35.522096 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:37:35.522183 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:37:35.525005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:37:35.525075 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:37:35.529748 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:37:35.529862 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 00:37:35.529934 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:37:35.534147 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:37:35.534253 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:37:35.538863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:37:35.538927 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:37:35.543573 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 00:37:35.543639 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 00:37:35.543689 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:37:35.563687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:37:35.563841 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:37:35.691879 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:37:35.697082 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 9 00:37:35.697229 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:37:35.699395 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:37:35.701184 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:37:35.701246 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:37:35.705653 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:37:35.739242 systemd[1]: Switching root. Sep 9 00:37:35.783179 systemd-journald[221]: Journal stopped Sep 9 00:37:37.555684 systemd-journald[221]: Received SIGTERM from PID 1 (systemd). Sep 9 00:37:37.555750 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:37:37.555845 kernel: SELinux: policy capability open_perms=1 Sep 9 00:37:37.555859 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:37:37.555871 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:37:37.555882 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:37:37.555895 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:37:37.555910 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:37:37.555922 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:37:37.555933 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 00:37:37.555945 kernel: audit: type=1403 audit(1757378256.370:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:37:37.555958 systemd[1]: Successfully loaded SELinux policy in 66.685ms. Sep 9 00:37:37.555983 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.453ms. Sep 9 00:37:37.556000 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:37:37.556016 systemd[1]: Detected virtualization kvm. Sep 9 00:37:37.556032 systemd[1]: Detected architecture x86-64. Sep 9 00:37:37.556048 systemd[1]: Detected first boot. Sep 9 00:37:37.556066 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:37:37.556078 zram_generator::config[1128]: No configuration found. Sep 9 00:37:37.556096 kernel: Guest personality initialized and is inactive Sep 9 00:37:37.556108 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 00:37:37.556119 kernel: Initialized host personality Sep 9 00:37:37.556132 kernel: NET: Registered PF_VSOCK protocol family Sep 9 00:37:37.556144 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:37:37.556162 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 00:37:37.556176 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:37:37.556188 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:37:37.556201 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:37:37.556213 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:37:37.556226 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:37:37.556238 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:37:37.556250 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
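After the switch-root above, the first act of the real system is loading the SELinux policy (66.685ms in this boot) before other services start. A small sketch that reads the resulting state back through the selinuxfs interface, assuming it is mounted at the usual /sys/fs/selinux:

```python
def selinux_status(base: str = "/sys/fs/selinux") -> dict:
    def read(name: str) -> str:
        with open(f"{base}/{name}") as f:
            return f.read().strip()
    # 'enforce' is 0/1; 'policyvers' is the highest policy version supported.
    return {"enforce": read("enforce"), "policy_version": read("policyvers")}

print(selinux_status())
```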
Sep 9 00:37:37.556263 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:37:37.556278 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:37:37.556290 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:37:37.556311 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:37:37.556323 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:37:37.556336 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:37:37.556348 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:37:37.556361 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:37:37.556373 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:37:37.556389 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:37:37.556401 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:37:37.556415 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:37:37.556427 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:37:37.556440 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:37:37.556451 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:37:37.556464 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:37:37.556477 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:37:37.556491 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:37:37.556504 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:37:37.556516 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:37:37.556528 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:37:37.556541 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:37:37.556553 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:37:37.556565 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 00:37:37.556577 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:37:37.556589 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:37:37.556601 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:37:37.556615 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:37:37.556628 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:37:37.556645 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:37:37.556658 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:37:37.556670 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:37:37.556683 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:37:37.556696 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Sep 9 00:37:37.556708 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:37:37.556722 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:37:37.556735 systemd[1]: Reached target machines.target - Containers. Sep 9 00:37:37.556748 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:37:37.556773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:37:37.556786 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:37:37.556798 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:37:37.556810 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:37:37.556822 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:37:37.556834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:37:37.556853 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:37:37.556865 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:37:37.556877 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:37:37.556891 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:37:37.556904 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:37:37.556916 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:37:37.556928 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:37:37.556941 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:37:37.556958 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:37:37.556972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:37:37.556984 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:37:37.556996 kernel: fuse: init (API version 7.41) Sep 9 00:37:37.557008 kernel: loop: module loaded Sep 9 00:37:37.557064 systemd-journald[1192]: Collecting audit messages is disabled. Sep 9 00:37:37.557090 systemd-journald[1192]: Journal started Sep 9 00:37:37.557115 systemd-journald[1192]: Runtime Journal (/run/log/journal/b463e67dfb0f4a0f881f190255ad6c0e) is 6M, max 48.6M, 42.5M free. Sep 9 00:37:37.166560 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:37:37.190931 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:37:37.191436 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:37:37.561255 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:37:37.563982 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 00:37:37.578795 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:37:37.580874 systemd[1]: verity-setup.service: Deactivated successfully. 
Sep 9 00:37:37.580937 systemd[1]: Stopped verity-setup.service. Sep 9 00:37:37.590081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:37:37.590130 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:37:37.594608 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:37:37.595854 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:37:37.598800 kernel: ACPI: bus type drm_connector registered Sep 9 00:37:37.597969 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:37:37.599973 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:37:37.601441 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:37:37.612269 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:37:37.613590 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:37:37.615201 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:37:37.615436 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:37:37.616994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:37:37.617224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:37:37.666989 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:37:37.667252 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:37:37.668775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:37:37.669004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:37:37.670509 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:37:37.670749 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:37:37.672342 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:37:37.672580 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:37:37.674161 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:37:37.675693 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:37:37.677355 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:37:37.679165 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 00:37:37.686378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:37:37.698196 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:37:37.700827 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:37:37.703222 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:37:37.704357 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:37:37.704387 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:37:37.707080 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 00:37:37.720947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Sep 9 00:37:37.725674 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:37:37.727256 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:37:37.729354 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:37:37.730516 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:37:37.731671 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:37:37.732966 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:37:37.736873 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:37:37.738995 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:37:37.742431 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:37:37.744143 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:37:37.747083 systemd-journald[1192]: Time spent on flushing to /var/log/journal/b463e67dfb0f4a0f881f190255ad6c0e is 17.162ms for 986 entries. Sep 9 00:37:37.747083 systemd-journald[1192]: System Journal (/var/log/journal/b463e67dfb0f4a0f881f190255ad6c0e) is 8M, max 195.6M, 187.6M free. Sep 9 00:37:38.080600 systemd-journald[1192]: Received client request to flush runtime journal. Sep 9 00:37:38.080644 kernel: loop0: detected capacity change from 0 to 229808 Sep 9 00:37:38.080658 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:37:38.080672 kernel: loop1: detected capacity change from 0 to 128016 Sep 9 00:37:38.080685 kernel: loop2: detected capacity change from 0 to 111000 Sep 9 00:37:38.080703 kernel: loop3: detected capacity change from 0 to 229808 Sep 9 00:37:38.080716 kernel: loop4: detected capacity change from 0 to 128016 Sep 9 00:37:38.080729 kernel: loop5: detected capacity change from 0 to 111000 Sep 9 00:37:37.768626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:37:38.051935 (sd-merge)[1257]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:37:38.052582 (sd-merge)[1257]: Merged extensions into '/usr'. Sep 9 00:37:38.057429 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:37:38.057441 systemd[1]: Reloading... Sep 9 00:37:38.138815 zram_generator::config[1286]: No configuration found. Sep 9 00:37:38.261721 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:37:38.359897 systemd[1]: Reloading finished in 301 ms. Sep 9 00:37:38.390081 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:37:38.391882 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:37:38.393390 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:37:38.395133 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:37:38.396914 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:37:38.405557 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
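The (sd-merge) lines above are systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr, which is why systemd reloads immediately afterwards and the loop0–loop5 devices appear. A rough sketch of the discovery step only, scanning the documented search paths with no precedence or release-file matching:

```python
import pathlib

def candidate_extensions() -> list[str]:
    search = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]
    names: set[str] = set()
    for d in search:
        p = pathlib.Path(d)
        if p.is_dir():
            for entry in p.iterdir():
                if entry.suffix == ".raw" or entry.is_dir():
                    names.add(entry.stem)
    return sorted(names)

print(candidate_extensions())  # e.g. includes 'kubernetes' via the symlink written earlier
```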
Sep 9 00:37:38.420431 systemd[1]: Starting ensure-sysext.service... Sep 9 00:37:38.422798 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 00:37:38.426985 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:37:38.445447 systemd[1]: Reload requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:37:38.445464 systemd[1]: Reloading... Sep 9 00:37:38.534906 zram_generator::config[1357]: No configuration found. Sep 9 00:37:38.728080 systemd[1]: Reloading finished in 282 ms. Sep 9 00:37:38.774966 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:37:38.783112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:37:38.785650 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:37:38.788986 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:37:38.789188 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:37:38.800125 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:37:38.804415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:37:38.807951 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:37:38.809866 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:37:38.810069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:37:38.810234 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:37:38.814714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:37:38.824162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:37:38.828078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:37:38.828332 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:37:38.830091 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:37:38.830316 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:37:38.834812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:37:38.834990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:37:38.836522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:37:38.887147 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:37:38.890287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:37:38.891616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 9 00:37:38.891788 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:37:38.891946 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:37:38.893376 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:37:38.909042 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:37:38.911608 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Sep 9 00:37:38.911631 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Sep 9 00:37:38.911904 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 00:37:38.911937 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 00:37:38.912280 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:37:38.912349 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:37:38.912498 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:37:38.912627 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:37:38.913462 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:37:38.913711 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Sep 9 00:37:38.913798 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Sep 9 00:37:38.914907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:37:38.915199 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:37:38.917203 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:37:38.918371 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:37:38.918379 systemd-tmpfiles[1396]: Skipping /boot Sep 9 00:37:38.926924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:37:38.927310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:37:38.929206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:37:38.929274 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:37:38.929284 systemd-tmpfiles[1396]: Skipping /boot Sep 9 00:37:38.932140 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:37:38.935219 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:37:38.939161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:37:38.940841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:37:38.941001 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
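The 'Duplicate line for path "…", ignoring' warnings above come from systemd-tmpfiles finding the same path claimed by more than one tmpfiles.d fragment; the first definition wins and later ones are dropped. A sketch of the same duplicate check, simplified to compare only the path column across one directory:

```python
import glob

seen: dict[str, str] = {}
for conf in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
    with open(conf) as f:
        for lineno, line in enumerate(f, 1):
            fields = line.split()
            if len(fields) < 2 or line.lstrip().startswith("#"):
                continue
            path = fields[1]
            if path in seen:
                print(f'{conf}:{lineno}: duplicate line for path "{path}", '
                      f"first seen at {seen[path]}")
            else:
                seen[path] = f"{conf}:{lineno}"
```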
Sep 9 00:37:38.941265 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:37:38.946295 systemd[1]: Finished ensure-sysext.service. Sep 9 00:37:38.971182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:37:38.971447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:37:38.973040 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:37:38.973272 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:37:38.974911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:37:38.975133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:37:38.977043 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:37:38.977283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:37:38.981540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:37:38.981644 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:37:39.372042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:37:39.375197 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:37:39.377883 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:37:39.380460 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:37:39.395959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:37:39.401570 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:37:39.406229 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:37:39.420205 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:37:39.422161 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:37:39.443007 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:37:39.450056 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:37:39.503972 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:37:39.564841 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:37:39.596941 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:37:39.600919 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:37:39.602938 augenrules[1458]: No rules Sep 9 00:37:39.604698 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:37:39.608195 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:37:39.608578 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:37:39.644160 systemd-udevd[1457]: Using default interface naming scheme 'v255'. Sep 9 00:37:39.657916 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Sep 9 00:37:39.665975 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:37:39.667738 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:37:39.677558 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:37:39.683975 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:37:39.688164 systemd-resolved[1425]: Positive Trust Anchors: Sep 9 00:37:39.688184 systemd-resolved[1425]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:37:39.688227 systemd-resolved[1425]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:37:39.695365 systemd-resolved[1425]: Defaulting to hostname 'linux'. Sep 9 00:37:39.699536 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:37:39.701216 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:37:39.702651 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:37:39.703987 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:37:39.705806 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:37:39.707265 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 00:37:39.709715 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:37:39.711180 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:37:39.712829 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:37:39.715881 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:37:39.715926 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:37:39.717012 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:37:39.768857 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:37:39.772246 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:37:39.776983 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 00:37:39.792138 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 00:37:39.793447 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 00:37:39.802823 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:37:39.804405 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 00:37:39.807494 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:37:39.809521 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 00:37:39.811122 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
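The "Positive Trust Anchors" entry above is systemd-resolved installing its built-in DNSSEC root trust anchor (the root-zone KSK DS record) along with negative anchors for private and reverse zones. Splitting that DS record into its RDATA fields for illustration:

```python
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
# key_tag 20326: the root KSK; algorithm 8: RSA/SHA-256; digest type 2: SHA-256.
print(f"owner={owner} key_tag={key_tag} alg={algorithm} digest_type={digest_type}")
print(f"digest={digest}")
```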
Sep 9 00:37:39.818720 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 00:37:39.820200 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:37:39.821283 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:37:39.822305 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:37:39.822330 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:37:39.823917 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:37:39.827045 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:37:39.831971 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:37:39.835176 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:37:39.837824 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:37:39.840553 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 00:37:39.852031 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:37:39.856000 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:37:39.857198 jq[1506]: false Sep 9 00:37:39.862662 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:37:39.863721 oslogin_cache_refresh[1508]: Refreshing passwd entry cache Sep 9 00:37:39.865213 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Refreshing passwd entry cache Sep 9 00:37:39.866056 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:37:39.867902 oslogin_cache_refresh[1508]: Failure getting users, quitting Sep 9 00:37:39.868902 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Failure getting users, quitting Sep 9 00:37:39.868902 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:37:39.868902 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Refreshing group entry cache Sep 9 00:37:39.868902 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Failure getting groups, quitting Sep 9 00:37:39.868902 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:37:39.867915 oslogin_cache_refresh[1508]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:37:39.867958 oslogin_cache_refresh[1508]: Refreshing group entry cache Sep 9 00:37:39.868403 oslogin_cache_refresh[1508]: Failure getting groups, quitting Sep 9 00:37:39.868412 oslogin_cache_refresh[1508]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:37:39.875074 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:37:39.900963 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:37:39.901587 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:37:39.902804 systemd[1]: Starting update-engine.service - Update Engine... 
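The google_oslogin_nss_cache lines above show the refresh failing (this QEMU VM has no OS Login metadata source) and the tool settling for an empty cache while cleaning up its .bak temp file. A guess at the write-temp-then-swap pattern those messages imply; the logic is an assumption, not the tool's actual code:

```python
import os

def refresh_cache(entries: list[str], path: str = "/etc/oslogin_passwd.cache") -> None:
    tmp = path + ".bak"
    with open(tmp, "w") as f:
        f.writelines(entries)
    if entries:
        os.replace(tmp, path)      # atomic swap on a successful fetch
    else:
        open(path, "w").close()    # "Produced empty passwd cache file"
        os.remove(tmp)             # "... removing /etc/oslogin_passwd.cache.bak"

refresh_cache([])                  # mirrors the failure path seen in the log
```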
Sep 9 00:37:39.903441 systemd-networkd[1476]: lo: Link UP Sep 9 00:37:39.903446 systemd-networkd[1476]: lo: Gained carrier Sep 9 00:37:39.905896 systemd-networkd[1476]: Enumeration completed Sep 9 00:37:39.910553 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:37:39.910573 systemd-networkd[1476]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:37:39.913154 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:37:39.913189 systemd-networkd[1476]: eth0: Link UP Sep 9 00:37:39.913909 systemd-networkd[1476]: eth0: Gained carrier Sep 9 00:37:39.913925 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:37:39.930822 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:37:39.933159 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:37:39.935153 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:37:39.940235 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:37:39.984202 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:37:39.984653 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 00:37:39.985110 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 00:37:39.987027 systemd-networkd[1476]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:37:39.987068 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:37:39.987459 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:37:39.987887 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. Sep 9 00:37:39.990302 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:37:39.990606 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:37:39.990699 systemd-timesyncd[1426]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:37:39.990835 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:37:39.990751 systemd-timesyncd[1426]: Initial clock synchronization to Tue 2025-09-09 00:37:39.844051 UTC. Sep 9 00:37:39.995731 jq[1524]: true Sep 9 00:37:40.010787 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 00:37:40.016790 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:37:40.018721 systemd[1]: Reached target network.target - Network. Sep 9 00:37:40.023257 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:37:40.028918 jq[1538]: true Sep 9 00:37:40.030383 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 00:37:40.089214 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:37:40.096714 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:37:40.112057 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
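The systemd-timesyncd entries above record a single NTP exchange with the gateway (10.0.0.1:123) followed by an initial clock step. The one-packet SNTP query behind such a "Contacted time server" line, sketched directly on sockets:

```python
import socket
import struct
import time

NTP_TO_UNIX = 2208988800  # seconds between the 1900 and 1970 epochs

def sntp_time(server: str = "10.0.0.1", timeout: float = 2.0) -> float:
    packet = b"\x23" + 47 * b"\x00"  # LI=0, VN=4, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    seconds = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, integer part
    return seconds - NTP_TO_UNIX

print(time.ctime(sntp_time()))
```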
Sep 9 00:37:40.183939 extend-filesystems[1507]: Found /dev/vda6 Sep 9 00:37:40.190531 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:37:40.196424 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 00:37:40.196732 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 00:37:40.216677 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 00:37:40.226426 update_engine[1523]: I20250909 00:37:40.226346 1523 main.cc:92] Flatcar Update Engine starting Sep 9 00:37:40.231449 systemd-logind[1516]: New seat seat0. Sep 9 00:37:40.233278 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:37:40.250676 dbus-daemon[1503]: [system] SELinux support is enabled Sep 9 00:37:40.254005 update_engine[1523]: I20250909 00:37:40.253960 1523 update_check_scheduler.cc:74] Next update check in 3m58s Sep 9 00:37:40.256986 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:37:40.257001 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:37:40.258246 systemd-logind[1516]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 00:37:40.261409 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:37:40.267977 tar[1531]: linux-amd64/LICENSE Sep 9 00:37:40.267977 tar[1531]: linux-amd64/helm Sep 9 00:37:40.274154 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:37:40.274452 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 00:37:40.277523 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:37:40.277819 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:37:40.282831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:37:40.304538 extend-filesystems[1507]: Found /dev/vda9 Sep 9 00:37:40.341100 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:37:40.341477 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:37:40.347033 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:37:40.374871 kernel: kvm_amd: TSC scaling supported Sep 9 00:37:40.374923 kernel: kvm_amd: Nested Virtualization enabled Sep 9 00:37:40.374937 kernel: kvm_amd: Nested Paging enabled Sep 9 00:37:40.374970 kernel: kvm_amd: LBR virtualization supported Sep 9 00:37:40.377210 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 00:37:40.377249 kernel: kvm_amd: Virtual GIF supported Sep 9 00:37:40.399198 extend-filesystems[1507]: Checking size of /dev/vda9 Sep 9 00:37:40.502231 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:37:40.532904 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:37:40.536503 systemd[1]: Starting issuegen.service - Generate /run/issue... 
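update_engine logs "Next update check in 3m58s", i.e. the check interval is randomized rather than fixed. A hypothetical sketch of such jittered polling; the base interval, fuzz fraction, and function name are illustrative assumptions, not the actual update_engine scheduler:

```python
import random

def next_check_seconds(base=45 * 60, fuzz=0.25, rng=random.Random()):
    """Hypothetical jittered poll: base interval +/- fuzz fraction.
    (Illustrative only; the real update_engine scheduling logic differs.)"""
    low, high = base * (1 - fuzz), base * (1 + fuzz)
    return rng.uniform(low, high)

s = next_check_seconds()
print(f"Next update check in {int(s // 60)}m{int(s % 60)}s")
```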
Sep 9 00:37:40.544136 locksmithd[1589]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:37:40.547879 kernel: EDAC MC: Ver: 3.0.0 Sep 9 00:37:40.559377 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:37:40.559651 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:37:40.561851 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:37:40.571885 extend-filesystems[1507]: Resized partition /dev/vda9 Sep 9 00:37:40.582783 tar[1531]: linux-amd64/README.md Sep 9 00:37:40.601103 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:37:40.625413 extend-filesystems[1626]: resize2fs 1.47.2 (1-Jan-2025) Sep 9 00:37:40.635241 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:37:40.662794 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:37:40.677291 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:37:40.693541 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 00:37:40.693869 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:37:40.759222 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:37:40.952579 containerd[1580]: time="2025-09-09T00:37:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 00:37:40.953341 containerd[1580]: time="2025-09-09T00:37:40.953303308Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 00:37:40.961911 containerd[1580]: time="2025-09-09T00:37:40.961858717Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.109µs" Sep 9 00:37:40.962052 containerd[1580]: time="2025-09-09T00:37:40.962034455Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 00:37:40.962121 containerd[1580]: time="2025-09-09T00:37:40.962106773Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 00:37:40.962511 containerd[1580]: time="2025-09-09T00:37:40.962461674Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 00:37:40.962511 containerd[1580]: time="2025-09-09T00:37:40.962495019Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 00:37:40.962606 containerd[1580]: time="2025-09-09T00:37:40.962524998Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 00:37:40.962606 containerd[1580]: time="2025-09-09T00:37:40.962593307Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 00:37:40.962606 containerd[1580]: time="2025-09-09T00:37:40.962604415Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 00:37:40.962892 containerd[1580]: time="2025-09-09T00:37:40.962856442Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs 
type=io.containerd.snapshotter.v1 Sep 9 00:37:40.962892 containerd[1580]: time="2025-09-09T00:37:40.962876276Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 00:37:40.962892 containerd[1580]: time="2025-09-09T00:37:40.962887821Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 00:37:40.962968 containerd[1580]: time="2025-09-09T00:37:40.962896607Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 00:37:40.963006 containerd[1580]: time="2025-09-09T00:37:40.962986257Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 00:37:40.963257 containerd[1580]: time="2025-09-09T00:37:40.963224833Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 00:37:40.963281 containerd[1580]: time="2025-09-09T00:37:40.963260700Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 00:37:40.963281 containerd[1580]: time="2025-09-09T00:37:40.963271480Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 00:37:40.963382 containerd[1580]: time="2025-09-09T00:37:40.963353914Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 00:37:40.963707 containerd[1580]: time="2025-09-09T00:37:40.963636447Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 00:37:40.963914 containerd[1580]: time="2025-09-09T00:37:40.963877038Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:37:41.149812 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:37:41.595350 extend-filesystems[1626]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:37:41.595350 extend-filesystems[1626]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:37:41.595350 extend-filesystems[1626]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:37:41.599204 extend-filesystems[1507]: Resized filesystem in /dev/vda9 Sep 9 00:37:41.601759 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:37:41.602114 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
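The resize2fs output above grows /dev/vda9 from 553472 to 1864699 blocks at 4 KiB each. Quick arithmetic on what that means in bytes:

```python
BLOCK = 4096  # "(4k) blocks", per the resize2fs output above

old_blocks, new_blocks = 553_472, 1_864_699
old_bytes = old_blocks * BLOCK
new_bytes = new_blocks * BLOCK

gib = 1024 ** 3
print(f"before: {old_bytes / gib:.2f} GiB")                 # ~2.11 GiB
print(f"after:  {new_bytes / gib:.2f} GiB")                 # ~7.11 GiB
print(f"growth: {(new_bytes - old_bytes) / gib:.2f} GiB")   # ~5.00 GiB
```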
Sep 9 00:37:41.821096 containerd[1580]: time="2025-09-09T00:37:41.821000858Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 00:37:41.821224 containerd[1580]: time="2025-09-09T00:37:41.821113674Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 00:37:41.821224 containerd[1580]: time="2025-09-09T00:37:41.821135148Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 00:37:41.821224 containerd[1580]: time="2025-09-09T00:37:41.821150286Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 00:37:41.821224 containerd[1580]: time="2025-09-09T00:37:41.821168583Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 00:37:41.821224 containerd[1580]: time="2025-09-09T00:37:41.821182180Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 00:37:41.821224 containerd[1580]: time="2025-09-09T00:37:41.821204797Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 00:37:41.821224 containerd[1580]: time="2025-09-09T00:37:41.821219080Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 00:37:41.821369 containerd[1580]: time="2025-09-09T00:37:41.821233662Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 00:37:41.821369 containerd[1580]: time="2025-09-09T00:37:41.821246535Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 00:37:41.821369 containerd[1580]: time="2025-09-09T00:37:41.821258156Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 00:37:41.821369 containerd[1580]: time="2025-09-09T00:37:41.821274187Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 00:37:41.821505 containerd[1580]: time="2025-09-09T00:37:41.821479338Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 00:37:41.821531 bash[1579]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821523519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821543831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821558999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821572051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821584387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821597876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821610778Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821623920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821638381Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821651801Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821741206Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821806523Z" level=info msg="Start snapshots syncer" Sep 9 00:37:41.821847 containerd[1580]: time="2025-09-09T00:37:41.821843275Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 00:37:41.822182 containerd[1580]: time="2025-09-09T00:37:41.822125305Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 00:37:41.822279 containerd[1580]: time="2025-09-09T00:37:41.822189679Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 00:37:41.822300 containerd[1580]: time="2025-09-09T00:37:41.822275408Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 00:37:41.822424 containerd[1580]: time="2025-09-09T00:37:41.822399190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 00:37:41.822462 containerd[1580]: 
time="2025-09-09T00:37:41.822424698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 00:37:41.822462 containerd[1580]: time="2025-09-09T00:37:41.822436339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 00:37:41.822462 containerd[1580]: time="2025-09-09T00:37:41.822448090Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 00:37:41.822536 containerd[1580]: time="2025-09-09T00:37:41.822465532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 00:37:41.822536 containerd[1580]: time="2025-09-09T00:37:41.822495549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 00:37:41.822536 containerd[1580]: time="2025-09-09T00:37:41.822510179Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 00:37:41.822536 containerd[1580]: time="2025-09-09T00:37:41.822532101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 00:37:41.822730 containerd[1580]: time="2025-09-09T00:37:41.822544010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 00:37:41.822730 containerd[1580]: time="2025-09-09T00:37:41.822557270Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 00:37:41.822730 containerd[1580]: time="2025-09-09T00:37:41.822598601Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:37:41.822730 containerd[1580]: time="2025-09-09T00:37:41.822616579Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:37:41.822730 containerd[1580]: time="2025-09-09T00:37:41.822628191Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:37:41.822730 containerd[1580]: time="2025-09-09T00:37:41.822640050Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:37:41.822730 containerd[1580]: time="2025-09-09T00:37:41.822650648Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 00:37:41.822730 containerd[1580]: time="2025-09-09T00:37:41.822691988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 00:37:41.822730 containerd[1580]: time="2025-09-09T00:37:41.822705209Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 00:37:41.822730 containerd[1580]: time="2025-09-09T00:37:41.822726365Z" level=info msg="runtime interface created" Sep 9 00:37:41.822730 containerd[1580]: time="2025-09-09T00:37:41.822733239Z" level=info msg="created NRI interface" Sep 9 00:37:41.822979 containerd[1580]: time="2025-09-09T00:37:41.822744741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 00:37:41.822979 containerd[1580]: time="2025-09-09T00:37:41.822794316Z" level=info msg="Connect containerd service" Sep 9 00:37:41.822979 containerd[1580]: time="2025-09-09T00:37:41.822822594Z" level=info msg="using experimental NRI 
integration - disable nri plugin to prevent this" Sep 9 00:37:41.823121 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:37:41.824412 containerd[1580]: time="2025-09-09T00:37:41.823684925Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:37:41.826582 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 00:37:41.866976 systemd-networkd[1476]: eth0: Gained IPv6LL Sep 9 00:37:41.870021 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:37:41.872117 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:37:41.875107 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:37:41.878919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:37:41.882242 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:37:41.909490 containerd[1580]: time="2025-09-09T00:37:41.909104324Z" level=info msg="Start subscribing containerd event" Sep 9 00:37:41.909490 containerd[1580]: time="2025-09-09T00:37:41.909172443Z" level=info msg="Start recovering state" Sep 9 00:37:41.909490 containerd[1580]: time="2025-09-09T00:37:41.909348034Z" level=info msg="Start event monitor" Sep 9 00:37:41.909490 containerd[1580]: time="2025-09-09T00:37:41.909378279Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:37:41.909490 containerd[1580]: time="2025-09-09T00:37:41.909493449Z" level=info msg="Start streaming server" Sep 9 00:37:41.909682 containerd[1580]: time="2025-09-09T00:37:41.909506332Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 00:37:41.909682 containerd[1580]: time="2025-09-09T00:37:41.909514467Z" level=info msg="runtime interface starting up..." Sep 9 00:37:41.909682 containerd[1580]: time="2025-09-09T00:37:41.909520347Z" level=info msg="starting plugins..." Sep 9 00:37:41.910003 containerd[1580]: time="2025-09-09T00:37:41.909941635Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 00:37:41.910365 containerd[1580]: time="2025-09-09T00:37:41.910316794Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:37:41.910583 containerd[1580]: time="2025-09-09T00:37:41.910546937Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:37:41.911658 containerd[1580]: time="2025-09-09T00:37:41.911617121Z" level=info msg="containerd successfully booted in 0.959570s" Sep 9 00:37:41.912452 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:37:41.918268 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:37:41.926227 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:37:41.926567 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:37:41.928419 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:37:42.481160 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:37:42.483707 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:38146.service - OpenSSH per-connection server daemon (10.0.0.1:38146). 
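The containerd CRI plugin fails its initial CNI load with "no network config found in /etc/cni/net.d"; the confDir and binDir it checked are visible in the config dump above. A rough approximation of that discovery step, useful when diagnosing the same error by hand (the exact file patterns the plugin accepts are an assumption here):

```python
import glob
import os

CNI_CONF_DIR = "/etc/cni/net.d"  # confDir from the CRI config dump above

def cni_configs(conf_dir=CNI_CONF_DIR):
    """List candidate CNI network configs, roughly as the CRI plugin does."""
    found = []
    for pattern in ("*.conf", "*.conflist", "*.json"):
        found.extend(glob.glob(os.path.join(conf_dir, pattern)))
    return sorted(found)

confs = cni_configs()
if not confs:
    print(f"no network config found in {CNI_CONF_DIR} (matches the error above)")
else:
    print("\n".join(confs))
```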
Sep 9 00:37:42.553925 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 38146 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:37:42.556069 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:42.569734 systemd-logind[1516]: New session 1 of user core. Sep 9 00:37:42.571314 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:37:42.574041 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:37:42.605790 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:37:42.622307 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:37:42.638243 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:37:42.638739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:37:42.640443 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:37:42.646403 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:37:42.647154 systemd-logind[1516]: New session c1 of user core. Sep 9 00:37:42.796349 systemd[1677]: Queued start job for default target default.target. Sep 9 00:37:42.809204 systemd[1677]: Created slice app.slice - User Application Slice. Sep 9 00:37:42.809237 systemd[1677]: Reached target paths.target - Paths. Sep 9 00:37:42.809286 systemd[1677]: Reached target timers.target - Timers. Sep 9 00:37:42.810945 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:37:42.823250 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:37:42.823439 systemd[1677]: Reached target sockets.target - Sockets. Sep 9 00:37:42.823514 systemd[1677]: Reached target basic.target - Basic System. Sep 9 00:37:42.823585 systemd[1677]: Reached target default.target - Main User Target. Sep 9 00:37:42.823627 systemd[1677]: Startup finished in 169ms. Sep 9 00:37:42.823800 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:37:42.826913 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:37:42.828185 systemd[1]: Startup finished in 2.985s (kernel) + 8.728s (initrd) + 6.523s (userspace) = 18.237s. Sep 9 00:37:42.896076 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:38152.service - OpenSSH per-connection server daemon (10.0.0.1:38152). Sep 9 00:37:42.949206 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 38152 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:37:42.951157 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:42.958999 systemd-logind[1516]: New session 2 of user core. Sep 9 00:37:42.965064 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:37:43.021316 sshd[1705]: Connection closed by 10.0.0.1 port 38152 Sep 9 00:37:43.021791 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:43.035223 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:38152.service: Deactivated successfully. Sep 9 00:37:43.037352 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:37:43.038453 systemd-logind[1516]: Session 2 logged out. Waiting for processes to exit. 
Sep 9 00:37:43.041504 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:38166.service - OpenSSH per-connection server daemon (10.0.0.1:38166). Sep 9 00:37:43.043285 systemd-logind[1516]: Removed session 2. Sep 9 00:37:43.103914 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 38166 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:37:43.105880 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:43.112260 systemd-logind[1516]: New session 3 of user core. Sep 9 00:37:43.117950 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:37:43.123121 kubelet[1681]: E0909 00:37:43.123075 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:37:43.127473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:37:43.127679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:37:43.128143 systemd[1]: kubelet.service: Consumed 1.046s CPU time, 266M memory peak. Sep 9 00:37:43.168648 sshd[1714]: Connection closed by 10.0.0.1 port 38166 Sep 9 00:37:43.169055 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:43.182424 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:38166.service: Deactivated successfully. Sep 9 00:37:43.184128 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:37:43.184838 systemd-logind[1516]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:37:43.187367 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:38168.service - OpenSSH per-connection server daemon (10.0.0.1:38168). Sep 9 00:37:43.188163 systemd-logind[1516]: Removed session 3. Sep 9 00:37:43.253445 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 38168 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:37:43.255608 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:43.260489 systemd-logind[1516]: New session 4 of user core. Sep 9 00:37:43.274946 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:37:43.329641 sshd[1724]: Connection closed by 10.0.0.1 port 38168 Sep 9 00:37:43.330039 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:43.343679 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:38168.service: Deactivated successfully. Sep 9 00:37:43.345720 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:37:43.346578 systemd-logind[1516]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:37:43.349499 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:38172.service - OpenSSH per-connection server daemon (10.0.0.1:38172). Sep 9 00:37:43.350148 systemd-logind[1516]: Removed session 4. Sep 9 00:37:43.411630 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 38172 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:37:43.413164 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:43.418004 systemd-logind[1516]: New session 5 of user core. Sep 9 00:37:43.427932 systemd[1]: Started session-5.scope - Session 5 of User core. 
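The kubelet failure above (status=1, repeated on later restarts in this log) is simply that /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally written during init/join. A hedged repair sketch, assuming a throwaway test box; the apiVersion/kind are the real KubeletConfiguration schema, but the minimal content shown is an example, not a production config:

```python
from pathlib import Path

# Minimal KubeletConfiguration; cgroupDriver: systemd matches what the kubelet
# later reports receiving from the CRI runtime in this log.
MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

path = Path("/var/lib/kubelet/config.yaml")
if not path.exists():
    print(f"{path} missing -> kubelet exits with status=1, as logged above")
    # path.write_text(MINIMAL_CONFIG)  # uncomment only on a disposable machine
```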
Sep 9 00:37:43.487495 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:37:43.487853 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:37:43.503457 sudo[1734]: pam_unix(sudo:session): session closed for user root Sep 9 00:37:43.505318 sshd[1733]: Connection closed by 10.0.0.1 port 38172 Sep 9 00:37:43.505674 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:43.526357 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:38172.service: Deactivated successfully. Sep 9 00:37:43.528332 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:37:43.529190 systemd-logind[1516]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:37:43.531625 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:38186.service - OpenSSH per-connection server daemon (10.0.0.1:38186). Sep 9 00:37:43.532625 systemd-logind[1516]: Removed session 5. Sep 9 00:37:43.592550 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 38186 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:37:43.594019 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:43.598428 systemd-logind[1516]: New session 6 of user core. Sep 9 00:37:43.607923 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:37:43.662936 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:37:43.663299 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:37:43.672842 sudo[1746]: pam_unix(sudo:session): session closed for user root Sep 9 00:37:43.680351 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 00:37:43.680682 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:37:43.693731 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:37:43.756844 augenrules[1768]: No rules Sep 9 00:37:43.758725 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:37:43.759080 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:37:43.760306 sudo[1745]: pam_unix(sudo:session): session closed for user root Sep 9 00:37:43.762110 sshd[1744]: Connection closed by 10.0.0.1 port 38186 Sep 9 00:37:43.762521 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:43.776596 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:38186.service: Deactivated successfully. Sep 9 00:37:43.779285 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:37:43.780264 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:37:43.783367 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:38194.service - OpenSSH per-connection server daemon (10.0.0.1:38194). Sep 9 00:37:43.784175 systemd-logind[1516]: Removed session 6. Sep 9 00:37:43.853667 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 38194 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:37:43.855250 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:43.859738 systemd-logind[1516]: New session 7 of user core. Sep 9 00:37:43.867898 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 9 00:37:43.921234 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:37:43.921563 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:37:44.214827 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:37:44.237110 (dockerd)[1801]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:37:44.453088 dockerd[1801]: time="2025-09-09T00:37:44.453020057Z" level=info msg="Starting up" Sep 9 00:37:44.453847 dockerd[1801]: time="2025-09-09T00:37:44.453813939Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 00:37:44.467430 dockerd[1801]: time="2025-09-09T00:37:44.467305445Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 00:37:45.834621 dockerd[1801]: time="2025-09-09T00:37:45.834565094Z" level=info msg="Loading containers: start." Sep 9 00:37:45.925784 kernel: Initializing XFRM netlink socket Sep 9 00:37:46.203879 systemd-networkd[1476]: docker0: Link UP Sep 9 00:37:46.208880 dockerd[1801]: time="2025-09-09T00:37:46.208849511Z" level=info msg="Loading containers: done." Sep 9 00:37:46.226293 dockerd[1801]: time="2025-09-09T00:37:46.226228409Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:37:46.226489 dockerd[1801]: time="2025-09-09T00:37:46.226333430Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 00:37:46.226489 dockerd[1801]: time="2025-09-09T00:37:46.226440482Z" level=info msg="Initializing buildkit" Sep 9 00:37:46.257946 dockerd[1801]: time="2025-09-09T00:37:46.257885508Z" level=info msg="Completed buildkit initialization" Sep 9 00:37:46.262087 dockerd[1801]: time="2025-09-09T00:37:46.262017517Z" level=info msg="Daemon has completed initialization" Sep 9 00:37:46.262156 dockerd[1801]: time="2025-09-09T00:37:46.262119830Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:37:46.262291 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:37:47.047618 containerd[1580]: time="2025-09-09T00:37:47.047552834Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 00:37:48.571741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806333188.mount: Deactivated successfully. 
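Once dockerd logs "API listen on /run/docker.sock", the daemon is reachable over its Unix socket. A small sanity check using the Docker SDK for Python, assuming the docker package is installed and the invoking user can read the socket:

```python
import docker  # pip install docker

client = docker.from_env()        # connects via the local Docker socket
print(client.ping())              # True once "API listen on /run/docker.sock"
print(client.version().get("Version"))  # e.g. "28.0.4", as in the daemon log
```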
Sep 9 00:37:50.394580 containerd[1580]: time="2025-09-09T00:37:50.394486844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:50.395312 containerd[1580]: time="2025-09-09T00:37:50.395226287Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 9 00:37:50.396581 containerd[1580]: time="2025-09-09T00:37:50.396547180Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:50.399904 containerd[1580]: time="2025-09-09T00:37:50.399384281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:50.400617 containerd[1580]: time="2025-09-09T00:37:50.400573970Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 3.352978199s" Sep 9 00:37:50.400617 containerd[1580]: time="2025-09-09T00:37:50.400610213Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 9 00:37:50.401718 containerd[1580]: time="2025-09-09T00:37:50.401658628Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 00:37:52.611314 containerd[1580]: time="2025-09-09T00:37:52.611227684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:52.612786 containerd[1580]: time="2025-09-09T00:37:52.612746274Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 9 00:37:52.614733 containerd[1580]: time="2025-09-09T00:37:52.614689245Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:52.617595 containerd[1580]: time="2025-09-09T00:37:52.617541582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:52.618376 containerd[1580]: time="2025-09-09T00:37:52.618340469Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 2.216639053s" Sep 9 00:37:52.618430 containerd[1580]: time="2025-09-09T00:37:52.618380355Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 9 00:37:52.619023 containerd[1580]: 
time="2025-09-09T00:37:52.618991754Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 00:37:53.378260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:37:53.380188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:37:53.619806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:37:53.734221 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:37:54.328783 kubelet[2088]: E0909 00:37:54.328675 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:37:54.336887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:37:54.337147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:37:54.337682 systemd[1]: kubelet.service: Consumed 378ms CPU time, 110.3M memory peak. Sep 9 00:37:56.716921 containerd[1580]: time="2025-09-09T00:37:56.716835331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:56.744225 containerd[1580]: time="2025-09-09T00:37:56.744147813Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 9 00:37:56.768258 containerd[1580]: time="2025-09-09T00:37:56.768190413Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:56.799950 containerd[1580]: time="2025-09-09T00:37:56.799884960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:56.801164 containerd[1580]: time="2025-09-09T00:37:56.801132944Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 4.182112728s" Sep 9 00:37:56.801229 containerd[1580]: time="2025-09-09T00:37:56.801170242Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 9 00:37:56.801781 containerd[1580]: time="2025-09-09T00:37:56.801662478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 00:37:58.689110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2955661631.mount: Deactivated successfully. 
Sep 9 00:37:59.766604 containerd[1580]: time="2025-09-09T00:37:59.766515409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:59.767342 containerd[1580]: time="2025-09-09T00:37:59.767281725Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 9 00:37:59.768873 containerd[1580]: time="2025-09-09T00:37:59.768809692Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:59.771060 containerd[1580]: time="2025-09-09T00:37:59.771011319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:59.771661 containerd[1580]: time="2025-09-09T00:37:59.771617585Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.969925092s" Sep 9 00:37:59.771661 containerd[1580]: time="2025-09-09T00:37:59.771648359Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 9 00:37:59.772249 containerd[1580]: time="2025-09-09T00:37:59.772201898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 00:38:01.528496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126587026.mount: Deactivated successfully. Sep 9 00:38:04.587936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:38:04.590434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:38:04.835076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:38:04.849212 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:38:04.910985 kubelet[2168]: E0909 00:38:04.910927 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:38:04.922382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:38:04.922624 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:38:04.923329 systemd[1]: kubelet.service: Consumed 257ms CPU time, 110.9M memory peak. 
Sep 9 00:38:05.290249 containerd[1580]: time="2025-09-09T00:38:05.290069372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:05.291013 containerd[1580]: time="2025-09-09T00:38:05.290951617Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 9 00:38:05.292452 containerd[1580]: time="2025-09-09T00:38:05.292399531Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:05.295259 containerd[1580]: time="2025-09-09T00:38:05.295190552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:05.296046 containerd[1580]: time="2025-09-09T00:38:05.296016445Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 5.523790617s" Sep 9 00:38:05.296046 containerd[1580]: time="2025-09-09T00:38:05.296046436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 9 00:38:05.296490 containerd[1580]: time="2025-09-09T00:38:05.296466877Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:38:05.845357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453335169.mount: Deactivated successfully. 
Sep 9 00:38:05.854206 containerd[1580]: time="2025-09-09T00:38:05.854125569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:38:05.854974 containerd[1580]: time="2025-09-09T00:38:05.854926739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:38:05.856165 containerd[1580]: time="2025-09-09T00:38:05.856087358Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:38:05.858355 containerd[1580]: time="2025-09-09T00:38:05.858253968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:38:05.858920 containerd[1580]: time="2025-09-09T00:38:05.858878780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 562.385492ms" Sep 9 00:38:05.858982 containerd[1580]: time="2025-09-09T00:38:05.858917512Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:38:05.859427 containerd[1580]: time="2025-09-09T00:38:05.859398906Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 00:38:06.995309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829049126.mount: Deactivated successfully. 
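The pull messages above each report an image size and a pull duration, which gives a rough effective throughput per image. Note the reported size is the stored image size, not the compressed bytes transferred, so these are approximations; all numbers below are taken verbatim from the log:

```python
pulls = {  # image: (reported size in bytes, pull duration in seconds)
    "kube-apiserver:v1.33.4": (30_075_464, 3.352978199),
    "kube-controller-manager:v1.33.4": (27_646_961, 2.216639053),
    "kube-scheduler:v1.33.4": (21_782_824, 4.182112728),
    "kube-proxy:v1.33.4": (31_898_645, 2.969925092),
    "coredns:v1.12.0": (20_939_036, 5.523790617),
    "pause:3.10": (320_368, 0.562385492),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 1e6:.1f} MB/s effective")
```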
Sep 9 00:38:10.714947 containerd[1580]: time="2025-09-09T00:38:10.714856686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:10.715602 containerd[1580]: time="2025-09-09T00:38:10.715570642Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 9 00:38:10.716948 containerd[1580]: time="2025-09-09T00:38:10.716913777Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:10.719546 containerd[1580]: time="2025-09-09T00:38:10.719497281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:10.720661 containerd[1580]: time="2025-09-09T00:38:10.720617651Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.861190159s" Sep 9 00:38:10.720661 containerd[1580]: time="2025-09-09T00:38:10.720658494Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 9 00:38:14.660258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:38:14.660418 systemd[1]: kubelet.service: Consumed 257ms CPU time, 110.9M memory peak. Sep 9 00:38:14.662548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:38:14.688202 systemd[1]: Reload requested from client PID 2268 ('systemctl') (unit session-7.scope)... Sep 9 00:38:14.688229 systemd[1]: Reloading... Sep 9 00:38:14.773810 zram_generator::config[2310]: No configuration found. Sep 9 00:38:15.955465 systemd[1]: Reloading finished in 1266 ms. Sep 9 00:38:16.024505 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:38:16.024604 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:38:16.024932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:38:16.024975 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.3M memory peak. Sep 9 00:38:16.026528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:38:16.700797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:38:16.717215 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:38:16.756105 kubelet[2359]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:38:16.756105 kubelet[2359]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:38:16.756105 kubelet[2359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:38:16.756659 kubelet[2359]: I0909 00:38:16.756130 2359 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:38:18.304555 kubelet[2359]: I0909 00:38:18.304484 2359 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:38:18.304555 kubelet[2359]: I0909 00:38:18.304528 2359 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:38:18.305042 kubelet[2359]: I0909 00:38:18.304786 2359 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:38:18.335232 kubelet[2359]: I0909 00:38:18.335164 2359 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:38:18.335650 kubelet[2359]: E0909 00:38:18.335594 2359 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:38:18.343193 kubelet[2359]: I0909 00:38:18.343141 2359 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:38:18.349154 kubelet[2359]: I0909 00:38:18.349123 2359 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:38:18.349432 kubelet[2359]: I0909 00:38:18.349387 2359 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:38:18.349578 kubelet[2359]: I0909 00:38:18.349416 2359 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:38:18.349734 kubelet[2359]: I0909 00:38:18.349579 2359 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 9 00:38:18.349734 kubelet[2359]: I0909 00:38:18.349587 2359 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:38:18.350418 kubelet[2359]: I0909 00:38:18.350380 2359 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:38:18.352574 kubelet[2359]: I0909 00:38:18.352544 2359 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:38:18.352574 kubelet[2359]: I0909 00:38:18.352566 2359 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:38:18.352652 kubelet[2359]: I0909 00:38:18.352594 2359 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:38:18.352652 kubelet[2359]: I0909 00:38:18.352615 2359 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:38:18.358927 kubelet[2359]: E0909 00:38:18.358862 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:38:18.359028 kubelet[2359]: E0909 00:38:18.358935 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:38:18.359480 kubelet[2359]: I0909 00:38:18.359452 2359 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 00:38:18.363476 kubelet[2359]: I0909 00:38:18.363382 2359 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:38:18.364063 kubelet[2359]: W0909 00:38:18.364043 2359 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
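The container_manager_linux.go:272 entry above dumps the whole node config as one JSON blob; the HardEvictionThresholds array inside it is the part worth reading. A minimal sketch, assuming only the field shapes visible in that log line (these are not the kubelet's own Go types), that parses two of the thresholds and prints them readably:

```go
// A minimal sketch, assuming only the field shapes visible in the log line
// above (not the kubelet's real types): parse two HardEvictionThresholds
// entries and print them in readable form.
package main

import (
	"encoding/json"
	"fmt"
)

type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"`   // e.g. "100Mi", or null
		Percentage float64 `json:"Percentage"` // e.g. 0.1 for 10%
	} `json:"Value"`
}

func main() {
	// Copied verbatim from the container_manager_linux.go:272 entry above.
	raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
	         {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]`

	var ts []threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```

Running it prints `memory.available LessThan 100Mi` and `nodefs.available LessThan 10%`: the node starts hard-evicting pods below 100Mi of free memory or 10% free node filesystem.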
Sep 9 00:38:18.367303 kubelet[2359]: I0909 00:38:18.367284 2359 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:38:18.367357 kubelet[2359]: I0909 00:38:18.367347 2359 server.go:1289] "Started kubelet" Sep 9 00:38:18.369259 kubelet[2359]: I0909 00:38:18.369209 2359 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:38:18.370270 kubelet[2359]: I0909 00:38:18.370019 2359 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:38:18.370270 kubelet[2359]: I0909 00:38:18.370034 2359 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:38:18.370524 kubelet[2359]: I0909 00:38:18.370025 2359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:38:18.371088 kubelet[2359]: I0909 00:38:18.371059 2359 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:38:18.372049 kubelet[2359]: I0909 00:38:18.372002 2359 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:38:18.372417 kubelet[2359]: E0909 00:38:18.372389 2359 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:38:18.372488 kubelet[2359]: I0909 00:38:18.372436 2359 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:38:18.372637 kubelet[2359]: I0909 00:38:18.372611 2359 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:38:18.372726 kubelet[2359]: I0909 00:38:18.372706 2359 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:38:18.373054 kubelet[2359]: E0909 00:38:18.371889 2359 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186376405a29e602 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:38:18.367305218 +0000 UTC m=+1.644346250,LastTimestamp:2025-09-09 00:38:18.367305218 +0000 UTC m=+1.644346250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:38:18.373383 kubelet[2359]: E0909 00:38:18.373336 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:38:18.373707 kubelet[2359]: I0909 00:38:18.373680 2359 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:38:18.373869 kubelet[2359]: I0909 00:38:18.373842 2359 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:38:18.373965 kubelet[2359]: E0909 00:38:18.373941 2359 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:38:18.375054 kubelet[2359]: I0909 00:38:18.375029 2359 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:38:18.375351 kubelet[2359]: E0909 00:38:18.375316 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" Sep 9 00:38:18.405933 kubelet[2359]: I0909 00:38:18.405539 2359 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:38:18.406089 kubelet[2359]: I0909 00:38:18.406069 2359 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:38:18.406089 kubelet[2359]: I0909 00:38:18.406085 2359 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:38:18.406158 kubelet[2359]: I0909 00:38:18.406104 2359 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:38:18.407553 kubelet[2359]: I0909 00:38:18.407514 2359 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:38:18.407553 kubelet[2359]: I0909 00:38:18.407544 2359 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:38:18.407647 kubelet[2359]: I0909 00:38:18.407562 2359 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:38:18.407647 kubelet[2359]: I0909 00:38:18.407575 2359 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:38:18.407647 kubelet[2359]: E0909 00:38:18.407619 2359 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:38:18.473503 kubelet[2359]: E0909 00:38:18.473390 2359 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:38:18.508724 kubelet[2359]: E0909 00:38:18.508654 2359 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:38:18.574084 kubelet[2359]: E0909 00:38:18.573945 2359 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:38:18.576634 kubelet[2359]: E0909 00:38:18.576570 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" Sep 9 00:38:18.674778 kubelet[2359]: E0909 00:38:18.674744 2359 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:38:18.708962 kubelet[2359]: E0909 00:38:18.708933 2359 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:38:18.737370 kubelet[2359]: E0909 00:38:18.737323 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:38:18.741548 kubelet[2359]: I0909 00:38:18.741505 2359 policy_none.go:49] "None policy: Start" Sep 9 
00:38:18.741548 kubelet[2359]: I0909 00:38:18.741526 2359 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:38:18.741548 kubelet[2359]: I0909 00:38:18.741541 2359 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:38:18.749053 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:38:18.760405 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:38:18.763979 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 00:38:18.774639 kubelet[2359]: E0909 00:38:18.774604 2359 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:38:18.774843 kubelet[2359]: E0909 00:38:18.774814 2359 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:38:18.774904 kubelet[2359]: I0909 00:38:18.774864 2359 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:38:18.774904 kubelet[2359]: I0909 00:38:18.774878 2359 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:38:18.775796 kubelet[2359]: I0909 00:38:18.775120 2359 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:38:18.776073 kubelet[2359]: E0909 00:38:18.776008 2359 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:38:18.776073 kubelet[2359]: E0909 00:38:18.776060 2359 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:38:18.876677 kubelet[2359]: I0909 00:38:18.876479 2359 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:38:18.877145 kubelet[2359]: E0909 00:38:18.877084 2359 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 9 00:38:18.977948 kubelet[2359]: E0909 00:38:18.977892 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" Sep 9 00:38:19.078422 kubelet[2359]: I0909 00:38:19.078398 2359 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:38:19.078819 kubelet[2359]: E0909 00:38:19.078743 2359 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 9 00:38:19.119672 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 00:38:19.129809 kubelet[2359]: E0909 00:38:19.129694 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:38:19.132656 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
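Note the interval on the "Failed to ensure lease exists, will retry" entries: 200ms above, then 400ms, 800ms, and 1.6s further down — a plain doubling backoff while the apiserver at 10.0.0.118:6443 still refuses connections. An illustrative Go sketch of that pattern (not the lease controller's actual code):

```go
// Illustrative only — not the lease controller's code. Shows the doubling
// retry visible in the log while 10.0.0.118:6443 refuses connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const apiserver = "10.0.0.118:6443" // address taken from the log above
	interval := 200 * time.Millisecond  // first retry interval in the log
	for attempt := 1; attempt <= 4; attempt++ {
		conn, err := net.DialTimeout("tcp", apiserver, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable")
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, interval)
		time.Sleep(interval)
		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as in the log
	}
}
```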
Sep 9 00:38:19.141939 kubelet[2359]: E0909 00:38:19.141906 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:38:19.144963 systemd[1]: Created slice kubepods-burstable-pod3135ca76ca8e399343e240caa46a1438.slice - libcontainer container kubepods-burstable-pod3135ca76ca8e399343e240caa46a1438.slice. Sep 9 00:38:19.146813 kubelet[2359]: E0909 00:38:19.146792 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:38:19.178258 kubelet[2359]: I0909 00:38:19.178224 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:19.178308 kubelet[2359]: I0909 00:38:19.178258 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:19.178308 kubelet[2359]: I0909 00:38:19.178279 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:19.178308 kubelet[2359]: I0909 00:38:19.178295 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:38:19.178308 kubelet[2359]: I0909 00:38:19.178309 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:19.178405 kubelet[2359]: I0909 00:38:19.178324 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:19.178405 kubelet[2359]: I0909 00:38:19.178337 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:19.178405 kubelet[2359]: I0909 00:38:19.178365 2359 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:19.178405 kubelet[2359]: I0909 00:38:19.178394 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:19.431111 kubelet[2359]: E0909 00:38:19.430955 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:19.431578 containerd[1580]: time="2025-09-09T00:38:19.431469891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 00:38:19.442718 kubelet[2359]: E0909 00:38:19.442673 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:19.443098 containerd[1580]: time="2025-09-09T00:38:19.443055474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 00:38:19.447519 kubelet[2359]: E0909 00:38:19.447489 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:19.447778 containerd[1580]: time="2025-09-09T00:38:19.447745967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3135ca76ca8e399343e240caa46a1438,Namespace:kube-system,Attempt:0,}" Sep 9 00:38:19.484872 kubelet[2359]: I0909 00:38:19.484280 2359 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:38:19.484872 kubelet[2359]: E0909 00:38:19.484620 2359 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 9 00:38:19.487425 containerd[1580]: time="2025-09-09T00:38:19.487373031Z" level=info msg="connecting to shim 1a9c157714cae5328c479a0bf4ed4e184db5ccaaa676bb54959514f16368d64f" address="unix:///run/containerd/s/d7007443a16122a59485198bebec0175dcf2af1dd033d331bf5a23a0fb5b1efe" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:38:19.495780 containerd[1580]: time="2025-09-09T00:38:19.495665370Z" level=info msg="connecting to shim 9a8cbf680f6c03dde1d76c94bbdd7241ff6909a96f616d7595f172190baef4e4" address="unix:///run/containerd/s/8b3335145a9e076c4982a926e6d185d1d3bc234028368e39597d64ff4816882c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:38:19.499779 containerd[1580]: time="2025-09-09T00:38:19.497356752Z" level=info msg="connecting to shim 6a884ebe7bc4f59646bfc60130cd3edb4e52bbe071e880ce722830178ba9f8b7" address="unix:///run/containerd/s/f29be4f15dfadf030d6d9de1844df0b6a8b32b101e9c045aeda8174a0d4b84fd" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:38:19.517830 kubelet[2359]: E0909 00:38:19.516727 2359 reflector.go:200] 
"Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:38:19.532778 systemd[1]: Started cri-containerd-1a9c157714cae5328c479a0bf4ed4e184db5ccaaa676bb54959514f16368d64f.scope - libcontainer container 1a9c157714cae5328c479a0bf4ed4e184db5ccaaa676bb54959514f16368d64f. Sep 9 00:38:19.534196 systemd[1]: Started cri-containerd-9a8cbf680f6c03dde1d76c94bbdd7241ff6909a96f616d7595f172190baef4e4.scope - libcontainer container 9a8cbf680f6c03dde1d76c94bbdd7241ff6909a96f616d7595f172190baef4e4. Sep 9 00:38:19.537661 systemd[1]: Started cri-containerd-6a884ebe7bc4f59646bfc60130cd3edb4e52bbe071e880ce722830178ba9f8b7.scope - libcontainer container 6a884ebe7bc4f59646bfc60130cd3edb4e52bbe071e880ce722830178ba9f8b7. Sep 9 00:38:19.608788 containerd[1580]: time="2025-09-09T00:38:19.608720059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a884ebe7bc4f59646bfc60130cd3edb4e52bbe071e880ce722830178ba9f8b7\"" Sep 9 00:38:19.609729 kubelet[2359]: E0909 00:38:19.609698 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:19.611454 containerd[1580]: time="2025-09-09T00:38:19.611403200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a9c157714cae5328c479a0bf4ed4e184db5ccaaa676bb54959514f16368d64f\"" Sep 9 00:38:19.613381 kubelet[2359]: E0909 00:38:19.613355 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:19.617433 containerd[1580]: time="2025-09-09T00:38:19.617393931Z" level=info msg="CreateContainer within sandbox \"6a884ebe7bc4f59646bfc60130cd3edb4e52bbe071e880ce722830178ba9f8b7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:38:19.618225 containerd[1580]: time="2025-09-09T00:38:19.618193089Z" level=info msg="CreateContainer within sandbox \"1a9c157714cae5328c479a0bf4ed4e184db5ccaaa676bb54959514f16368d64f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:38:19.624998 containerd[1580]: time="2025-09-09T00:38:19.624951814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3135ca76ca8e399343e240caa46a1438,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a8cbf680f6c03dde1d76c94bbdd7241ff6909a96f616d7595f172190baef4e4\"" Sep 9 00:38:19.625699 kubelet[2359]: E0909 00:38:19.625674 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:19.630118 containerd[1580]: time="2025-09-09T00:38:19.630071425Z" level=info msg="CreateContainer within sandbox \"9a8cbf680f6c03dde1d76c94bbdd7241ff6909a96f616d7595f172190baef4e4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:38:19.630930 containerd[1580]: time="2025-09-09T00:38:19.630901949Z" level=info msg="Container 
70ccf1163ea142537f555db1b71ba86fb2e6c8ff8940313671879279c926ffeb: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:38:19.632461 containerd[1580]: time="2025-09-09T00:38:19.632426723Z" level=info msg="Container f9e728b44ea0278e06b71ccc3282c94494d1ff0bf96ded8848880ce1204a6062: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:38:19.640171 containerd[1580]: time="2025-09-09T00:38:19.640137039Z" level=info msg="CreateContainer within sandbox \"1a9c157714cae5328c479a0bf4ed4e184db5ccaaa676bb54959514f16368d64f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"70ccf1163ea142537f555db1b71ba86fb2e6c8ff8940313671879279c926ffeb\"" Sep 9 00:38:19.640624 containerd[1580]: time="2025-09-09T00:38:19.640596642Z" level=info msg="StartContainer for \"70ccf1163ea142537f555db1b71ba86fb2e6c8ff8940313671879279c926ffeb\"" Sep 9 00:38:19.641740 containerd[1580]: time="2025-09-09T00:38:19.641706863Z" level=info msg="connecting to shim 70ccf1163ea142537f555db1b71ba86fb2e6c8ff8940313671879279c926ffeb" address="unix:///run/containerd/s/d7007443a16122a59485198bebec0175dcf2af1dd033d331bf5a23a0fb5b1efe" protocol=ttrpc version=3 Sep 9 00:38:19.642256 containerd[1580]: time="2025-09-09T00:38:19.642228627Z" level=info msg="Container 19f982d903636666c8627aa7ffa9ad92de4d22d7be801759f43c4d44d9dc48a1: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:38:19.646338 containerd[1580]: time="2025-09-09T00:38:19.646304451Z" level=info msg="CreateContainer within sandbox \"6a884ebe7bc4f59646bfc60130cd3edb4e52bbe071e880ce722830178ba9f8b7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f9e728b44ea0278e06b71ccc3282c94494d1ff0bf96ded8848880ce1204a6062\"" Sep 9 00:38:19.647606 containerd[1580]: time="2025-09-09T00:38:19.646882148Z" level=info msg="StartContainer for \"f9e728b44ea0278e06b71ccc3282c94494d1ff0bf96ded8848880ce1204a6062\"" Sep 9 00:38:19.647984 containerd[1580]: time="2025-09-09T00:38:19.647951835Z" level=info msg="connecting to shim f9e728b44ea0278e06b71ccc3282c94494d1ff0bf96ded8848880ce1204a6062" address="unix:///run/containerd/s/f29be4f15dfadf030d6d9de1844df0b6a8b32b101e9c045aeda8174a0d4b84fd" protocol=ttrpc version=3 Sep 9 00:38:19.649267 containerd[1580]: time="2025-09-09T00:38:19.649182443Z" level=info msg="CreateContainer within sandbox \"9a8cbf680f6c03dde1d76c94bbdd7241ff6909a96f616d7595f172190baef4e4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"19f982d903636666c8627aa7ffa9ad92de4d22d7be801759f43c4d44d9dc48a1\"" Sep 9 00:38:19.649673 containerd[1580]: time="2025-09-09T00:38:19.649641695Z" level=info msg="StartContainer for \"19f982d903636666c8627aa7ffa9ad92de4d22d7be801759f43c4d44d9dc48a1\"" Sep 9 00:38:19.651168 containerd[1580]: time="2025-09-09T00:38:19.651082924Z" level=info msg="connecting to shim 19f982d903636666c8627aa7ffa9ad92de4d22d7be801759f43c4d44d9dc48a1" address="unix:///run/containerd/s/8b3335145a9e076c4982a926e6d185d1d3bc234028368e39597d64ff4816882c" protocol=ttrpc version=3 Sep 9 00:38:19.664051 systemd[1]: Started cri-containerd-70ccf1163ea142537f555db1b71ba86fb2e6c8ff8940313671879279c926ffeb.scope - libcontainer container 70ccf1163ea142537f555db1b71ba86fb2e6c8ff8940313671879279c926ffeb. Sep 9 00:38:19.669679 systemd[1]: Started cri-containerd-f9e728b44ea0278e06b71ccc3282c94494d1ff0bf96ded8848880ce1204a6062.scope - libcontainer container f9e728b44ea0278e06b71ccc3282c94494d1ff0bf96ded8848880ce1204a6062. 
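The "connecting to shim …" entries above carry `address="unix:///run/containerd/s/<hash>"`: containerd reaches each sandbox's shim over a per-sandbox unix socket speaking ttrpc. A rough sketch, only to show how such an address resolves to a dialable socket (the ttrpc handshake itself is omitted):

```go
// A rough sketch of how a "unix:///run/containerd/s/<hash>" shim address
// resolves to a dialable unix socket; the ttrpc handshake is omitted.
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func dialShim(address string) error {
	path := strings.TrimPrefix(address, "unix://")
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// Socket path copied from the log; it exists only while that sandbox runs.
	addr := "unix:///run/containerd/s/d7007443a16122a59485198bebec0175dcf2af1dd033d331bf5a23a0fb5b1efe"
	if err := dialShim(addr); err != nil {
		fmt.Println("shim not reachable:", err)
		return
	}
	fmt.Println("shim socket reachable")
}
```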
Sep 9 00:38:19.674742 systemd[1]: Started cri-containerd-19f982d903636666c8627aa7ffa9ad92de4d22d7be801759f43c4d44d9dc48a1.scope - libcontainer container 19f982d903636666c8627aa7ffa9ad92de4d22d7be801759f43c4d44d9dc48a1. Sep 9 00:38:19.741017 containerd[1580]: time="2025-09-09T00:38:19.740879801Z" level=info msg="StartContainer for \"f9e728b44ea0278e06b71ccc3282c94494d1ff0bf96ded8848880ce1204a6062\" returns successfully" Sep 9 00:38:19.742339 containerd[1580]: time="2025-09-09T00:38:19.742308154Z" level=info msg="StartContainer for \"19f982d903636666c8627aa7ffa9ad92de4d22d7be801759f43c4d44d9dc48a1\" returns successfully" Sep 9 00:38:19.750708 containerd[1580]: time="2025-09-09T00:38:19.750681365Z" level=info msg="StartContainer for \"70ccf1163ea142537f555db1b71ba86fb2e6c8ff8940313671879279c926ffeb\" returns successfully" Sep 9 00:38:19.751419 kubelet[2359]: E0909 00:38:19.751296 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:38:19.778568 kubelet[2359]: E0909 00:38:19.778509 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="1.6s" Sep 9 00:38:20.286207 kubelet[2359]: I0909 00:38:20.285917 2359 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:38:20.420606 kubelet[2359]: E0909 00:38:20.420574 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:38:20.422356 kubelet[2359]: E0909 00:38:20.422296 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:20.425296 kubelet[2359]: E0909 00:38:20.425178 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:38:20.425296 kubelet[2359]: E0909 00:38:20.425259 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:20.429979 kubelet[2359]: E0909 00:38:20.429953 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:38:20.430132 kubelet[2359]: E0909 00:38:20.430114 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:21.355571 kubelet[2359]: I0909 00:38:21.355516 2359 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:38:21.355571 kubelet[2359]: E0909 00:38:21.355561 2359 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:38:21.356100 kubelet[2359]: I0909 00:38:21.355714 2359 apiserver.go:52] "Watching apiserver" Sep 9 00:38:21.373625 kubelet[2359]: I0909 00:38:21.373563 2359 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:38:21.375709 kubelet[2359]: I0909 00:38:21.375677 2359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:21.380836 kubelet[2359]: E0909 00:38:21.380790 2359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:21.380836 kubelet[2359]: I0909 00:38:21.380836 2359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:38:21.382369 kubelet[2359]: E0909 00:38:21.382344 2359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 00:38:21.382369 kubelet[2359]: I0909 00:38:21.382364 2359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:21.383915 kubelet[2359]: E0909 00:38:21.383879 2359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:21.429545 kubelet[2359]: I0909 00:38:21.429514 2359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:21.429793 kubelet[2359]: I0909 00:38:21.429715 2359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:38:21.431471 kubelet[2359]: E0909 00:38:21.431409 2359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:21.431695 kubelet[2359]: E0909 00:38:21.431578 2359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 00:38:21.431695 kubelet[2359]: E0909 00:38:21.431592 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:21.431798 kubelet[2359]: E0909 00:38:21.431714 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:23.153492 systemd[1]: Reload requested from client PID 2645 ('systemctl') (unit session-7.scope)... Sep 9 00:38:23.153513 systemd[1]: Reloading... Sep 9 00:38:23.243864 zram_generator::config[2688]: No configuration found. Sep 9 00:38:23.484418 systemd[1]: Reloading finished in 330 ms. Sep 9 00:38:23.518968 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:38:23.533248 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:38:23.533633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:38:23.533698 systemd[1]: kubelet.service: Consumed 2.122s CPU time, 133.8M memory peak. Sep 9 00:38:23.535696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
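The "Failed creating a mirror pod … no PriorityClass with name system-node-critical was found" entries above are transient: the built-in scheduling.k8s.io PriorityClasses appear once the apiserver finishes bootstrapping, after which the static-pod mirrors are accepted. A hedged sketch that checks for the class over the standard scheduling.k8s.io/v1 resource URL (credentials and CA handling are omitted for brevity, so expect 401/403 rather than 200):

```go
// Hedged sketch: a one-off check for the built-in PriorityClass over the
// standard scheduling.k8s.io/v1 resource URL. Credentials and CA handling
// are omitted for brevity, so expect 401/403 rather than 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	url := "https://10.0.0.118:6443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical"
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
	}}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```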
Sep 9 00:38:23.753776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:38:23.768117 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:38:23.864481 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:38:23.864481 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:38:23.864481 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:38:23.864909 kubelet[2733]: I0909 00:38:23.864526 2733 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:38:23.874996 kubelet[2733]: I0909 00:38:23.874949 2733 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:38:23.874996 kubelet[2733]: I0909 00:38:23.874985 2733 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:38:23.875546 kubelet[2733]: I0909 00:38:23.875521 2733 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:38:23.877058 kubelet[2733]: I0909 00:38:23.877030 2733 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 00:38:23.879301 kubelet[2733]: I0909 00:38:23.879274 2733 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:38:23.882777 kubelet[2733]: I0909 00:38:23.882748 2733 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:38:23.891082 kubelet[2733]: I0909 00:38:23.891054 2733 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:38:23.891335 kubelet[2733]: I0909 00:38:23.891298 2733 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:38:23.891458 kubelet[2733]: I0909 00:38:23.891325 2733 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:38:23.891536 kubelet[2733]: I0909 00:38:23.891461 2733 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:38:23.891536 kubelet[2733]: I0909 00:38:23.891469 2733 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:38:23.891536 kubelet[2733]: I0909 00:38:23.891513 2733 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:38:23.891679 kubelet[2733]: I0909 00:38:23.891659 2733 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:38:23.891679 kubelet[2733]: I0909 00:38:23.891677 2733 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:38:23.891727 kubelet[2733]: I0909 00:38:23.891696 2733 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:38:23.891727 kubelet[2733]: I0909 00:38:23.891711 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:38:23.892867 kubelet[2733]: I0909 00:38:23.892608 2733 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 00:38:23.893521 kubelet[2733]: I0909 00:38:23.893505 2733 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:38:23.899442 kubelet[2733]: I0909 00:38:23.899417 2733 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:38:23.900074 kubelet[2733]: I0909 00:38:23.900059 2733 server.go:1289] "Started kubelet" Sep 9 00:38:23.901623 kubelet[2733]: I0909 00:38:23.901578 2733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:38:23.902084 kubelet[2733]: 
I0909 00:38:23.902066 2733 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:38:23.903782 kubelet[2733]: I0909 00:38:23.902914 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:38:23.903782 kubelet[2733]: I0909 00:38:23.902940 2733 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:38:23.903996 kubelet[2733]: I0909 00:38:23.903981 2733 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:38:23.905914 kubelet[2733]: I0909 00:38:23.905883 2733 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:38:23.907095 kubelet[2733]: I0909 00:38:23.906002 2733 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:38:23.907095 kubelet[2733]: I0909 00:38:23.906260 2733 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:38:23.907095 kubelet[2733]: I0909 00:38:23.906575 2733 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:38:23.907910 kubelet[2733]: I0909 00:38:23.907857 2733 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:38:23.908010 kubelet[2733]: I0909 00:38:23.907981 2733 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:38:23.908529 kubelet[2733]: E0909 00:38:23.908501 2733 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:38:23.910460 kubelet[2733]: I0909 00:38:23.910428 2733 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:38:23.922121 kubelet[2733]: I0909 00:38:23.922071 2733 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:38:23.923285 kubelet[2733]: I0909 00:38:23.923271 2733 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:38:23.923285 kubelet[2733]: I0909 00:38:23.923287 2733 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:38:23.923371 kubelet[2733]: I0909 00:38:23.923305 2733 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
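Of the deprecation warnings at the top of this restart, two (--container-runtime-endpoint and --volume-plugin-dir) point at the file passed via --config; --pod-infra-container-image is simply going away in 1.35. A sketch of a minimal equivalent KubeletConfiguration rendered as JSON — field names are from kubelet.config.k8s.io/v1beta1 and the containerd socket path is an assumption, so verify both against the kubelet version in use:

```go
// Sketch of a minimal KubeletConfiguration covering the two flags the log
// says belong in the --config file. Field names are from
// kubelet.config.k8s.io/v1beta1; the containerd socket path is assumed.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"apiVersion":               "kubelet.config.k8s.io/v1beta1",
		"kind":                     "KubeletConfiguration",
		"containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock", // assumed path
		"volumePluginDir":          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, _ := json.MarshalIndent(cfg, "", "  ") // the kubelet accepts JSON or YAML here
	fmt.Println(string(out))
}
```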
Sep 9 00:38:23.923371 kubelet[2733]: I0909 00:38:23.923311 2733 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:38:23.923371 kubelet[2733]: E0909 00:38:23.923348 2733 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:38:23.955259 kubelet[2733]: I0909 00:38:23.955214 2733 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:38:23.955259 kubelet[2733]: I0909 00:38:23.955245 2733 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:38:23.955259 kubelet[2733]: I0909 00:38:23.955264 2733 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:38:23.955447 kubelet[2733]: I0909 00:38:23.955381 2733 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:38:23.955447 kubelet[2733]: I0909 00:38:23.955400 2733 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:38:23.955447 kubelet[2733]: I0909 00:38:23.955415 2733 policy_none.go:49] "None policy: Start" Sep 9 00:38:23.955447 kubelet[2733]: I0909 00:38:23.955423 2733 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:38:23.955447 kubelet[2733]: I0909 00:38:23.955433 2733 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:38:23.955546 kubelet[2733]: I0909 00:38:23.955509 2733 state_mem.go:75] "Updated machine memory state" Sep 9 00:38:23.959329 kubelet[2733]: E0909 00:38:23.959305 2733 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:38:23.959491 kubelet[2733]: I0909 00:38:23.959473 2733 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:38:23.959525 kubelet[2733]: I0909 00:38:23.959488 2733 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:38:23.960804 kubelet[2733]: I0909 00:38:23.960400 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:38:23.961724 kubelet[2733]: E0909 00:38:23.961697 2733 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:38:24.024243 kubelet[2733]: I0909 00:38:24.024112 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:24.024243 kubelet[2733]: I0909 00:38:24.024169 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:24.024391 kubelet[2733]: I0909 00:38:24.024247 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:38:24.066284 kubelet[2733]: I0909 00:38:24.066254 2733 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:38:24.071429 kubelet[2733]: I0909 00:38:24.071400 2733 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:38:24.071512 kubelet[2733]: I0909 00:38:24.071484 2733 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:38:24.106603 kubelet[2733]: I0909 00:38:24.106564 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:24.106603 kubelet[2733]: I0909 00:38:24.106598 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:38:24.106603 kubelet[2733]: I0909 00:38:24.106618 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:24.106857 kubelet[2733]: I0909 00:38:24.106635 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:24.106857 kubelet[2733]: I0909 00:38:24.106652 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:24.106857 kubelet[2733]: I0909 00:38:24.106668 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:24.106857 kubelet[2733]: I0909 00:38:24.106731 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:24.106857 kubelet[2733]: I0909 00:38:24.106776 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:24.106984 kubelet[2733]: I0909 00:38:24.106814 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:38:24.331566 kubelet[2733]: E0909 00:38:24.331309 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:24.331566 kubelet[2733]: E0909 00:38:24.331374 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:24.331566 kubelet[2733]: E0909 00:38:24.331315 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:24.892410 kubelet[2733]: I0909 00:38:24.892356 2733 apiserver.go:52] "Watching apiserver" Sep 9 00:38:24.906503 kubelet[2733]: I0909 00:38:24.906463 2733 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:38:24.937235 kubelet[2733]: I0909 00:38:24.937150 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:24.937235 kubelet[2733]: I0909 00:38:24.937176 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:38:24.937592 kubelet[2733]: E0909 00:38:24.937410 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:25.243432 kubelet[2733]: E0909 00:38:25.243027 2733 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:38:25.243598 kubelet[2733]: E0909 00:38:25.243493 2733 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:38:25.243754 kubelet[2733]: E0909 00:38:25.243717 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:25.243955 kubelet[2733]: E0909 00:38:25.243875 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:25.262193 kubelet[2733]: I0909 00:38:25.261742 2733 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.261718735 podStartE2EDuration="1.261718735s" podCreationTimestamp="2025-09-09 00:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:38:25.24327492 +0000 UTC m=+1.421163254" watchObservedRunningTime="2025-09-09 00:38:25.261718735 +0000 UTC m=+1.439607059" Sep 9 00:38:25.274192 kubelet[2733]: I0909 00:38:25.274110 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.274091237 podStartE2EDuration="1.274091237s" podCreationTimestamp="2025-09-09 00:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:38:25.262061476 +0000 UTC m=+1.439949811" watchObservedRunningTime="2025-09-09 00:38:25.274091237 +0000 UTC m=+1.451979571" Sep 9 00:38:25.283451 kubelet[2733]: I0909 00:38:25.283385 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.283369306 podStartE2EDuration="1.283369306s" podCreationTimestamp="2025-09-09 00:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:38:25.274404788 +0000 UTC m=+1.452293132" watchObservedRunningTime="2025-09-09 00:38:25.283369306 +0000 UTC m=+1.461257640" Sep 9 00:38:25.592429 update_engine[1523]: I20250909 00:38:25.592326 1523 update_attempter.cc:509] Updating boot flags... Sep 9 00:38:25.939963 kubelet[2733]: E0909 00:38:25.939589 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:25.939963 kubelet[2733]: E0909 00:38:25.939882 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:28.299208 kubelet[2733]: E0909 00:38:28.299168 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:28.489902 kubelet[2733]: E0909 00:38:28.489835 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:29.400613 kubelet[2733]: I0909 00:38:29.400576 2733 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:38:29.401299 kubelet[2733]: I0909 00:38:29.401144 2733 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:38:29.401340 containerd[1580]: time="2025-09-09T00:38:29.401004307Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:38:30.458004 systemd[1]: Created slice kubepods-besteffort-pod0fddf578_bb37_41c9_a34f_8a17c129a2fb.slice - libcontainer container kubepods-besteffort-pod0fddf578_bb37_41c9_a34f_8a17c129a2fb.slice. 
Sep 9 00:38:30.547544 kubelet[2733]: I0909 00:38:30.547473 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fddf578-bb37-41c9-a34f-8a17c129a2fb-kube-proxy\") pod \"kube-proxy-ltj85\" (UID: \"0fddf578-bb37-41c9-a34f-8a17c129a2fb\") " pod="kube-system/kube-proxy-ltj85" Sep 9 00:38:30.547544 kubelet[2733]: I0909 00:38:30.547506 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fddf578-bb37-41c9-a34f-8a17c129a2fb-xtables-lock\") pod \"kube-proxy-ltj85\" (UID: \"0fddf578-bb37-41c9-a34f-8a17c129a2fb\") " pod="kube-system/kube-proxy-ltj85" Sep 9 00:38:30.547544 kubelet[2733]: I0909 00:38:30.547521 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fddf578-bb37-41c9-a34f-8a17c129a2fb-lib-modules\") pod \"kube-proxy-ltj85\" (UID: \"0fddf578-bb37-41c9-a34f-8a17c129a2fb\") " pod="kube-system/kube-proxy-ltj85" Sep 9 00:38:30.547544 kubelet[2733]: I0909 00:38:30.547537 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngx6m\" (UniqueName: \"kubernetes.io/projected/0fddf578-bb37-41c9-a34f-8a17c129a2fb-kube-api-access-ngx6m\") pod \"kube-proxy-ltj85\" (UID: \"0fddf578-bb37-41c9-a34f-8a17c129a2fb\") " pod="kube-system/kube-proxy-ltj85" Sep 9 00:38:30.612703 systemd[1]: Created slice kubepods-besteffort-pod0fa251bd_09ae_4843_9a4d_b923abc473b9.slice - libcontainer container kubepods-besteffort-pod0fa251bd_09ae_4843_9a4d_b923abc473b9.slice. Sep 9 00:38:30.648163 kubelet[2733]: I0909 00:38:30.648109 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4mv\" (UniqueName: \"kubernetes.io/projected/0fa251bd-09ae-4843-9a4d-b923abc473b9-kube-api-access-lx4mv\") pod \"tigera-operator-755d956888-9b7ph\" (UID: \"0fa251bd-09ae-4843-9a4d-b923abc473b9\") " pod="tigera-operator/tigera-operator-755d956888-9b7ph" Sep 9 00:38:30.648163 kubelet[2733]: I0909 00:38:30.648188 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0fa251bd-09ae-4843-9a4d-b923abc473b9-var-lib-calico\") pod \"tigera-operator-755d956888-9b7ph\" (UID: \"0fa251bd-09ae-4843-9a4d-b923abc473b9\") " pod="tigera-operator/tigera-operator-755d956888-9b7ph" Sep 9 00:38:30.769160 kubelet[2733]: E0909 00:38:30.768907 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:30.769912 containerd[1580]: time="2025-09-09T00:38:30.769737223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ltj85,Uid:0fddf578-bb37-41c9-a34f-8a17c129a2fb,Namespace:kube-system,Attempt:0,}" Sep 9 00:38:30.855284 containerd[1580]: time="2025-09-09T00:38:30.855232471Z" level=info msg="connecting to shim eb04265a83ffab61ba05d10cd7c779681ebcfa2e76b721b9bc5f58e66128ace6" address="unix:///run/containerd/s/c54bf0ff467fb83f3e1124405c2bcb7b4f07df988d55d20f2c0e42c5dafc927f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:38:30.882922 systemd[1]: Started cri-containerd-eb04265a83ffab61ba05d10cd7c779681ebcfa2e76b721b9bc5f58e66128ace6.scope - libcontainer container 
eb04265a83ffab61ba05d10cd7c779681ebcfa2e76b721b9bc5f58e66128ace6. Sep 9 00:38:30.910051 containerd[1580]: time="2025-09-09T00:38:30.909997589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ltj85,Uid:0fddf578-bb37-41c9-a34f-8a17c129a2fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb04265a83ffab61ba05d10cd7c779681ebcfa2e76b721b9bc5f58e66128ace6\"" Sep 9 00:38:30.910786 kubelet[2733]: E0909 00:38:30.910740 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:30.916098 containerd[1580]: time="2025-09-09T00:38:30.916051996Z" level=info msg="CreateContainer within sandbox \"eb04265a83ffab61ba05d10cd7c779681ebcfa2e76b721b9bc5f58e66128ace6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:38:30.916215 containerd[1580]: time="2025-09-09T00:38:30.916183585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-9b7ph,Uid:0fa251bd-09ae-4843-9a4d-b923abc473b9,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:38:30.932109 containerd[1580]: time="2025-09-09T00:38:30.932069852Z" level=info msg="Container 067db0e538c84a7e83cacd02a4eac0c27178ba27ffdd1e0d7465f5c3bf7be319: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:38:30.945618 containerd[1580]: time="2025-09-09T00:38:30.945538733Z" level=info msg="CreateContainer within sandbox \"eb04265a83ffab61ba05d10cd7c779681ebcfa2e76b721b9bc5f58e66128ace6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"067db0e538c84a7e83cacd02a4eac0c27178ba27ffdd1e0d7465f5c3bf7be319\"" Sep 9 00:38:30.946521 containerd[1580]: time="2025-09-09T00:38:30.946190048Z" level=info msg="connecting to shim 919597d2d1da750a4cfdc5082b2a395c6905b8243940645a74d00b09a970f8c9" address="unix:///run/containerd/s/41d773784eab5419f65fcfceb233d445a9b5879c5414694f8cb0a48fadf15fdb" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:38:30.947243 containerd[1580]: time="2025-09-09T00:38:30.946486447Z" level=info msg="StartContainer for \"067db0e538c84a7e83cacd02a4eac0c27178ba27ffdd1e0d7465f5c3bf7be319\"" Sep 9 00:38:30.951297 containerd[1580]: time="2025-09-09T00:38:30.951263572Z" level=info msg="connecting to shim 067db0e538c84a7e83cacd02a4eac0c27178ba27ffdd1e0d7465f5c3bf7be319" address="unix:///run/containerd/s/c54bf0ff467fb83f3e1124405c2bcb7b4f07df988d55d20f2c0e42c5dafc927f" protocol=ttrpc version=3 Sep 9 00:38:30.974904 systemd[1]: Started cri-containerd-067db0e538c84a7e83cacd02a4eac0c27178ba27ffdd1e0d7465f5c3bf7be319.scope - libcontainer container 067db0e538c84a7e83cacd02a4eac0c27178ba27ffdd1e0d7465f5c3bf7be319. Sep 9 00:38:30.976273 systemd[1]: Started cri-containerd-919597d2d1da750a4cfdc5082b2a395c6905b8243940645a74d00b09a970f8c9.scope - libcontainer container 919597d2d1da750a4cfdc5082b2a395c6905b8243940645a74d00b09a970f8c9. 
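Note: each "connecting to shim" entry records the ttrpc endpoint of a sandbox's shim process. The kube-proxy container (067db0e538c8…) reuses the same socket (s/c54bf0ff467f…) as its sandbox (eb04265a83ff…), because one shim serves every container in a pod, while the tigera-operator sandbox gets its own socket (s/41d773784eab…). A minimal sketch of dialing such an endpoint, assuming the github.com/containerd/ttrpc client package; illustrative only, not containerd's internal wiring:

    // Sketch: dial the shim socket from the log entry above and wrap it in a
    // ttrpc client (containerd's lightweight gRPC variant, "version=3" above).
    package main

    import (
        "log"
        "net"

        "github.com/containerd/ttrpc"
    )

    func main() {
        conn, err := net.Dial("unix",
            "/run/containerd/s/c54bf0ff467fb83f3e1124405c2bcb7b4f07df988d55d20f2c0e42c5dafc927f")
        if err != nil {
            log.Fatal(err)
        }
        client := ttrpc.NewClient(conn)
        defer client.Close()
        // Actual task-service calls would go through generated ttrpc stubs
        // bound to this client; none are made in this sketch.
    }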
Sep 9 00:38:31.026877 containerd[1580]: time="2025-09-09T00:38:31.026722497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-9b7ph,Uid:0fa251bd-09ae-4843-9a4d-b923abc473b9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"919597d2d1da750a4cfdc5082b2a395c6905b8243940645a74d00b09a970f8c9\"" Sep 9 00:38:31.028279 containerd[1580]: time="2025-09-09T00:38:31.028244303Z" level=info msg="StartContainer for \"067db0e538c84a7e83cacd02a4eac0c27178ba27ffdd1e0d7465f5c3bf7be319\" returns successfully" Sep 9 00:38:31.029119 containerd[1580]: time="2025-09-09T00:38:31.029090944Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:38:31.955147 kubelet[2733]: E0909 00:38:31.955109 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:31.964701 kubelet[2733]: I0909 00:38:31.964637 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ltj85" podStartSLOduration=1.964619702 podStartE2EDuration="1.964619702s" podCreationTimestamp="2025-09-09 00:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:38:31.964380866 +0000 UTC m=+8.142269200" watchObservedRunningTime="2025-09-09 00:38:31.964619702 +0000 UTC m=+8.142508036" Sep 9 00:38:32.441046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872484380.mount: Deactivated successfully. Sep 9 00:38:32.957956 kubelet[2733]: E0909 00:38:32.957917 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:33.307932 containerd[1580]: time="2025-09-09T00:38:33.307844237Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:33.308697 containerd[1580]: time="2025-09-09T00:38:33.308627403Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 00:38:33.310138 containerd[1580]: time="2025-09-09T00:38:33.310093503Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:33.312319 containerd[1580]: time="2025-09-09T00:38:33.312261912Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:33.312830 containerd[1580]: time="2025-09-09T00:38:33.312802904Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.283685319s" Sep 9 00:38:33.312893 containerd[1580]: time="2025-09-09T00:38:33.312834033Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 00:38:33.318219 containerd[1580]: time="2025-09-09T00:38:33.318170726Z" level=info msg="CreateContainer within sandbox 
\"919597d2d1da750a4cfdc5082b2a395c6905b8243940645a74d00b09a970f8c9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:38:33.326351 containerd[1580]: time="2025-09-09T00:38:33.326287751Z" level=info msg="Container 07c536be54d2c342375aeb9c66566f748d341d56ad44f827caf147e5bc6d4254: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:38:33.330087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2041137961.mount: Deactivated successfully. Sep 9 00:38:33.335280 containerd[1580]: time="2025-09-09T00:38:33.335235206Z" level=info msg="CreateContainer within sandbox \"919597d2d1da750a4cfdc5082b2a395c6905b8243940645a74d00b09a970f8c9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"07c536be54d2c342375aeb9c66566f748d341d56ad44f827caf147e5bc6d4254\"" Sep 9 00:38:33.335893 containerd[1580]: time="2025-09-09T00:38:33.335838767Z" level=info msg="StartContainer for \"07c536be54d2c342375aeb9c66566f748d341d56ad44f827caf147e5bc6d4254\"" Sep 9 00:38:33.336937 containerd[1580]: time="2025-09-09T00:38:33.336896234Z" level=info msg="connecting to shim 07c536be54d2c342375aeb9c66566f748d341d56ad44f827caf147e5bc6d4254" address="unix:///run/containerd/s/41d773784eab5419f65fcfceb233d445a9b5879c5414694f8cb0a48fadf15fdb" protocol=ttrpc version=3 Sep 9 00:38:33.395970 systemd[1]: Started cri-containerd-07c536be54d2c342375aeb9c66566f748d341d56ad44f827caf147e5bc6d4254.scope - libcontainer container 07c536be54d2c342375aeb9c66566f748d341d56ad44f827caf147e5bc6d4254. Sep 9 00:38:33.478968 containerd[1580]: time="2025-09-09T00:38:33.478909044Z" level=info msg="StartContainer for \"07c536be54d2c342375aeb9c66566f748d341d56ad44f827caf147e5bc6d4254\" returns successfully" Sep 9 00:38:33.822980 kubelet[2733]: E0909 00:38:33.822949 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:33.961037 kubelet[2733]: E0909 00:38:33.960580 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:33.978453 kubelet[2733]: I0909 00:38:33.978320 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-9b7ph" podStartSLOduration=1.6930813900000001 podStartE2EDuration="3.978295018s" podCreationTimestamp="2025-09-09 00:38:30 +0000 UTC" firstStartedPulling="2025-09-09 00:38:31.02859401 +0000 UTC m=+7.206482344" lastFinishedPulling="2025-09-09 00:38:33.313807638 +0000 UTC m=+9.491695972" observedRunningTime="2025-09-09 00:38:33.969547348 +0000 UTC m=+10.147435672" watchObservedRunningTime="2025-09-09 00:38:33.978295018 +0000 UTC m=+10.156183352" Sep 9 00:38:34.961754 kubelet[2733]: E0909 00:38:34.961708 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:38.305237 kubelet[2733]: E0909 00:38:38.305184 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:38.496795 kubelet[2733]: E0909 00:38:38.496604 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 
00:38:38.543468 sudo[1781]: pam_unix(sudo:session): session closed for user root Sep 9 00:38:38.546781 sshd[1780]: Connection closed by 10.0.0.1 port 38194 Sep 9 00:38:38.547065 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:38.553171 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:38194.service: Deactivated successfully. Sep 9 00:38:38.558662 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:38:38.558917 systemd[1]: session-7.scope: Consumed 5.844s CPU time, 231.9M memory peak. Sep 9 00:38:38.573323 systemd-logind[1516]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:38:38.576832 systemd-logind[1516]: Removed session 7. Sep 9 00:38:44.288676 systemd[1]: Created slice kubepods-besteffort-pod9392b460_2670_47b7_826b_089b4c09605f.slice - libcontainer container kubepods-besteffort-pod9392b460_2670_47b7_826b_089b4c09605f.slice. Sep 9 00:38:44.332134 kubelet[2733]: I0909 00:38:44.331993 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9392b460-2670-47b7-826b-089b4c09605f-tigera-ca-bundle\") pod \"calico-typha-58c46d4c6-8xzg8\" (UID: \"9392b460-2670-47b7-826b-089b4c09605f\") " pod="calico-system/calico-typha-58c46d4c6-8xzg8" Sep 9 00:38:44.333001 kubelet[2733]: I0909 00:38:44.332438 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9392b460-2670-47b7-826b-089b4c09605f-typha-certs\") pod \"calico-typha-58c46d4c6-8xzg8\" (UID: \"9392b460-2670-47b7-826b-089b4c09605f\") " pod="calico-system/calico-typha-58c46d4c6-8xzg8" Sep 9 00:38:44.333431 kubelet[2733]: I0909 00:38:44.332466 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vx7\" (UniqueName: \"kubernetes.io/projected/9392b460-2670-47b7-826b-089b4c09605f-kube-api-access-r4vx7\") pod \"calico-typha-58c46d4c6-8xzg8\" (UID: \"9392b460-2670-47b7-826b-089b4c09605f\") " pod="calico-system/calico-typha-58c46d4c6-8xzg8" Sep 9 00:38:44.357540 systemd[1]: Created slice kubepods-besteffort-podf87ba660_f275_46ae_82e1_f1967aba8d14.slice - libcontainer container kubepods-besteffort-podf87ba660_f275_46ae_82e1_f1967aba8d14.slice. 
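Note: the pod_startup_latency_tracker figures are internally consistent: the SLO duration is the end-to-end duration minus the image-pull window, which the tracker apparently excludes since pull time depends on registry and image size rather than on the kubelet. Checking the tigera-operator-755d956888-9b7ph entry above:

    podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                        = 00:38:33.978295018 - 00:38:30.000000000 = 3.978295018s
    pull window         = lastFinishedPulling - firstStartedPulling
                        = 00:38:33.313807638 - 00:38:31.028594010 = 2.285213628s
    podStartSLOduration = 3.978295018 - 2.285213628 = 1.693081390s

For the static control-plane pods and kube-proxy earlier, both pull timestamps are the zero value 0001-01-01 00:00:00, so no window is subtracted and the SLO and E2E durations coincide.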
Sep 9 00:38:44.433686 kubelet[2733]: I0909 00:38:44.433592 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f87ba660-f275-46ae-82e1-f1967aba8d14-flexvol-driver-host\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.433686 kubelet[2733]: I0909 00:38:44.433639 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f87ba660-f275-46ae-82e1-f1967aba8d14-tigera-ca-bundle\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.433686 kubelet[2733]: I0909 00:38:44.433659 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f87ba660-f275-46ae-82e1-f1967aba8d14-var-run-calico\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.433686 kubelet[2733]: I0909 00:38:44.433675 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f87ba660-f275-46ae-82e1-f1967aba8d14-node-certs\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.433686 kubelet[2733]: I0909 00:38:44.433700 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f87ba660-f275-46ae-82e1-f1967aba8d14-cni-net-dir\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.434173 kubelet[2733]: I0909 00:38:44.433714 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f87ba660-f275-46ae-82e1-f1967aba8d14-xtables-lock\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.434173 kubelet[2733]: I0909 00:38:44.433732 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f87ba660-f275-46ae-82e1-f1967aba8d14-policysync\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.434173 kubelet[2733]: I0909 00:38:44.433751 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf7n8\" (UniqueName: \"kubernetes.io/projected/f87ba660-f275-46ae-82e1-f1967aba8d14-kube-api-access-cf7n8\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.434657 kubelet[2733]: I0909 00:38:44.434609 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f87ba660-f275-46ae-82e1-f1967aba8d14-cni-log-dir\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.435299 kubelet[2733]: I0909 00:38:44.435249 2733 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f87ba660-f275-46ae-82e1-f1967aba8d14-lib-modules\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.435299 kubelet[2733]: I0909 00:38:44.435301 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f87ba660-f275-46ae-82e1-f1967aba8d14-var-lib-calico\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.435414 kubelet[2733]: I0909 00:38:44.435325 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f87ba660-f275-46ae-82e1-f1967aba8d14-cni-bin-dir\") pod \"calico-node-lgwcv\" (UID: \"f87ba660-f275-46ae-82e1-f1967aba8d14\") " pod="calico-system/calico-node-lgwcv" Sep 9 00:38:44.471594 kubelet[2733]: E0909 00:38:44.471507 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5g5jz" podUID="8af07ab4-1ceb-404b-af22-0045bd45398e" Sep 9 00:38:44.535606 kubelet[2733]: I0909 00:38:44.535519 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8af07ab4-1ceb-404b-af22-0045bd45398e-varrun\") pod \"csi-node-driver-5g5jz\" (UID: \"8af07ab4-1ceb-404b-af22-0045bd45398e\") " pod="calico-system/csi-node-driver-5g5jz" Sep 9 00:38:44.535606 kubelet[2733]: I0909 00:38:44.535589 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8af07ab4-1ceb-404b-af22-0045bd45398e-registration-dir\") pod \"csi-node-driver-5g5jz\" (UID: \"8af07ab4-1ceb-404b-af22-0045bd45398e\") " pod="calico-system/csi-node-driver-5g5jz" Sep 9 00:38:44.535829 kubelet[2733]: I0909 00:38:44.535662 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8af07ab4-1ceb-404b-af22-0045bd45398e-kubelet-dir\") pod \"csi-node-driver-5g5jz\" (UID: \"8af07ab4-1ceb-404b-af22-0045bd45398e\") " pod="calico-system/csi-node-driver-5g5jz" Sep 9 00:38:44.535829 kubelet[2733]: I0909 00:38:44.535676 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8af07ab4-1ceb-404b-af22-0045bd45398e-socket-dir\") pod \"csi-node-driver-5g5jz\" (UID: \"8af07ab4-1ceb-404b-af22-0045bd45398e\") " pod="calico-system/csi-node-driver-5g5jz" Sep 9 00:38:44.535829 kubelet[2733]: I0909 00:38:44.535701 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2prj\" (UniqueName: \"kubernetes.io/projected/8af07ab4-1ceb-404b-af22-0045bd45398e-kube-api-access-s2prj\") pod \"csi-node-driver-5g5jz\" (UID: \"8af07ab4-1ceb-404b-af22-0045bd45398e\") " pod="calico-system/csi-node-driver-5g5jz" Sep 9 00:38:44.540343 kubelet[2733]: E0909 00:38:44.540144 2733 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Sep 9 00:38:44.540343 kubelet[2733]: W0909 00:38:44.540163 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:44.541595 kubelet[2733]: E0909 00:38:44.541547 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:38:44.547754 kubelet[2733]: E0909 00:38:44.547695 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:38:44.547754 kubelet[2733]: W0909 00:38:44.547720 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:44.547754 kubelet[2733]: E0909 00:38:44.547740 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:38:44.552878 kubelet[2733]: E0909 00:38:44.552847 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:38:44.552878 kubelet[2733]: W0909 00:38:44.552870 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:44.552975 kubelet[2733]: E0909 00:38:44.552894 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:38:44.593596 kubelet[2733]: E0909 00:38:44.593555 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:44.594193 containerd[1580]: time="2025-09-09T00:38:44.594148251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c46d4c6-8xzg8,Uid:9392b460-2670-47b7-826b-089b4c09605f,Namespace:calico-system,Attempt:0,}" Sep 9 00:38:44.634211 containerd[1580]: time="2025-09-09T00:38:44.634123243Z" level=info msg="connecting to shim b966c256d00d993b9cd5d5da473655b2600edd6d1c3f4795bd9bc20c1624ed37" address="unix:///run/containerd/s/b24b5f544212d4a80a3cbc240011db964f04617c44b2c2afc02c0c4bdb21c3cc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:38:44.636953 kubelet[2733]: E0909 00:38:44.636921 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:38:44.636953 kubelet[2733]: W0909 00:38:44.636947 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:44.637106 kubelet[2733]: E0909 00:38:44.636974 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:38:44.662371 containerd[1580]: time="2025-09-09T00:38:44.662296806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lgwcv,Uid:f87ba660-f275-46ae-82e1-f1967aba8d14,Namespace:calico-system,Attempt:0,}" Sep 9 00:38:44.667099 systemd[1]: Started cri-containerd-b966c256d00d993b9cd5d5da473655b2600edd6d1c3f4795bd9bc20c1624ed37.scope - libcontainer container b966c256d00d993b9cd5d5da473655b2600edd6d1c3f4795bd9bc20c1624ed37. Sep 9 00:38:44.687791 containerd[1580]: time="2025-09-09T00:38:44.687286212Z" level=info msg="connecting to shim d22eb9021d83cd933f4bdf98684af1fb526cc8dc6b8f5e6de136391c75bb9bc4" address="unix:///run/containerd/s/b61d477419b044fa94563b46377c133b294b861b32ff7cc04f44859c51f702fd" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:38:44.716057 systemd[1]: Started cri-containerd-d22eb9021d83cd933f4bdf98684af1fb526cc8dc6b8f5e6de136391c75bb9bc4.scope - libcontainer container d22eb9021d83cd933f4bdf98684af1fb526cc8dc6b8f5e6de136391c75bb9bc4. Sep 9 00:38:44.725249 containerd[1580]: time="2025-09-09T00:38:44.725151426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c46d4c6-8xzg8,Uid:9392b460-2670-47b7-826b-089b4c09605f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b966c256d00d993b9cd5d5da473655b2600edd6d1c3f4795bd9bc20c1624ed37\"" Sep 9 00:38:44.726174 kubelet[2733]: E0909 00:38:44.726139 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:44.727575 containerd[1580]: time="2025-09-09T00:38:44.727537262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:38:44.748366 containerd[1580]: time="2025-09-09T00:38:44.748324231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lgwcv,Uid:f87ba660-f275-46ae-82e1-f1967aba8d14,Namespace:calico-system,Attempt:0,} returns sandbox id \"d22eb9021d83cd933f4bdf98684af1fb526cc8dc6b8f5e6de136391c75bb9bc4\"" Sep 9 00:38:45.924814 kubelet[2733]: E0909 00:38:45.924669 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5g5jz" podUID="8af07ab4-1ceb-404b-af22-0045bd45398e" Sep 9 00:38:47.160177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3333676722.mount: Deactivated successfully. 
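Note: the recurring dns.go:153 errors (again just above) mean the node's resolv.conf lists more nameservers than kubelet will propagate into pod resolv.conf files; the applied line keeps exactly three (1.1.1.1 1.0.0.1 8.8.8.8), consistent with the classic three-nameserver resolver limit kubelet enforces. A minimal sketch of that truncation, assuming a cap of 3; the fourth server below is a placeholder, since the omitted entries never appear in the log:

    // Hypothetical sketch of the truncation behind the dns.go:153 warnings,
    // assuming kubelet's cap of three nameservers (the glibc MAXNS limit);
    // not the actual kubelet implementation.
    package main

    import "fmt"

    const maxNameservers = 3 // assumed cap, matching the three servers kept above

    func capNameservers(servers []string) []string {
        if len(servers) <= maxNameservers {
            return servers
        }
        fmt.Printf("Nameserver limits exceeded, omitting %d of %d\n",
            len(servers)-maxNameservers, len(servers))
        return servers[:maxNameservers]
    }

    func main() {
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        fmt.Println(capNameservers(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }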
Sep 9 00:38:47.497705 containerd[1580]: time="2025-09-09T00:38:47.497575293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:47.498699 containerd[1580]: time="2025-09-09T00:38:47.498659375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 00:38:47.499881 containerd[1580]: time="2025-09-09T00:38:47.499827286Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:47.501598 containerd[1580]: time="2025-09-09T00:38:47.501565381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:47.502120 containerd[1580]: time="2025-09-09T00:38:47.502087765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.774512927s" Sep 9 00:38:47.502165 containerd[1580]: time="2025-09-09T00:38:47.502120276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 00:38:47.503019 containerd[1580]: time="2025-09-09T00:38:47.502994303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:38:47.517569 containerd[1580]: time="2025-09-09T00:38:47.517521784Z" level=info msg="CreateContainer within sandbox \"b966c256d00d993b9cd5d5da473655b2600edd6d1c3f4795bd9bc20c1624ed37\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:38:47.527976 containerd[1580]: time="2025-09-09T00:38:47.527936174Z" level=info msg="Container 6bf63ed0f802216d8625818cce1e22708aabc37d30eaab519d2e88d63b4a5c60: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:38:47.535396 containerd[1580]: time="2025-09-09T00:38:47.535368187Z" level=info msg="CreateContainer within sandbox \"b966c256d00d993b9cd5d5da473655b2600edd6d1c3f4795bd9bc20c1624ed37\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6bf63ed0f802216d8625818cce1e22708aabc37d30eaab519d2e88d63b4a5c60\"" Sep 9 00:38:47.535929 containerd[1580]: time="2025-09-09T00:38:47.535902864Z" level=info msg="StartContainer for \"6bf63ed0f802216d8625818cce1e22708aabc37d30eaab519d2e88d63b4a5c60\"" Sep 9 00:38:47.537073 containerd[1580]: time="2025-09-09T00:38:47.537036951Z" level=info msg="connecting to shim 6bf63ed0f802216d8625818cce1e22708aabc37d30eaab519d2e88d63b4a5c60" address="unix:///run/containerd/s/b24b5f544212d4a80a3cbc240011db964f04617c44b2c2afc02c0c4bdb21c3cc" protocol=ttrpc version=3 Sep 9 00:38:47.559993 systemd[1]: Started cri-containerd-6bf63ed0f802216d8625818cce1e22708aabc37d30eaab519d2e88d63b4a5c60.scope - libcontainer container 6bf63ed0f802216d8625818cce1e22708aabc37d30eaab519d2e88d63b4a5c60. 
Sep 9 00:38:47.615971 containerd[1580]: time="2025-09-09T00:38:47.615932133Z" level=info msg="StartContainer for \"6bf63ed0f802216d8625818cce1e22708aabc37d30eaab519d2e88d63b4a5c60\" returns successfully" Sep 9 00:38:47.924553 kubelet[2733]: E0909 00:38:47.924489 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5g5jz" podUID="8af07ab4-1ceb-404b-af22-0045bd45398e" Sep 9 00:38:47.988798 kubelet[2733]: E0909 00:38:47.988032 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:47.997268 kubelet[2733]: I0909 00:38:47.997209 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58c46d4c6-8xzg8" podStartSLOduration=1.221451087 podStartE2EDuration="3.997193918s" podCreationTimestamp="2025-09-09 00:38:44 +0000 UTC" firstStartedPulling="2025-09-09 00:38:44.727163325 +0000 UTC m=+20.905051659" lastFinishedPulling="2025-09-09 00:38:47.502906166 +0000 UTC m=+23.680794490" observedRunningTime="2025-09-09 00:38:47.996974396 +0000 UTC m=+24.174862720" watchObservedRunningTime="2025-09-09 00:38:47.997193918 +0000 UTC m=+24.175082252" Sep 9 00:38:48.044183 kubelet[2733]: E0909 00:38:48.044133 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:38:48.044183 kubelet[2733]: W0909 00:38:48.044162 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:48.044183 kubelet[2733]: E0909 00:38:48.044185 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:38:48.044366 kubelet[2733]: E0909 00:38:48.044349 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:38:48.044366 kubelet[2733]: W0909 00:38:48.044357 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:48.044366 kubelet[2733]: E0909 00:38:48.044364 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:38:48.044538 kubelet[2733]: E0909 00:38:48.044511 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:38:48.044538 kubelet[2733]: W0909 00:38:48.044522 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:48.044538 kubelet[2733]: E0909 00:38:48.044529 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Sep 9 00:38:48.066185 kubelet[2733]: E0909 00:38:48.066166 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:38:48.066185 kubelet[2733]: W0909 00:38:48.066180 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:48.066258 kubelet[2733]: E0909 00:38:48.066191 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:38:48.066409 kubelet[2733]: E0909 00:38:48.066396 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:38:48.066409 kubelet[2733]: W0909 00:38:48.066406 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:48.066463 kubelet[2733]: E0909 00:38:48.066413 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:38:48.066613 kubelet[2733]: E0909 00:38:48.066597 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:38:48.066613 kubelet[2733]: W0909 00:38:48.066609 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:48.066670 kubelet[2733]: E0909 00:38:48.066620 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:38:48.066805 kubelet[2733]: E0909 00:38:48.066790 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:38:48.066805 kubelet[2733]: W0909 00:38:48.066800 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:48.066866 kubelet[2733]: E0909 00:38:48.066808 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:38:48.067013 kubelet[2733]: E0909 00:38:48.066998 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:38:48.067013 kubelet[2733]: W0909 00:38:48.067008 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:38:48.067066 kubelet[2733]: E0909 00:38:48.067015 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:38:48.892476 containerd[1580]: time="2025-09-09T00:38:48.892404946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:48.893157 containerd[1580]: time="2025-09-09T00:38:48.893124411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 00:38:48.894277 containerd[1580]: time="2025-09-09T00:38:48.894228352Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:48.896850 containerd[1580]: time="2025-09-09T00:38:48.896816187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:48.897561 containerd[1580]: time="2025-09-09T00:38:48.897521085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.394494821s" Sep 9 00:38:48.897561 containerd[1580]: time="2025-09-09T00:38:48.897556431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 00:38:48.902221 containerd[1580]: time="2025-09-09T00:38:48.902169201Z" level=info msg="CreateContainer within sandbox \"d22eb9021d83cd933f4bdf98684af1fb526cc8dc6b8f5e6de136391c75bb9bc4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:38:48.913787 containerd[1580]: time="2025-09-09T00:38:48.913472394Z" level=info msg="Container 7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:38:48.923658 containerd[1580]: time="2025-09-09T00:38:48.923612034Z" level=info msg="CreateContainer within sandbox \"d22eb9021d83cd933f4bdf98684af1fb526cc8dc6b8f5e6de136391c75bb9bc4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1\"" Sep 9 00:38:48.925796 containerd[1580]: time="2025-09-09T00:38:48.924115644Z" level=info msg="StartContainer for \"7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1\"" Sep 9 00:38:48.926510 containerd[1580]: time="2025-09-09T00:38:48.926486630Z" level=info msg="connecting to shim 7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1" address="unix:///run/containerd/s/b61d477419b044fa94563b46377c133b294b861b32ff7cc04f44859c51f702fd" protocol=ttrpc version=3 Sep 9 00:38:48.950983 systemd[1]: Started cri-containerd-7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1.scope - libcontainer container 7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1. 
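
For context on the repeated driver-call failures above: the kubelet probes every directory under the FlexVolume plugin dir (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/) by exec'ing the driver binary with the single argument init and unmarshalling the JSON it prints to stdout. Since the nodeagent~uds/uds executable is absent, stdout is empty, which is exactly the "unexpected end of JSON input" error. A minimal sketch of a driver that would satisfy the probe — the handshake JSON follows the FlexVolume spec; everything else here is illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// Minimal FlexVolume driver stub. The kubelet runs "<driver> init"
// when it probes the plugin directory and expects a JSON status
// object on stdout; empty output is what produces the
// "unexpected end of JSON input" errors in the log above.
func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success; "attach": false tells the kubelet this
		// driver needs no attach/detach support.
		fmt.Println(`{"status":"Success","capabilities":{"attach":false}}`)
		return
	}
	// Unimplemented calls should say so explicitly rather than stay silent.
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}
```
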
Sep 9 00:38:48.992493 kubelet[2733]: I0909 00:38:48.992459 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:38:48.992984 kubelet[2733]: E0909 00:38:48.992963 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:49.004136 containerd[1580]: time="2025-09-09T00:38:49.004076784Z" level=info msg="StartContainer for \"7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1\" returns successfully" Sep 9 00:38:49.007518 systemd[1]: cri-containerd-7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1.scope: Deactivated successfully. Sep 9 00:38:49.010813 containerd[1580]: time="2025-09-09T00:38:49.010735106Z" level=info msg="received exit event container_id:\"7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1\" id:\"7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1\" pid:3399 exited_at:{seconds:1757378329 nanos:10331436}" Sep 9 00:38:49.010950 containerd[1580]: time="2025-09-09T00:38:49.010926737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1\" id:\"7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1\" pid:3399 exited_at:{seconds:1757378329 nanos:10331436}" Sep 9 00:38:49.037910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c689bc8e247b858d0a7c7528a479dbed03864360304f8b8c93eb297740deed1-rootfs.mount: Deactivated successfully. Sep 9 00:38:49.924711 kubelet[2733]: E0909 00:38:49.924634 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5g5jz" podUID="8af07ab4-1ceb-404b-af22-0045bd45398e" Sep 9 00:38:49.997267 containerd[1580]: time="2025-09-09T00:38:49.997215678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:38:51.924596 kubelet[2733]: E0909 00:38:51.924525 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5g5jz" podUID="8af07ab4-1ceb-404b-af22-0045bd45398e" Sep 9 00:38:53.925577 kubelet[2733]: E0909 00:38:53.925505 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5g5jz" podUID="8af07ab4-1ceb-404b-af22-0045bd45398e" Sep 9 00:38:54.350513 containerd[1580]: time="2025-09-09T00:38:54.350457724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:54.352148 containerd[1580]: time="2025-09-09T00:38:54.352106247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 00:38:54.353216 containerd[1580]: time="2025-09-09T00:38:54.353179507Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:54.355172 
containerd[1580]: time="2025-09-09T00:38:54.355131231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:54.355783 containerd[1580]: time="2025-09-09T00:38:54.355733285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.358475748s" Sep 9 00:38:54.355783 containerd[1580]: time="2025-09-09T00:38:54.355772860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 00:38:54.360995 containerd[1580]: time="2025-09-09T00:38:54.360945317Z" level=info msg="CreateContainer within sandbox \"d22eb9021d83cd933f4bdf98684af1fb526cc8dc6b8f5e6de136391c75bb9bc4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:38:54.372333 containerd[1580]: time="2025-09-09T00:38:54.372286957Z" level=info msg="Container 0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:38:54.385438 containerd[1580]: time="2025-09-09T00:38:54.385378372Z" level=info msg="CreateContainer within sandbox \"d22eb9021d83cd933f4bdf98684af1fb526cc8dc6b8f5e6de136391c75bb9bc4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9\"" Sep 9 00:38:54.385961 containerd[1580]: time="2025-09-09T00:38:54.385920783Z" level=info msg="StartContainer for \"0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9\"" Sep 9 00:38:54.387365 containerd[1580]: time="2025-09-09T00:38:54.387328914Z" level=info msg="connecting to shim 0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9" address="unix:///run/containerd/s/b61d477419b044fa94563b46377c133b294b861b32ff7cc04f44859c51f702fd" protocol=ttrpc version=3 Sep 9 00:38:54.413927 systemd[1]: Started cri-containerd-0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9.scope - libcontainer container 0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9. Sep 9 00:38:54.463237 containerd[1580]: time="2025-09-09T00:38:54.463184715Z" level=info msg="StartContainer for \"0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9\" returns successfully" Sep 9 00:38:55.575877 containerd[1580]: time="2025-09-09T00:38:55.575753147Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:38:55.579263 systemd[1]: cri-containerd-0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9.scope: Deactivated successfully. Sep 9 00:38:55.579626 systemd[1]: cri-containerd-0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9.scope: Consumed 541ms CPU time, 182.4M memory peak, 3.5M read from disk, 171.3M written to disk. 
Sep 9 00:38:55.581073 containerd[1580]: time="2025-09-09T00:38:55.581026223Z" level=info msg="received exit event container_id:\"0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9\" id:\"0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9\" pid:3460 exited_at:{seconds:1757378335 nanos:580801810}" Sep 9 00:38:55.581140 containerd[1580]: time="2025-09-09T00:38:55.581116142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9\" id:\"0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9\" pid:3460 exited_at:{seconds:1757378335 nanos:580801810}" Sep 9 00:38:55.604983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0be1e1798c7ad6d3f0b7d6638a41729b6098b87ad510123fba266e8ae436a7f9-rootfs.mount: Deactivated successfully. Sep 9 00:38:55.672790 kubelet[2733]: I0909 00:38:55.671815 2733 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:38:56.040716 systemd[1]: Created slice kubepods-besteffort-podecaf2959_9a08_4e00_948d_967e50257a25.slice - libcontainer container kubepods-besteffort-podecaf2959_9a08_4e00_948d_967e50257a25.slice. Sep 9 00:38:56.051441 systemd[1]: Created slice kubepods-besteffort-pod8af07ab4_1ceb_404b_af22_0045bd45398e.slice - libcontainer container kubepods-besteffort-pod8af07ab4_1ceb_404b_af22_0045bd45398e.slice. Sep 9 00:38:56.058294 containerd[1580]: time="2025-09-09T00:38:56.057307149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5g5jz,Uid:8af07ab4-1ceb-404b-af22-0045bd45398e,Namespace:calico-system,Attempt:0,}" Sep 9 00:38:56.060441 systemd[1]: Created slice kubepods-besteffort-podf1786461_4df1_4450_84a8_8898a47476b6.slice - libcontainer container kubepods-besteffort-podf1786461_4df1_4450_84a8_8898a47476b6.slice. Sep 9 00:38:56.086330 systemd[1]: Created slice kubepods-besteffort-pode12878cb_06d1_48dd_b8a4_7a5373ea5ca3.slice - libcontainer container kubepods-besteffort-pode12878cb_06d1_48dd_b8a4_7a5373ea5ca3.slice. Sep 9 00:38:56.095921 systemd[1]: Created slice kubepods-besteffort-pod11cfeb17_60a0_4315_be58_d803d500749a.slice - libcontainer container kubepods-besteffort-pod11cfeb17_60a0_4315_be58_d803d500749a.slice. Sep 9 00:38:56.106280 systemd[1]: Created slice kubepods-burstable-podb21b64d9_46e3_4b3a_9c25_2a3c85894cc9.slice - libcontainer container kubepods-burstable-podb21b64d9_46e3_4b3a_9c25_2a3c85894cc9.slice. 
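
The slice unit names in the entries above follow the kubelet's systemd cgroup-driver convention: kubepods, then the pod's QoS class, then the pod UID with dashes mapped to underscores (systemd reserves "-" for slice hierarchy). A small sketch reproducing the names seen in the log; the helper function is hypothetical, introduced only to show the mapping:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors how the systemd cgroup driver derives a slice
// unit from a pod's QoS class and UID: dashes in the UID become
// underscores, as in the kubepods-besteffort-pod....slice units above.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "8af07ab4-1ceb-404b-af22-0045bd45398e"))
	// Output: kubepods-besteffort-pod8af07ab4_1ceb_404b_af22_0045bd45398e.slice
}
```
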
Sep 9 00:38:56.116443 kubelet[2733]: I0909 00:38:56.115422 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e12878cb-06d1-48dd-b8a4-7a5373ea5ca3-tigera-ca-bundle\") pod \"calico-kube-controllers-5c56b7dd9f-z7qmz\" (UID: \"e12878cb-06d1-48dd-b8a4-7a5373ea5ca3\") " pod="calico-system/calico-kube-controllers-5c56b7dd9f-z7qmz" Sep 9 00:38:56.116443 kubelet[2733]: I0909 00:38:56.115474 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b21b64d9-46e3-4b3a-9c25-2a3c85894cc9-config-volume\") pod \"coredns-674b8bbfcf-khqs9\" (UID: \"b21b64d9-46e3-4b3a-9c25-2a3c85894cc9\") " pod="kube-system/coredns-674b8bbfcf-khqs9" Sep 9 00:38:56.116443 kubelet[2733]: I0909 00:38:56.115499 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc114c26-2614-40fd-b788-c9acb40d825d-whisker-backend-key-pair\") pod \"whisker-7bfc6bf866-jhst4\" (UID: \"cc114c26-2614-40fd-b788-c9acb40d825d\") " pod="calico-system/whisker-7bfc6bf866-jhst4" Sep 9 00:38:56.116443 kubelet[2733]: I0909 00:38:56.115519 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc114c26-2614-40fd-b788-c9acb40d825d-whisker-ca-bundle\") pod \"whisker-7bfc6bf866-jhst4\" (UID: \"cc114c26-2614-40fd-b788-c9acb40d825d\") " pod="calico-system/whisker-7bfc6bf866-jhst4" Sep 9 00:38:56.116443 kubelet[2733]: I0909 00:38:56.115556 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/11cfeb17-60a0-4315-be58-d803d500749a-goldmane-key-pair\") pod \"goldmane-54d579b49d-xr2nx\" (UID: \"11cfeb17-60a0-4315-be58-d803d500749a\") " pod="calico-system/goldmane-54d579b49d-xr2nx" Sep 9 00:38:56.116747 kubelet[2733]: I0909 00:38:56.115615 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecaf2959-9a08-4e00-948d-967e50257a25-calico-apiserver-certs\") pod \"calico-apiserver-6597668795-jj7hn\" (UID: \"ecaf2959-9a08-4e00-948d-967e50257a25\") " pod="calico-apiserver/calico-apiserver-6597668795-jj7hn" Sep 9 00:38:56.116747 kubelet[2733]: I0909 00:38:56.115643 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11cfeb17-60a0-4315-be58-d803d500749a-config\") pod \"goldmane-54d579b49d-xr2nx\" (UID: \"11cfeb17-60a0-4315-be58-d803d500749a\") " pod="calico-system/goldmane-54d579b49d-xr2nx" Sep 9 00:38:56.116747 kubelet[2733]: I0909 00:38:56.115664 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11cfeb17-60a0-4315-be58-d803d500749a-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-xr2nx\" (UID: \"11cfeb17-60a0-4315-be58-d803d500749a\") " pod="calico-system/goldmane-54d579b49d-xr2nx" Sep 9 00:38:56.116747 kubelet[2733]: I0909 00:38:56.115689 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7795f\" (UniqueName: 
\"kubernetes.io/projected/cc114c26-2614-40fd-b788-c9acb40d825d-kube-api-access-7795f\") pod \"whisker-7bfc6bf866-jhst4\" (UID: \"cc114c26-2614-40fd-b788-c9acb40d825d\") " pod="calico-system/whisker-7bfc6bf866-jhst4" Sep 9 00:38:56.116747 kubelet[2733]: I0909 00:38:56.115740 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f1786461-4df1-4450-84a8-8898a47476b6-calico-apiserver-certs\") pod \"calico-apiserver-6597668795-hjwdq\" (UID: \"f1786461-4df1-4450-84a8-8898a47476b6\") " pod="calico-apiserver/calico-apiserver-6597668795-hjwdq" Sep 9 00:38:56.116958 kubelet[2733]: I0909 00:38:56.115794 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5xbr\" (UniqueName: \"kubernetes.io/projected/f1786461-4df1-4450-84a8-8898a47476b6-kube-api-access-h5xbr\") pod \"calico-apiserver-6597668795-hjwdq\" (UID: \"f1786461-4df1-4450-84a8-8898a47476b6\") " pod="calico-apiserver/calico-apiserver-6597668795-hjwdq" Sep 9 00:38:56.116958 kubelet[2733]: I0909 00:38:56.115820 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4twvx\" (UniqueName: \"kubernetes.io/projected/e12878cb-06d1-48dd-b8a4-7a5373ea5ca3-kube-api-access-4twvx\") pod \"calico-kube-controllers-5c56b7dd9f-z7qmz\" (UID: \"e12878cb-06d1-48dd-b8a4-7a5373ea5ca3\") " pod="calico-system/calico-kube-controllers-5c56b7dd9f-z7qmz" Sep 9 00:38:56.116958 kubelet[2733]: I0909 00:38:56.115843 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkrmq\" (UniqueName: \"kubernetes.io/projected/11cfeb17-60a0-4315-be58-d803d500749a-kube-api-access-wkrmq\") pod \"goldmane-54d579b49d-xr2nx\" (UID: \"11cfeb17-60a0-4315-be58-d803d500749a\") " pod="calico-system/goldmane-54d579b49d-xr2nx" Sep 9 00:38:56.116958 kubelet[2733]: I0909 00:38:56.115864 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkr4k\" (UniqueName: \"kubernetes.io/projected/b21b64d9-46e3-4b3a-9c25-2a3c85894cc9-kube-api-access-qkr4k\") pod \"coredns-674b8bbfcf-khqs9\" (UID: \"b21b64d9-46e3-4b3a-9c25-2a3c85894cc9\") " pod="kube-system/coredns-674b8bbfcf-khqs9" Sep 9 00:38:56.116958 kubelet[2733]: I0909 00:38:56.115919 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2acf78cf-85e1-4c7c-9660-677990131edf-config-volume\") pod \"coredns-674b8bbfcf-zcdlm\" (UID: \"2acf78cf-85e1-4c7c-9660-677990131edf\") " pod="kube-system/coredns-674b8bbfcf-zcdlm" Sep 9 00:38:56.117235 kubelet[2733]: I0909 00:38:56.115961 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkvdh\" (UniqueName: \"kubernetes.io/projected/2acf78cf-85e1-4c7c-9660-677990131edf-kube-api-access-kkvdh\") pod \"coredns-674b8bbfcf-zcdlm\" (UID: \"2acf78cf-85e1-4c7c-9660-677990131edf\") " pod="kube-system/coredns-674b8bbfcf-zcdlm" Sep 9 00:38:56.117235 kubelet[2733]: I0909 00:38:56.115991 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh2d4\" (UniqueName: \"kubernetes.io/projected/ecaf2959-9a08-4e00-948d-967e50257a25-kube-api-access-kh2d4\") pod \"calico-apiserver-6597668795-jj7hn\" (UID: \"ecaf2959-9a08-4e00-948d-967e50257a25\") " 
pod="calico-apiserver/calico-apiserver-6597668795-jj7hn" Sep 9 00:38:56.117016 systemd[1]: Created slice kubepods-burstable-pod2acf78cf_85e1_4c7c_9660_677990131edf.slice - libcontainer container kubepods-burstable-pod2acf78cf_85e1_4c7c_9660_677990131edf.slice. Sep 9 00:38:56.124376 systemd[1]: Created slice kubepods-besteffort-podcc114c26_2614_40fd_b788_c9acb40d825d.slice - libcontainer container kubepods-besteffort-podcc114c26_2614_40fd_b788_c9acb40d825d.slice. Sep 9 00:38:56.143570 containerd[1580]: time="2025-09-09T00:38:56.143501299Z" level=error msg="Failed to destroy network for sandbox \"8bbb46a5b5694d7b8a3ce0dd23543603efb0191608839c727257550e0c8a7865\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.144976 containerd[1580]: time="2025-09-09T00:38:56.144923406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5g5jz,Uid:8af07ab4-1ceb-404b-af22-0045bd45398e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bbb46a5b5694d7b8a3ce0dd23543603efb0191608839c727257550e0c8a7865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.145297 kubelet[2733]: E0909 00:38:56.145245 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bbb46a5b5694d7b8a3ce0dd23543603efb0191608839c727257550e0c8a7865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.145376 kubelet[2733]: E0909 00:38:56.145344 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bbb46a5b5694d7b8a3ce0dd23543603efb0191608839c727257550e0c8a7865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5g5jz" Sep 9 00:38:56.145402 kubelet[2733]: E0909 00:38:56.145374 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bbb46a5b5694d7b8a3ce0dd23543603efb0191608839c727257550e0c8a7865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5g5jz" Sep 9 00:38:56.145606 kubelet[2733]: E0909 00:38:56.145552 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5g5jz_calico-system(8af07ab4-1ceb-404b-af22-0045bd45398e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5g5jz_calico-system(8af07ab4-1ceb-404b-af22-0045bd45398e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bbb46a5b5694d7b8a3ce0dd23543603efb0191608839c727257550e0c8a7865\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-5g5jz" podUID="8af07ab4-1ceb-404b-af22-0045bd45398e" Sep 9 00:38:56.146417 systemd[1]: run-netns-cni\x2d3f92c0df\x2deff7\x2dd0d3\x2d30b2\x2dd1b66925a3cc.mount: Deactivated successfully. Sep 9 00:38:56.349504 containerd[1580]: time="2025-09-09T00:38:56.349355135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-jj7hn,Uid:ecaf2959-9a08-4e00-948d-967e50257a25,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:38:56.373302 containerd[1580]: time="2025-09-09T00:38:56.373251281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-hjwdq,Uid:f1786461-4df1-4450-84a8-8898a47476b6,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:38:56.392659 containerd[1580]: time="2025-09-09T00:38:56.392344196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c56b7dd9f-z7qmz,Uid:e12878cb-06d1-48dd-b8a4-7a5373ea5ca3,Namespace:calico-system,Attempt:0,}" Sep 9 00:38:56.402218 containerd[1580]: time="2025-09-09T00:38:56.402165882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xr2nx,Uid:11cfeb17-60a0-4315-be58-d803d500749a,Namespace:calico-system,Attempt:0,}" Sep 9 00:38:56.413436 kubelet[2733]: E0909 00:38:56.413382 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:56.414274 containerd[1580]: time="2025-09-09T00:38:56.414230159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-khqs9,Uid:b21b64d9-46e3-4b3a-9c25-2a3c85894cc9,Namespace:kube-system,Attempt:0,}" Sep 9 00:38:56.420840 containerd[1580]: time="2025-09-09T00:38:56.420733729Z" level=error msg="Failed to destroy network for sandbox \"c7b1d5f0d8a420315559983b06fcdc38cea16677dedc7bba3fc7ecdd4fd29e2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.422270 kubelet[2733]: E0909 00:38:56.422232 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:56.425784 containerd[1580]: time="2025-09-09T00:38:56.425614556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zcdlm,Uid:2acf78cf-85e1-4c7c-9660-677990131edf,Namespace:kube-system,Attempt:0,}" Sep 9 00:38:56.429384 containerd[1580]: time="2025-09-09T00:38:56.429348454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bfc6bf866-jhst4,Uid:cc114c26-2614-40fd-b788-c9acb40d825d,Namespace:calico-system,Attempt:0,}" Sep 9 00:38:56.431145 containerd[1580]: time="2025-09-09T00:38:56.430562408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-jj7hn,Uid:ecaf2959-9a08-4e00-948d-967e50257a25,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b1d5f0d8a420315559983b06fcdc38cea16677dedc7bba3fc7ecdd4fd29e2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.431261 kubelet[2733]: E0909 00:38:56.430715 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"c7b1d5f0d8a420315559983b06fcdc38cea16677dedc7bba3fc7ecdd4fd29e2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.431261 kubelet[2733]: E0909 00:38:56.430800 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b1d5f0d8a420315559983b06fcdc38cea16677dedc7bba3fc7ecdd4fd29e2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6597668795-jj7hn" Sep 9 00:38:56.431261 kubelet[2733]: E0909 00:38:56.430822 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b1d5f0d8a420315559983b06fcdc38cea16677dedc7bba3fc7ecdd4fd29e2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6597668795-jj7hn" Sep 9 00:38:56.431355 kubelet[2733]: E0909 00:38:56.430868 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6597668795-jj7hn_calico-apiserver(ecaf2959-9a08-4e00-948d-967e50257a25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6597668795-jj7hn_calico-apiserver(ecaf2959-9a08-4e00-948d-967e50257a25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7b1d5f0d8a420315559983b06fcdc38cea16677dedc7bba3fc7ecdd4fd29e2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6597668795-jj7hn" podUID="ecaf2959-9a08-4e00-948d-967e50257a25" Sep 9 00:38:56.470597 containerd[1580]: time="2025-09-09T00:38:56.470533991Z" level=error msg="Failed to destroy network for sandbox \"396bf963bb395012091f52cc27589159b14fe742d054818b334a9c43313f79e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.472516 containerd[1580]: time="2025-09-09T00:38:56.472354677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-hjwdq,Uid:f1786461-4df1-4450-84a8-8898a47476b6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"396bf963bb395012091f52cc27589159b14fe742d054818b334a9c43313f79e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.475014 kubelet[2733]: E0909 00:38:56.472661 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396bf963bb395012091f52cc27589159b14fe742d054818b334a9c43313f79e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 9 00:38:56.475097 kubelet[2733]: E0909 00:38:56.475051 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396bf963bb395012091f52cc27589159b14fe742d054818b334a9c43313f79e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6597668795-hjwdq" Sep 9 00:38:56.475097 kubelet[2733]: E0909 00:38:56.475083 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396bf963bb395012091f52cc27589159b14fe742d054818b334a9c43313f79e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6597668795-hjwdq" Sep 9 00:38:56.475245 kubelet[2733]: E0909 00:38:56.475147 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6597668795-hjwdq_calico-apiserver(f1786461-4df1-4450-84a8-8898a47476b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6597668795-hjwdq_calico-apiserver(f1786461-4df1-4450-84a8-8898a47476b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"396bf963bb395012091f52cc27589159b14fe742d054818b334a9c43313f79e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6597668795-hjwdq" podUID="f1786461-4df1-4450-84a8-8898a47476b6" Sep 9 00:38:56.488028 containerd[1580]: time="2025-09-09T00:38:56.487980258Z" level=error msg="Failed to destroy network for sandbox \"fe4d29eb9711c6171b4b668242dd68564fbbe0b86ec932efee3dc4502d04d7bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.490431 containerd[1580]: time="2025-09-09T00:38:56.490225513Z" level=error msg="Failed to destroy network for sandbox \"e0c9f26948dd4544736b4b49212a80b2a1594cab64f9944372045a2b7cce02f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.494325 containerd[1580]: time="2025-09-09T00:38:56.494283170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c56b7dd9f-z7qmz,Uid:e12878cb-06d1-48dd-b8a4-7a5373ea5ca3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4d29eb9711c6171b4b668242dd68564fbbe0b86ec932efee3dc4502d04d7bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.495013 kubelet[2733]: E0909 00:38:56.494956 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4d29eb9711c6171b4b668242dd68564fbbe0b86ec932efee3dc4502d04d7bb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.495125 kubelet[2733]: E0909 00:38:56.495037 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4d29eb9711c6171b4b668242dd68564fbbe0b86ec932efee3dc4502d04d7bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c56b7dd9f-z7qmz" Sep 9 00:38:56.495125 kubelet[2733]: E0909 00:38:56.495059 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4d29eb9711c6171b4b668242dd68564fbbe0b86ec932efee3dc4502d04d7bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c56b7dd9f-z7qmz" Sep 9 00:38:56.495187 kubelet[2733]: E0909 00:38:56.495112 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c56b7dd9f-z7qmz_calico-system(e12878cb-06d1-48dd-b8a4-7a5373ea5ca3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c56b7dd9f-z7qmz_calico-system(e12878cb-06d1-48dd-b8a4-7a5373ea5ca3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe4d29eb9711c6171b4b668242dd68564fbbe0b86ec932efee3dc4502d04d7bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c56b7dd9f-z7qmz" podUID="e12878cb-06d1-48dd-b8a4-7a5373ea5ca3" Sep 9 00:38:56.495979 containerd[1580]: time="2025-09-09T00:38:56.495943045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xr2nx,Uid:11cfeb17-60a0-4315-be58-d803d500749a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c9f26948dd4544736b4b49212a80b2a1594cab64f9944372045a2b7cce02f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.496306 kubelet[2733]: E0909 00:38:56.496280 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c9f26948dd4544736b4b49212a80b2a1594cab64f9944372045a2b7cce02f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.496354 kubelet[2733]: E0909 00:38:56.496312 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c9f26948dd4544736b4b49212a80b2a1594cab64f9944372045a2b7cce02f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-xr2nx" Sep 9 00:38:56.496354 kubelet[2733]: 
E0909 00:38:56.496327 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c9f26948dd4544736b4b49212a80b2a1594cab64f9944372045a2b7cce02f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-xr2nx" Sep 9 00:38:56.496418 kubelet[2733]: E0909 00:38:56.496362 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-xr2nx_calico-system(11cfeb17-60a0-4315-be58-d803d500749a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-xr2nx_calico-system(11cfeb17-60a0-4315-be58-d803d500749a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0c9f26948dd4544736b4b49212a80b2a1594cab64f9944372045a2b7cce02f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-xr2nx" podUID="11cfeb17-60a0-4315-be58-d803d500749a" Sep 9 00:38:56.510489 containerd[1580]: time="2025-09-09T00:38:56.510420573Z" level=error msg="Failed to destroy network for sandbox \"499527afc8a357cae197faf69b57cd732f14bfea029b78a7903eb9ca0d4adb69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.511907 containerd[1580]: time="2025-09-09T00:38:56.511834054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zcdlm,Uid:2acf78cf-85e1-4c7c-9660-677990131edf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"499527afc8a357cae197faf69b57cd732f14bfea029b78a7903eb9ca0d4adb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.512210 kubelet[2733]: E0909 00:38:56.512156 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"499527afc8a357cae197faf69b57cd732f14bfea029b78a7903eb9ca0d4adb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.512381 kubelet[2733]: E0909 00:38:56.512225 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"499527afc8a357cae197faf69b57cd732f14bfea029b78a7903eb9ca0d4adb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zcdlm" Sep 9 00:38:56.512381 kubelet[2733]: E0909 00:38:56.512287 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"499527afc8a357cae197faf69b57cd732f14bfea029b78a7903eb9ca0d4adb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zcdlm" Sep 9 00:38:56.512381 kubelet[2733]: E0909 00:38:56.512342 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zcdlm_kube-system(2acf78cf-85e1-4c7c-9660-677990131edf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zcdlm_kube-system(2acf78cf-85e1-4c7c-9660-677990131edf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"499527afc8a357cae197faf69b57cd732f14bfea029b78a7903eb9ca0d4adb69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zcdlm" podUID="2acf78cf-85e1-4c7c-9660-677990131edf" Sep 9 00:38:56.514333 containerd[1580]: time="2025-09-09T00:38:56.514284205Z" level=error msg="Failed to destroy network for sandbox \"a9181fcd611b185d1c88bcd7df53b772f34fbde6de4711d8311d6fda0b46f926\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.515783 containerd[1580]: time="2025-09-09T00:38:56.515677297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bfc6bf866-jhst4,Uid:cc114c26-2614-40fd-b788-c9acb40d825d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9181fcd611b185d1c88bcd7df53b772f34fbde6de4711d8311d6fda0b46f926\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.515903 kubelet[2733]: E0909 00:38:56.515870 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9181fcd611b185d1c88bcd7df53b772f34fbde6de4711d8311d6fda0b46f926\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.515968 kubelet[2733]: E0909 00:38:56.515920 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9181fcd611b185d1c88bcd7df53b772f34fbde6de4711d8311d6fda0b46f926\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bfc6bf866-jhst4" Sep 9 00:38:56.515968 kubelet[2733]: E0909 00:38:56.515955 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9181fcd611b185d1c88bcd7df53b772f34fbde6de4711d8311d6fda0b46f926\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bfc6bf866-jhst4" Sep 9 00:38:56.516026 kubelet[2733]: E0909 00:38:56.515998 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bfc6bf866-jhst4_calico-system(cc114c26-2614-40fd-b788-c9acb40d825d)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"whisker-7bfc6bf866-jhst4_calico-system(cc114c26-2614-40fd-b788-c9acb40d825d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9181fcd611b185d1c88bcd7df53b772f34fbde6de4711d8311d6fda0b46f926\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bfc6bf866-jhst4" podUID="cc114c26-2614-40fd-b788-c9acb40d825d" Sep 9 00:38:56.520887 containerd[1580]: time="2025-09-09T00:38:56.520847588Z" level=error msg="Failed to destroy network for sandbox \"adb5310b5050b12a2ce7c24ca7e1b3bd1cb84d8d0428ffcd3eb71d11d1bcafa6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.522059 containerd[1580]: time="2025-09-09T00:38:56.522027028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-khqs9,Uid:b21b64d9-46e3-4b3a-9c25-2a3c85894cc9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb5310b5050b12a2ce7c24ca7e1b3bd1cb84d8d0428ffcd3eb71d11d1bcafa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.522233 kubelet[2733]: E0909 00:38:56.522200 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb5310b5050b12a2ce7c24ca7e1b3bd1cb84d8d0428ffcd3eb71d11d1bcafa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:38:56.522278 kubelet[2733]: E0909 00:38:56.522252 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb5310b5050b12a2ce7c24ca7e1b3bd1cb84d8d0428ffcd3eb71d11d1bcafa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-khqs9" Sep 9 00:38:56.522316 kubelet[2733]: E0909 00:38:56.522271 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb5310b5050b12a2ce7c24ca7e1b3bd1cb84d8d0428ffcd3eb71d11d1bcafa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-khqs9" Sep 9 00:38:56.522357 kubelet[2733]: E0909 00:38:56.522329 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-khqs9_kube-system(b21b64d9-46e3-4b3a-9c25-2a3c85894cc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-khqs9_kube-system(b21b64d9-46e3-4b3a-9c25-2a3c85894cc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adb5310b5050b12a2ce7c24ca7e1b3bd1cb84d8d0428ffcd3eb71d11d1bcafa6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-khqs9" podUID="b21b64d9-46e3-4b3a-9c25-2a3c85894cc9" Sep 9 00:38:57.016529 containerd[1580]: time="2025-09-09T00:38:57.016472672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:38:58.343805 kubelet[2733]: I0909 00:38:58.343233 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:38:58.343805 kubelet[2733]: E0909 00:38:58.343714 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:59.019310 kubelet[2733]: E0909 00:38:59.019256 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:05.881849 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:34406.service - OpenSSH per-connection server daemon (10.0.0.1:34406). Sep 9 00:39:05.980811 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 34406 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:39:05.982329 sshd-session[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:39:05.988442 systemd-logind[1516]: New session 8 of user core. Sep 9 00:39:05.993911 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:39:06.141524 sshd[3784]: Connection closed by 10.0.0.1 port 34406 Sep 9 00:39:06.142112 sshd-session[3781]: pam_unix(sshd:session): session closed for user core Sep 9 00:39:06.148645 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:34406.service: Deactivated successfully. Sep 9 00:39:06.151650 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:39:06.152981 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:39:06.155490 systemd-logind[1516]: Removed session 8. Sep 9 00:39:06.925014 containerd[1580]: time="2025-09-09T00:39:06.924954412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-jj7hn,Uid:ecaf2959-9a08-4e00-948d-967e50257a25,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:39:06.941074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995239868.mount: Deactivated successfully. 
Sep 9 00:39:07.924798 containerd[1580]: time="2025-09-09T00:39:07.924647047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-hjwdq,Uid:f1786461-4df1-4450-84a8-8898a47476b6,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:39:07.924963 containerd[1580]: time="2025-09-09T00:39:07.924648680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c56b7dd9f-z7qmz,Uid:e12878cb-06d1-48dd-b8a4-7a5373ea5ca3,Namespace:calico-system,Attempt:0,}" Sep 9 00:39:08.736045 containerd[1580]: time="2025-09-09T00:39:08.735998284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 00:39:08.737323 containerd[1580]: time="2025-09-09T00:39:08.737270386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:08.759877 containerd[1580]: time="2025-09-09T00:39:08.759799779Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:08.792157 containerd[1580]: time="2025-09-09T00:39:08.792086688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:08.795688 containerd[1580]: time="2025-09-09T00:39:08.793749354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 11.776761242s" Sep 9 00:39:08.795688 containerd[1580]: time="2025-09-09T00:39:08.793895770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 00:39:08.795943 containerd[1580]: time="2025-09-09T00:39:08.795632977Z" level=error msg="Failed to destroy network for sandbox \"3f5700566ec2ab2559ded5b850ed592ef5f99634fbdfeea7d003105ece2018e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:08.799059 systemd[1]: run-netns-cni\x2d084c1568\x2dc5aa\x2d6751\x2dc274\x2dd885d80b44d6.mount: Deactivated successfully. 
Sep 9 00:39:08.814510 containerd[1580]: time="2025-09-09T00:39:08.809130807Z" level=error msg="Failed to destroy network for sandbox \"8f14003165b85df645d9736826d9cfcbc7b2b80e18a949a8bca33d507d3c5d60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:08.814510 containerd[1580]: time="2025-09-09T00:39:08.813227830Z" level=error msg="Failed to destroy network for sandbox \"9cd858b0ba1a7e59e8c094a7595961ad2aa9efd83c87acc2d2bb2fe79b483ad1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:08.825995 containerd[1580]: time="2025-09-09T00:39:08.817512376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-jj7hn,Uid:ecaf2959-9a08-4e00-948d-967e50257a25,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f5700566ec2ab2559ded5b850ed592ef5f99634fbdfeea7d003105ece2018e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:08.826175 kubelet[2733]: E0909 00:39:08.818567 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f5700566ec2ab2559ded5b850ed592ef5f99634fbdfeea7d003105ece2018e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:08.826175 kubelet[2733]: E0909 00:39:08.818620 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f5700566ec2ab2559ded5b850ed592ef5f99634fbdfeea7d003105ece2018e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6597668795-jj7hn" Sep 9 00:39:08.826175 kubelet[2733]: E0909 00:39:08.818640 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f5700566ec2ab2559ded5b850ed592ef5f99634fbdfeea7d003105ece2018e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6597668795-jj7hn" Sep 9 00:39:08.826621 kubelet[2733]: E0909 00:39:08.818702 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6597668795-jj7hn_calico-apiserver(ecaf2959-9a08-4e00-948d-967e50257a25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6597668795-jj7hn_calico-apiserver(ecaf2959-9a08-4e00-948d-967e50257a25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f5700566ec2ab2559ded5b850ed592ef5f99634fbdfeea7d003105ece2018e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6597668795-jj7hn" podUID="ecaf2959-9a08-4e00-948d-967e50257a25" Sep 9 00:39:08.856907 containerd[1580]: time="2025-09-09T00:39:08.856864832Z" level=info msg="CreateContainer within sandbox \"d22eb9021d83cd933f4bdf98684af1fb526cc8dc6b8f5e6de136391c75bb9bc4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:39:08.864545 containerd[1580]: time="2025-09-09T00:39:08.864494518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c56b7dd9f-z7qmz,Uid:e12878cb-06d1-48dd-b8a4-7a5373ea5ca3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f14003165b85df645d9736826d9cfcbc7b2b80e18a949a8bca33d507d3c5d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:08.864787 kubelet[2733]: E0909 00:39:08.864704 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f14003165b85df645d9736826d9cfcbc7b2b80e18a949a8bca33d507d3c5d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:08.864957 kubelet[2733]: E0909 00:39:08.864821 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f14003165b85df645d9736826d9cfcbc7b2b80e18a949a8bca33d507d3c5d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c56b7dd9f-z7qmz" Sep 9 00:39:08.864957 kubelet[2733]: E0909 00:39:08.864857 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f14003165b85df645d9736826d9cfcbc7b2b80e18a949a8bca33d507d3c5d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c56b7dd9f-z7qmz" Sep 9 00:39:08.864957 kubelet[2733]: E0909 00:39:08.864921 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c56b7dd9f-z7qmz_calico-system(e12878cb-06d1-48dd-b8a4-7a5373ea5ca3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c56b7dd9f-z7qmz_calico-system(e12878cb-06d1-48dd-b8a4-7a5373ea5ca3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f14003165b85df645d9736826d9cfcbc7b2b80e18a949a8bca33d507d3c5d60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c56b7dd9f-z7qmz" podUID="e12878cb-06d1-48dd-b8a4-7a5373ea5ca3" Sep 9 00:39:08.888153 containerd[1580]: time="2025-09-09T00:39:08.888052104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-hjwdq,Uid:f1786461-4df1-4450-84a8-8898a47476b6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"9cd858b0ba1a7e59e8c094a7595961ad2aa9efd83c87acc2d2bb2fe79b483ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:08.888507 kubelet[2733]: E0909 00:39:08.888429 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cd858b0ba1a7e59e8c094a7595961ad2aa9efd83c87acc2d2bb2fe79b483ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:08.888569 kubelet[2733]: E0909 00:39:08.888539 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cd858b0ba1a7e59e8c094a7595961ad2aa9efd83c87acc2d2bb2fe79b483ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6597668795-hjwdq" Sep 9 00:39:08.888599 kubelet[2733]: E0909 00:39:08.888574 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cd858b0ba1a7e59e8c094a7595961ad2aa9efd83c87acc2d2bb2fe79b483ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6597668795-hjwdq" Sep 9 00:39:08.888704 kubelet[2733]: E0909 00:39:08.888643 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6597668795-hjwdq_calico-apiserver(f1786461-4df1-4450-84a8-8898a47476b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6597668795-hjwdq_calico-apiserver(f1786461-4df1-4450-84a8-8898a47476b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cd858b0ba1a7e59e8c094a7595961ad2aa9efd83c87acc2d2bb2fe79b483ad1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6597668795-hjwdq" podUID="f1786461-4df1-4450-84a8-8898a47476b6" Sep 9 00:39:08.925150 containerd[1580]: time="2025-09-09T00:39:08.925004064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xr2nx,Uid:11cfeb17-60a0-4315-be58-d803d500749a,Namespace:calico-system,Attempt:0,}" Sep 9 00:39:08.925150 containerd[1580]: time="2025-09-09T00:39:08.925086540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bfc6bf866-jhst4,Uid:cc114c26-2614-40fd-b788-c9acb40d825d,Namespace:calico-system,Attempt:0,}" Sep 9 00:39:08.976810 kubelet[2733]: E0909 00:39:08.925066 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:08.976876 containerd[1580]: time="2025-09-09T00:39:08.925365244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-khqs9,Uid:b21b64d9-46e3-4b3a-9c25-2a3c85894cc9,Namespace:kube-system,Attempt:0,}" Sep 9 
00:39:09.103790 containerd[1580]: time="2025-09-09T00:39:09.103223541Z" level=info msg="Container 78eb3dd5a2d64950b4dcde20366c009c9afe71212a813b2cfad1c49266fe8f46: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:39:09.138029 containerd[1580]: time="2025-09-09T00:39:09.137950855Z" level=error msg="Failed to destroy network for sandbox \"86c0fdf02d704c20742518f21e9194b4305b5bbfd89ec7360ce698b331d4e5a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:09.232491 containerd[1580]: time="2025-09-09T00:39:09.232439792Z" level=error msg="Failed to destroy network for sandbox \"ae6531d208c409d0522c898e5dd648b11bbc837b87579942cacb38907b0c2c8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:09.236834 containerd[1580]: time="2025-09-09T00:39:09.236689563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bfc6bf866-jhst4,Uid:cc114c26-2614-40fd-b788-c9acb40d825d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae6531d208c409d0522c898e5dd648b11bbc837b87579942cacb38907b0c2c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:09.237020 kubelet[2733]: E0909 00:39:09.236974 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae6531d208c409d0522c898e5dd648b11bbc837b87579942cacb38907b0c2c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:09.237109 kubelet[2733]: E0909 00:39:09.237043 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae6531d208c409d0522c898e5dd648b11bbc837b87579942cacb38907b0c2c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bfc6bf866-jhst4" Sep 9 00:39:09.237109 kubelet[2733]: E0909 00:39:09.237065 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae6531d208c409d0522c898e5dd648b11bbc837b87579942cacb38907b0c2c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bfc6bf866-jhst4" Sep 9 00:39:09.237170 kubelet[2733]: E0909 00:39:09.237115 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bfc6bf866-jhst4_calico-system(cc114c26-2614-40fd-b788-c9acb40d825d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bfc6bf866-jhst4_calico-system(cc114c26-2614-40fd-b788-c9acb40d825d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae6531d208c409d0522c898e5dd648b11bbc837b87579942cacb38907b0c2c8f\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bfc6bf866-jhst4" podUID="cc114c26-2614-40fd-b788-c9acb40d825d" Sep 9 00:39:09.238169 containerd[1580]: time="2025-09-09T00:39:09.238140181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xr2nx,Uid:11cfeb17-60a0-4315-be58-d803d500749a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c0fdf02d704c20742518f21e9194b4305b5bbfd89ec7360ce698b331d4e5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:09.238452 kubelet[2733]: E0909 00:39:09.238427 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c0fdf02d704c20742518f21e9194b4305b5bbfd89ec7360ce698b331d4e5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:09.239105 kubelet[2733]: E0909 00:39:09.239060 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c0fdf02d704c20742518f21e9194b4305b5bbfd89ec7360ce698b331d4e5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-xr2nx" Sep 9 00:39:09.239105 kubelet[2733]: E0909 00:39:09.239097 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c0fdf02d704c20742518f21e9194b4305b5bbfd89ec7360ce698b331d4e5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-xr2nx" Sep 9 00:39:09.239595 kubelet[2733]: E0909 00:39:09.239141 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-xr2nx_calico-system(11cfeb17-60a0-4315-be58-d803d500749a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-xr2nx_calico-system(11cfeb17-60a0-4315-be58-d803d500749a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86c0fdf02d704c20742518f21e9194b4305b5bbfd89ec7360ce698b331d4e5a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-xr2nx" podUID="11cfeb17-60a0-4315-be58-d803d500749a" Sep 9 00:39:09.246546 containerd[1580]: time="2025-09-09T00:39:09.246503556Z" level=info msg="CreateContainer within sandbox \"d22eb9021d83cd933f4bdf98684af1fb526cc8dc6b8f5e6de136391c75bb9bc4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"78eb3dd5a2d64950b4dcde20366c009c9afe71212a813b2cfad1c49266fe8f46\"" Sep 9 00:39:09.247519 containerd[1580]: time="2025-09-09T00:39:09.247450786Z" level=info msg="StartContainer for 
\"78eb3dd5a2d64950b4dcde20366c009c9afe71212a813b2cfad1c49266fe8f46\"" Sep 9 00:39:09.261274 containerd[1580]: time="2025-09-09T00:39:09.261198645Z" level=info msg="connecting to shim 78eb3dd5a2d64950b4dcde20366c009c9afe71212a813b2cfad1c49266fe8f46" address="unix:///run/containerd/s/b61d477419b044fa94563b46377c133b294b861b32ff7cc04f44859c51f702fd" protocol=ttrpc version=3 Sep 9 00:39:09.291157 systemd[1]: Started cri-containerd-78eb3dd5a2d64950b4dcde20366c009c9afe71212a813b2cfad1c49266fe8f46.scope - libcontainer container 78eb3dd5a2d64950b4dcde20366c009c9afe71212a813b2cfad1c49266fe8f46. Sep 9 00:39:09.296785 containerd[1580]: time="2025-09-09T00:39:09.296304581Z" level=error msg="Failed to destroy network for sandbox \"61fbdaf2a528eee51fba35349e5b8ab268aa3c61487e288fa760011f31184fc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:09.297802 containerd[1580]: time="2025-09-09T00:39:09.297727006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-khqs9,Uid:b21b64d9-46e3-4b3a-9c25-2a3c85894cc9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61fbdaf2a528eee51fba35349e5b8ab268aa3c61487e288fa760011f31184fc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:09.298151 kubelet[2733]: E0909 00:39:09.298112 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61fbdaf2a528eee51fba35349e5b8ab268aa3c61487e288fa760011f31184fc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:39:09.298326 kubelet[2733]: E0909 00:39:09.298277 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61fbdaf2a528eee51fba35349e5b8ab268aa3c61487e288fa760011f31184fc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-khqs9" Sep 9 00:39:09.298326 kubelet[2733]: E0909 00:39:09.298313 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61fbdaf2a528eee51fba35349e5b8ab268aa3c61487e288fa760011f31184fc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-khqs9" Sep 9 00:39:09.298511 kubelet[2733]: E0909 00:39:09.298376 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-khqs9_kube-system(b21b64d9-46e3-4b3a-9c25-2a3c85894cc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-khqs9_kube-system(b21b64d9-46e3-4b3a-9c25-2a3c85894cc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61fbdaf2a528eee51fba35349e5b8ab268aa3c61487e288fa760011f31184fc3\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-khqs9" podUID="b21b64d9-46e3-4b3a-9c25-2a3c85894cc9" Sep 9 00:39:09.342071 containerd[1580]: time="2025-09-09T00:39:09.342019333Z" level=info msg="StartContainer for \"78eb3dd5a2d64950b4dcde20366c009c9afe71212a813b2cfad1c49266fe8f46\" returns successfully" Sep 9 00:39:09.424020 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:39:09.424803 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 9 00:39:09.687244 systemd[1]: run-netns-cni\x2dafd045dc\x2d0bf1\x2df983\x2d92e7\x2daf22fdd4aa91.mount: Deactivated successfully. Sep 9 00:39:09.687369 systemd[1]: run-netns-cni\x2dc046f494\x2d4236\x2da5a9\x2d5d84\x2d39c175e301b2.mount: Deactivated successfully. Sep 9 00:39:09.925278 kubelet[2733]: E0909 00:39:09.925226 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:09.926183 containerd[1580]: time="2025-09-09T00:39:09.925596702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zcdlm,Uid:2acf78cf-85e1-4c7c-9660-677990131edf,Namespace:kube-system,Attempt:0,}" Sep 9 00:39:09.926183 containerd[1580]: time="2025-09-09T00:39:09.925994620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5g5jz,Uid:8af07ab4-1ceb-404b-af22-0045bd45398e,Namespace:calico-system,Attempt:0,}" Sep 9 00:39:10.085030 kubelet[2733]: I0909 00:39:10.084876 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lgwcv" podStartSLOduration=2.038309334 podStartE2EDuration="26.08486142s" podCreationTimestamp="2025-09-09 00:38:44 +0000 UTC" firstStartedPulling="2025-09-09 00:38:44.749528382 +0000 UTC m=+20.927485876" lastFinishedPulling="2025-09-09 00:39:08.796149628 +0000 UTC m=+44.974037962" observedRunningTime="2025-09-09 00:39:10.075637268 +0000 UTC m=+46.253525602" watchObservedRunningTime="2025-09-09 00:39:10.08486142 +0000 UTC m=+46.262749754" Sep 9 00:39:10.105926 kubelet[2733]: I0909 00:39:10.105873 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc114c26-2614-40fd-b788-c9acb40d825d-whisker-ca-bundle\") pod \"cc114c26-2614-40fd-b788-c9acb40d825d\" (UID: \"cc114c26-2614-40fd-b788-c9acb40d825d\") " Sep 9 00:39:10.105926 kubelet[2733]: I0909 00:39:10.105930 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc114c26-2614-40fd-b788-c9acb40d825d-whisker-backend-key-pair\") pod \"cc114c26-2614-40fd-b788-c9acb40d825d\" (UID: \"cc114c26-2614-40fd-b788-c9acb40d825d\") " Sep 9 00:39:10.106112 kubelet[2733]: I0909 00:39:10.105972 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7795f\" (UniqueName: \"kubernetes.io/projected/cc114c26-2614-40fd-b788-c9acb40d825d-kube-api-access-7795f\") pod \"cc114c26-2614-40fd-b788-c9acb40d825d\" (UID: \"cc114c26-2614-40fd-b788-c9acb40d825d\") " Sep 9 00:39:10.106614 kubelet[2733]: I0909 00:39:10.106484 2733 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc114c26-2614-40fd-b788-c9acb40d825d-whisker-ca-bundle" (OuterVolumeSpecName:
"whisker-ca-bundle") pod "cc114c26-2614-40fd-b788-c9acb40d825d" (UID: "cc114c26-2614-40fd-b788-c9acb40d825d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:39:10.114480 systemd-networkd[1476]: cali1aae67eab09: Link UP Sep 9 00:39:10.117691 systemd[1]: var-lib-kubelet-pods-cc114c26\x2d2614\x2d40fd\x2db788\x2dc9acb40d825d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7795f.mount: Deactivated successfully. Sep 9 00:39:10.118106 systemd-networkd[1476]: cali1aae67eab09: Gained carrier Sep 9 00:39:10.122573 systemd[1]: var-lib-kubelet-pods-cc114c26\x2d2614\x2d40fd\x2db788\x2dc9acb40d825d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 00:39:10.124993 kubelet[2733]: I0909 00:39:10.124934 2733 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc114c26-2614-40fd-b788-c9acb40d825d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cc114c26-2614-40fd-b788-c9acb40d825d" (UID: "cc114c26-2614-40fd-b788-c9acb40d825d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:39:10.125238 kubelet[2733]: I0909 00:39:10.125201 2733 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc114c26-2614-40fd-b788-c9acb40d825d-kube-api-access-7795f" (OuterVolumeSpecName: "kube-api-access-7795f") pod "cc114c26-2614-40fd-b788-c9acb40d825d" (UID: "cc114c26-2614-40fd-b788-c9acb40d825d"). InnerVolumeSpecName "kube-api-access-7795f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:39:10.140395 containerd[1580]: 2025-09-09 00:39:09.956 [INFO][4062] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:39:10.140395 containerd[1580]: 2025-09-09 00:39:09.977 [INFO][4062] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5g5jz-eth0 csi-node-driver- calico-system 8af07ab4-1ceb-404b-af22-0045bd45398e 720 0 2025-09-09 00:38:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5g5jz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1aae67eab09 [] [] }} ContainerID="79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" Namespace="calico-system" Pod="csi-node-driver-5g5jz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5g5jz-" Sep 9 00:39:10.140395 containerd[1580]: 2025-09-09 00:39:09.977 [INFO][4062] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" Namespace="calico-system" Pod="csi-node-driver-5g5jz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5g5jz-eth0" Sep 9 00:39:10.140395 containerd[1580]: 2025-09-09 00:39:10.041 [INFO][4087] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" HandleID="k8s-pod-network.79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" Workload="localhost-k8s-csi--node--driver--5g5jz-eth0" Sep 9 00:39:10.140640 containerd[1580]: 2025-09-09 00:39:10.042 
[INFO][4087] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" HandleID="k8s-pod-network.79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" Workload="localhost-k8s-csi--node--driver--5g5jz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001394b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5g5jz", "timestamp":"2025-09-09 00:39:10.041659287 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:39:10.140640 containerd[1580]: 2025-09-09 00:39:10.042 [INFO][4087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:39:10.140640 containerd[1580]: 2025-09-09 00:39:10.042 [INFO][4087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:39:10.140640 containerd[1580]: 2025-09-09 00:39:10.042 [INFO][4087] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:39:10.140640 containerd[1580]: 2025-09-09 00:39:10.059 [INFO][4087] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" host="localhost" Sep 9 00:39:10.140640 containerd[1580]: 2025-09-09 00:39:10.066 [INFO][4087] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:39:10.140640 containerd[1580]: 2025-09-09 00:39:10.071 [INFO][4087] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:39:10.140640 containerd[1580]: 2025-09-09 00:39:10.074 [INFO][4087] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:10.140640 containerd[1580]: 2025-09-09 00:39:10.077 [INFO][4087] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:10.140640 containerd[1580]: 2025-09-09 00:39:10.077 [INFO][4087] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" host="localhost" Sep 9 00:39:10.140896 containerd[1580]: 2025-09-09 00:39:10.079 [INFO][4087] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b Sep 9 00:39:10.140896 containerd[1580]: 2025-09-09 00:39:10.083 [INFO][4087] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" host="localhost" Sep 9 00:39:10.140896 containerd[1580]: 2025-09-09 00:39:10.094 [INFO][4087] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" host="localhost" Sep 9 00:39:10.140896 containerd[1580]: 2025-09-09 00:39:10.094 [INFO][4087] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" host="localhost" Sep 9 00:39:10.140896 containerd[1580]: 2025-09-09 00:39:10.094 [INFO][4087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:39:10.140896 containerd[1580]: 2025-09-09 00:39:10.094 [INFO][4087] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" HandleID="k8s-pod-network.79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" Workload="localhost-k8s-csi--node--driver--5g5jz-eth0" Sep 9 00:39:10.141121 containerd[1580]: 2025-09-09 00:39:10.098 [INFO][4062] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" Namespace="calico-system" Pod="csi-node-driver-5g5jz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5g5jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5g5jz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8af07ab4-1ceb-404b-af22-0045bd45398e", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5g5jz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1aae67eab09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:10.141174 containerd[1580]: 2025-09-09 00:39:10.098 [INFO][4062] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" Namespace="calico-system" Pod="csi-node-driver-5g5jz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5g5jz-eth0" Sep 9 00:39:10.141174 containerd[1580]: 2025-09-09 00:39:10.098 [INFO][4062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1aae67eab09 ContainerID="79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" Namespace="calico-system" Pod="csi-node-driver-5g5jz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5g5jz-eth0" Sep 9 00:39:10.141174 containerd[1580]: 2025-09-09 00:39:10.125 [INFO][4062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" Namespace="calico-system" Pod="csi-node-driver-5g5jz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5g5jz-eth0" Sep 9 00:39:10.141240 containerd[1580]: 2025-09-09 00:39:10.126 [INFO][4062] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" Namespace="calico-system" Pod="csi-node-driver-5g5jz"
WorkloadEndpoint="localhost-k8s-csi--node--driver--5g5jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5g5jz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8af07ab4-1ceb-404b-af22-0045bd45398e", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b", Pod:"csi-node-driver-5g5jz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1aae67eab09", MAC:"b6:f5:c7:f5:5e:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:10.141287 containerd[1580]: 2025-09-09 00:39:10.136 [INFO][4062] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" Namespace="calico-system" Pod="csi-node-driver-5g5jz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5g5jz-eth0" Sep 9 00:39:10.198875 systemd-networkd[1476]: cali1d95484dfb4: Link UP Sep 9 00:39:10.199704 systemd-networkd[1476]: cali1d95484dfb4: Gained carrier Sep 9 00:39:10.210284 kubelet[2733]: I0909 00:39:10.210232 2733 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7795f\" (UniqueName: \"kubernetes.io/projected/cc114c26-2614-40fd-b788-c9acb40d825d-kube-api-access-7795f\") on node \"localhost\" DevicePath \"\"" Sep 9 00:39:10.211343 kubelet[2733]: I0909 00:39:10.211166 2733 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc114c26-2614-40fd-b788-c9acb40d825d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:39:10.211343 kubelet[2733]: I0909 00:39:10.211180 2733 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc114c26-2614-40fd-b788-c9acb40d825d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:39:10.218452 containerd[1580]: 2025-09-09 00:39:09.958 [INFO][4055] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:39:10.218452 containerd[1580]: 2025-09-09 00:39:09.977 [INFO][4055] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0 coredns-674b8bbfcf- kube-system 2acf78cf-85e1-4c7c-9660-677990131edf 838 0 2025-09-09 00:38:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zcdlm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1d95484dfb4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" Namespace="kube-system" Pod="coredns-674b8bbfcf-zcdlm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zcdlm-" Sep 9 00:39:10.218452 containerd[1580]: 2025-09-09 00:39:09.977 [INFO][4055] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" Namespace="kube-system" Pod="coredns-674b8bbfcf-zcdlm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0" Sep 9 00:39:10.218452 containerd[1580]: 2025-09-09 00:39:10.041 [INFO][4085] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" HandleID="k8s-pod-network.8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" Workload="localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0" Sep 9 00:39:10.218644 containerd[1580]: 2025-09-09 00:39:10.042 [INFO][4085] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" HandleID="k8s-pod-network.8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" Workload="localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000168460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zcdlm", "timestamp":"2025-09-09 00:39:10.041215092 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:39:10.218644 containerd[1580]: 2025-09-09 00:39:10.042 [INFO][4085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:39:10.218644 containerd[1580]: 2025-09-09 00:39:10.094 [INFO][4085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:39:10.218644 containerd[1580]: 2025-09-09 00:39:10.094 [INFO][4085] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:39:10.218644 containerd[1580]: 2025-09-09 00:39:10.159 [INFO][4085] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" host="localhost" Sep 9 00:39:10.218644 containerd[1580]: 2025-09-09 00:39:10.166 [INFO][4085] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:39:10.218644 containerd[1580]: 2025-09-09 00:39:10.173 [INFO][4085] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:39:10.218644 containerd[1580]: 2025-09-09 00:39:10.174 [INFO][4085] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:10.218644 containerd[1580]: 2025-09-09 00:39:10.178 [INFO][4085] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:10.218644 containerd[1580]: 2025-09-09 00:39:10.178 [INFO][4085] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" host="localhost" Sep 9 00:39:10.218908 containerd[1580]: 2025-09-09 00:39:10.181 [INFO][4085] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42 Sep 9 00:39:10.218908 containerd[1580]: 2025-09-09 00:39:10.185 [INFO][4085] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" host="localhost" Sep 9 00:39:10.218908 containerd[1580]: 2025-09-09 00:39:10.191 [INFO][4085] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" host="localhost" Sep 9 00:39:10.218908 containerd[1580]: 2025-09-09 00:39:10.191 [INFO][4085] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" host="localhost" Sep 9 00:39:10.218908 containerd[1580]: 2025-09-09 00:39:10.191 [INFO][4085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:39:10.218908 containerd[1580]: 2025-09-09 00:39:10.191 [INFO][4085] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" HandleID="k8s-pod-network.8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" Workload="localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0" Sep 9 00:39:10.219032 containerd[1580]: 2025-09-09 00:39:10.195 [INFO][4055] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" Namespace="kube-system" Pod="coredns-674b8bbfcf-zcdlm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2acf78cf-85e1-4c7c-9660-677990131edf", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zcdlm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d95484dfb4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:10.219109 containerd[1580]: 2025-09-09 00:39:10.195 [INFO][4055] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" Namespace="kube-system" Pod="coredns-674b8bbfcf-zcdlm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0" Sep 9 00:39:10.219109 containerd[1580]: 2025-09-09 00:39:10.195 [INFO][4055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d95484dfb4 ContainerID="8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" Namespace="kube-system" Pod="coredns-674b8bbfcf-zcdlm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0" Sep 9 00:39:10.219109 containerd[1580]: 2025-09-09 00:39:10.199 [INFO][4055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" Namespace="kube-system" Pod="coredns-674b8bbfcf-zcdlm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0" Sep 9 00:39:10.219175
containerd[1580]: 2025-09-09 00:39:10.200 [INFO][4055] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" Namespace="kube-system" Pod="coredns-674b8bbfcf-zcdlm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2acf78cf-85e1-4c7c-9660-677990131edf", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42", Pod:"coredns-674b8bbfcf-zcdlm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d95484dfb4", MAC:"62:d7:17:2d:36:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:10.219175 containerd[1580]: 2025-09-09 00:39:10.213 [INFO][4055] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" Namespace="kube-system" Pod="coredns-674b8bbfcf-zcdlm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zcdlm-eth0" Sep 9 00:39:10.273107 containerd[1580]: time="2025-09-09T00:39:10.272620631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78eb3dd5a2d64950b4dcde20366c009c9afe71212a813b2cfad1c49266fe8f46\" id:\"459faa3e2577dda00be34196aed556b3a1b5796a30a1373626e075f7b5a0c075\" pid:4124 exit_status:1 exited_at:{seconds:1757378350 nanos:272203737}" Sep 9 00:39:10.316964 containerd[1580]: time="2025-09-09T00:39:10.316888969Z" level=info msg="connecting to shim 8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42" address="unix:///run/containerd/s/66d54d967ff939370b832e8920fe8646beaa2cea2d268ffda548a6c224b28aa1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:39:10.317431 containerd[1580]: time="2025-09-09T00:39:10.317381976Z" level=info msg="connecting to shim 79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b" address="unix:///run/containerd/s/3b0d51d4ec6f51444caf9b0c3d114b72a91e09e1fa8a2a1c8282e35db60a0e87" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:39:10.345905 systemd[1]:
Started cri-containerd-79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b.scope - libcontainer container 79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b. Sep 9 00:39:10.347727 systemd[1]: Started cri-containerd-8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42.scope - libcontainer container 8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42. Sep 9 00:39:10.363000 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:39:10.364349 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:39:10.384595 containerd[1580]: time="2025-09-09T00:39:10.384549265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5g5jz,Uid:8af07ab4-1ceb-404b-af22-0045bd45398e,Namespace:calico-system,Attempt:0,} returns sandbox id \"79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b\"" Sep 9 00:39:10.387889 containerd[1580]: time="2025-09-09T00:39:10.387856233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:39:10.404542 containerd[1580]: time="2025-09-09T00:39:10.404496037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zcdlm,Uid:2acf78cf-85e1-4c7c-9660-677990131edf,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42\"" Sep 9 00:39:10.405366 kubelet[2733]: E0909 00:39:10.405334 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:10.409645 containerd[1580]: time="2025-09-09T00:39:10.409595405Z" level=info msg="CreateContainer within sandbox \"8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:39:10.419543 containerd[1580]: time="2025-09-09T00:39:10.419468466Z" level=info msg="Container e189d1798cd2f0ff11ec9c56e1268aa4f3cc9fafe34aa91f4411bf9207c7d319: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:39:10.426037 containerd[1580]: time="2025-09-09T00:39:10.426004625Z" level=info msg="CreateContainer within sandbox \"8d286240e101f53e5e786a71292f4cbf5c6656c3ff9c98aadda7f50a8144aa42\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e189d1798cd2f0ff11ec9c56e1268aa4f3cc9fafe34aa91f4411bf9207c7d319\"" Sep 9 00:39:10.426542 containerd[1580]: time="2025-09-09T00:39:10.426466282Z" level=info msg="StartContainer for \"e189d1798cd2f0ff11ec9c56e1268aa4f3cc9fafe34aa91f4411bf9207c7d319\"" Sep 9 00:39:10.427287 containerd[1580]: time="2025-09-09T00:39:10.427261898Z" level=info msg="connecting to shim e189d1798cd2f0ff11ec9c56e1268aa4f3cc9fafe34aa91f4411bf9207c7d319" address="unix:///run/containerd/s/66d54d967ff939370b832e8920fe8646beaa2cea2d268ffda548a6c224b28aa1" protocol=ttrpc version=3 Sep 9 00:39:10.451906 systemd[1]: Started cri-containerd-e189d1798cd2f0ff11ec9c56e1268aa4f3cc9fafe34aa91f4411bf9207c7d319.scope - libcontainer container e189d1798cd2f0ff11ec9c56e1268aa4f3cc9fafe34aa91f4411bf9207c7d319. 
Sep 9 00:39:10.640229 containerd[1580]: time="2025-09-09T00:39:10.640104659Z" level=info msg="StartContainer for \"e189d1798cd2f0ff11ec9c56e1268aa4f3cc9fafe34aa91f4411bf9207c7d319\" returns successfully" Sep 9 00:39:11.056808 kubelet[2733]: E0909 00:39:11.056312 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:11.071429 systemd[1]: Removed slice kubepods-besteffort-podcc114c26_2614_40fd_b788_c9acb40d825d.slice - libcontainer container kubepods-besteffort-podcc114c26_2614_40fd_b788_c9acb40d825d.slice. Sep 9 00:39:11.092301 kubelet[2733]: I0909 00:39:11.091360 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zcdlm" podStartSLOduration=41.091338461 podStartE2EDuration="41.091338461s" podCreationTimestamp="2025-09-09 00:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:39:11.074554138 +0000 UTC m=+47.252442472" watchObservedRunningTime="2025-09-09 00:39:11.091338461 +0000 UTC m=+47.269226795" Sep 9 00:39:11.159868 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:59704.service - OpenSSH per-connection server daemon (10.0.0.1:59704). Sep 9 00:39:11.172430 systemd[1]: Created slice kubepods-besteffort-podcd28b2ea_1aed_4851_9150_c8a7aab1e753.slice - libcontainer container kubepods-besteffort-podcd28b2ea_1aed_4851_9150_c8a7aab1e753.slice. Sep 9 00:39:11.210054 systemd-networkd[1476]: cali1aae67eab09: Gained IPv6LL Sep 9 00:39:11.221405 kubelet[2733]: I0909 00:39:11.221279 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd28b2ea-1aed-4851-9150-c8a7aab1e753-whisker-ca-bundle\") pod \"whisker-698765db9-qt98n\" (UID: \"cd28b2ea-1aed-4851-9150-c8a7aab1e753\") " pod="calico-system/whisker-698765db9-qt98n" Sep 9 00:39:11.221405 kubelet[2733]: I0909 00:39:11.221325 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cd28b2ea-1aed-4851-9150-c8a7aab1e753-whisker-backend-key-pair\") pod \"whisker-698765db9-qt98n\" (UID: \"cd28b2ea-1aed-4851-9150-c8a7aab1e753\") " pod="calico-system/whisker-698765db9-qt98n" Sep 9 00:39:11.221405 kubelet[2733]: I0909 00:39:11.221341 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h85h\" (UniqueName: \"kubernetes.io/projected/cd28b2ea-1aed-4851-9150-c8a7aab1e753-kube-api-access-6h85h\") pod \"whisker-698765db9-qt98n\" (UID: \"cd28b2ea-1aed-4851-9150-c8a7aab1e753\") " pod="calico-system/whisker-698765db9-qt98n" Sep 9 00:39:11.228776 containerd[1580]: time="2025-09-09T00:39:11.228719361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78eb3dd5a2d64950b4dcde20366c009c9afe71212a813b2cfad1c49266fe8f46\" id:\"31c4b8bb789b0636e6dae92082673eea675c918d7fb286f284d3a4311e1d9f4f\" pid:4410 exit_status:1 exited_at:{seconds:1757378351 nanos:227773944}" Sep 9 00:39:11.253656 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 59704 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:39:11.255528 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:39:11.260605 systemd-logind[1516]: New session 9 of user 
core. Sep 9 00:39:11.268902 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:39:11.410997 systemd-networkd[1476]: vxlan.calico: Link UP Sep 9 00:39:11.411010 systemd-networkd[1476]: vxlan.calico: Gained carrier Sep 9 00:39:11.435045 sshd[4427]: Connection closed by 10.0.0.1 port 59704 Sep 9 00:39:11.434961 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Sep 9 00:39:11.446814 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:39:11.447533 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:59704.service: Deactivated successfully. Sep 9 00:39:11.449885 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:39:11.451644 systemd-logind[1516]: Removed session 9. Sep 9 00:39:11.479799 containerd[1580]: time="2025-09-09T00:39:11.479679827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698765db9-qt98n,Uid:cd28b2ea-1aed-4851-9150-c8a7aab1e753,Namespace:calico-system,Attempt:0,}" Sep 9 00:39:11.594712 systemd-networkd[1476]: cali0cacc5c1d38: Link UP Sep 9 00:39:11.595945 systemd-networkd[1476]: cali0cacc5c1d38: Gained carrier Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.519 [INFO][4478] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--698765db9--qt98n-eth0 whisker-698765db9- calico-system cd28b2ea-1aed-4851-9150-c8a7aab1e753 995 0 2025-09-09 00:39:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:698765db9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-698765db9-qt98n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0cacc5c1d38 [] [] }} ContainerID="66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" Namespace="calico-system" Pod="whisker-698765db9-qt98n" WorkloadEndpoint="localhost-k8s-whisker--698765db9--qt98n-" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.519 [INFO][4478] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" Namespace="calico-system" Pod="whisker-698765db9-qt98n" WorkloadEndpoint="localhost-k8s-whisker--698765db9--qt98n-eth0" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.550 [INFO][4495] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" HandleID="k8s-pod-network.66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" Workload="localhost-k8s-whisker--698765db9--qt98n-eth0" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.550 [INFO][4495] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" HandleID="k8s-pod-network.66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" Workload="localhost-k8s-whisker--698765db9--qt98n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-698765db9-qt98n", "timestamp":"2025-09-09 00:39:11.550501379 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:39:11.607991 
containerd[1580]: 2025-09-09 00:39:11.550 [INFO][4495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.550 [INFO][4495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.550 [INFO][4495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.557 [INFO][4495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" host="localhost" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.561 [INFO][4495] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.566 [INFO][4495] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.568 [INFO][4495] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.570 [INFO][4495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.570 [INFO][4495] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" host="localhost" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.575 [INFO][4495] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6 Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.582 [INFO][4495] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" host="localhost" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.586 [INFO][4495] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" host="localhost" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.587 [INFO][4495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" host="localhost" Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.587 [INFO][4495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
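The assignArgs=ipam.AutoAssignArgs{...} dump above is Calico's IPAM request struct printed verbatim by the CNI plugin. A hedged sketch of issuing the same request through libcalico-go follows; client construction is simplified, the exact AutoAssign return type differs across Calico versions, and the IntendedUse field (logged as "Workload") is omitted because its Go type is version-dependent:

package main

import (
	"context"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/apiconfig"
	client "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	// Loads datastore config from the default locations / environment.
	cfg, err := apiconfig.LoadClientConfig("")
	if err != nil {
		log.Fatal(err)
	}
	c, err := client.New(*cfg)
	if err != nil {
		log.Fatal(err)
	}

	handle := "k8s-pod-network.<container-id>" // HandleID pattern from the log

	// Mirrors the AutoAssignArgs printed in the log: one IPv4, no IPv6,
	// with namespace/node/pod recorded as allocation attributes.
	v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
		Num4:     1,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-system",
			"node":      "localhost",
			"pod":       "whisker-698765db9-qt98n",
		},
		Hostname: "localhost",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("assigned: %v", v4) // e.g. 192.168.88.131/26 per the log
}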
Sep 9 00:39:11.607991 containerd[1580]: 2025-09-09 00:39:11.587 [INFO][4495] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" HandleID="k8s-pod-network.66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" Workload="localhost-k8s-whisker--698765db9--qt98n-eth0" Sep 9 00:39:11.608677 containerd[1580]: 2025-09-09 00:39:11.591 [INFO][4478] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" Namespace="calico-system" Pod="whisker-698765db9-qt98n" WorkloadEndpoint="localhost-k8s-whisker--698765db9--qt98n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--698765db9--qt98n-eth0", GenerateName:"whisker-698765db9-", Namespace:"calico-system", SelfLink:"", UID:"cd28b2ea-1aed-4851-9150-c8a7aab1e753", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 39, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"698765db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-698765db9-qt98n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0cacc5c1d38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:11.608677 containerd[1580]: 2025-09-09 00:39:11.591 [INFO][4478] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" Namespace="calico-system" Pod="whisker-698765db9-qt98n" WorkloadEndpoint="localhost-k8s-whisker--698765db9--qt98n-eth0" Sep 9 00:39:11.608677 containerd[1580]: 2025-09-09 00:39:11.591 [INFO][4478] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cacc5c1d38 ContainerID="66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" Namespace="calico-system" Pod="whisker-698765db9-qt98n" WorkloadEndpoint="localhost-k8s-whisker--698765db9--qt98n-eth0" Sep 9 00:39:11.608677 containerd[1580]: 2025-09-09 00:39:11.596 [INFO][4478] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" Namespace="calico-system" Pod="whisker-698765db9-qt98n" WorkloadEndpoint="localhost-k8s-whisker--698765db9--qt98n-eth0" Sep 9 00:39:11.608677 containerd[1580]: 2025-09-09 00:39:11.597 [INFO][4478] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" Namespace="calico-system" Pod="whisker-698765db9-qt98n" WorkloadEndpoint="localhost-k8s-whisker--698765db9--qt98n-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--698765db9--qt98n-eth0", GenerateName:"whisker-698765db9-", Namespace:"calico-system", SelfLink:"", UID:"cd28b2ea-1aed-4851-9150-c8a7aab1e753", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 39, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"698765db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6", Pod:"whisker-698765db9-qt98n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0cacc5c1d38", MAC:"06:92:3d:ec:ec:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:11.608677 containerd[1580]: 2025-09-09 00:39:11.603 [INFO][4478] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" Namespace="calico-system" Pod="whisker-698765db9-qt98n" WorkloadEndpoint="localhost-k8s-whisker--698765db9--qt98n-eth0" Sep 9 00:39:11.629032 containerd[1580]: time="2025-09-09T00:39:11.628977380Z" level=info msg="connecting to shim 66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6" address="unix:///run/containerd/s/42dc1250328069bdbdfddca77f62fc56b1527275fdf79a75f263465f5d25e7c9" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:39:11.659001 systemd[1]: Started cri-containerd-66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6.scope - libcontainer container 66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6. 
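Each "Started cri-containerd-<id>.scope - libcontainer container ..." line is systemd creating a transient scope unit around the shim-spawned processes so they land in their own cgroup. Roughly how a runtime requests that over the systemd D-Bus API, sketched with go-systemd (the unit name mirrors the log's pattern; the PID is a placeholder for the container's init process):

package main

import (
	"context"
	"log"
	"os"

	systemddbus "github.com/coreos/go-systemd/v22/dbus"
	godbus "github.com/godbus/dbus/v5"
)

func main() {
	ctx := context.Background()
	conn, err := systemddbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	pid := uint32(os.Getpid()) // placeholder; a runtime passes the container's PID
	props := []systemddbus.Property{
		systemddbus.PropDescription("libcontainer container <id>"),
		{Name: "PIDs", Value: godbus.MakeVariant([]uint32{pid})},
		// Delegate lets the payload manage its own cgroup subtree.
		{Name: "Delegate", Value: godbus.MakeVariant(true)},
	}

	done := make(chan string, 1)
	if _, err := conn.StartTransientUnitContext(ctx,
		"cri-containerd-demo.scope", "replace", props, done); err != nil {
		log.Fatal(err)
	}
	log.Println("job result:", <-done) // "done" on success
}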
Sep 9 00:39:11.674519 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:39:11.711720 containerd[1580]: time="2025-09-09T00:39:11.711637197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698765db9-qt98n,Uid:cd28b2ea-1aed-4851-9150-c8a7aab1e753,Namespace:calico-system,Attempt:0,} returns sandbox id \"66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6\"" Sep 9 00:39:11.927417 kubelet[2733]: I0909 00:39:11.927265 2733 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc114c26-2614-40fd-b788-c9acb40d825d" path="/var/lib/kubelet/pods/cc114c26-2614-40fd-b788-c9acb40d825d/volumes" Sep 9 00:39:12.061224 kubelet[2733]: E0909 00:39:12.061155 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:12.105928 systemd-networkd[1476]: cali1d95484dfb4: Gained IPv6LL Sep 9 00:39:12.153159 containerd[1580]: time="2025-09-09T00:39:12.153107756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:12.153894 containerd[1580]: time="2025-09-09T00:39:12.153871442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 00:39:12.155167 containerd[1580]: time="2025-09-09T00:39:12.155127984Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:12.157419 containerd[1580]: time="2025-09-09T00:39:12.157371872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:12.157892 containerd[1580]: time="2025-09-09T00:39:12.157852205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.76996342s" Sep 9 00:39:12.157892 containerd[1580]: time="2025-09-09T00:39:12.157878985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 00:39:12.158984 containerd[1580]: time="2025-09-09T00:39:12.158788585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:39:12.162450 containerd[1580]: time="2025-09-09T00:39:12.162414772Z" level=info msg="CreateContainer within sandbox \"79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:39:12.173024 containerd[1580]: time="2025-09-09T00:39:12.172973631Z" level=info msg="Container f453e6b910c48bf9b4ff6585c421c07d16f4385a60690c7a1b79b15f6c80739f: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:39:12.181807 containerd[1580]: time="2025-09-09T00:39:12.181680278Z" level=info msg="CreateContainer within sandbox \"79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"f453e6b910c48bf9b4ff6585c421c07d16f4385a60690c7a1b79b15f6c80739f\"" Sep 9 00:39:12.182488 containerd[1580]: time="2025-09-09T00:39:12.182239650Z" level=info msg="StartContainer for \"f453e6b910c48bf9b4ff6585c421c07d16f4385a60690c7a1b79b15f6c80739f\"" Sep 9 00:39:12.184105 containerd[1580]: time="2025-09-09T00:39:12.184076443Z" level=info msg="connecting to shim f453e6b910c48bf9b4ff6585c421c07d16f4385a60690c7a1b79b15f6c80739f" address="unix:///run/containerd/s/3b0d51d4ec6f51444caf9b0c3d114b72a91e09e1fa8a2a1c8282e35db60a0e87" protocol=ttrpc version=3 Sep 9 00:39:12.220021 systemd[1]: Started cri-containerd-f453e6b910c48bf9b4ff6585c421c07d16f4385a60690c7a1b79b15f6c80739f.scope - libcontainer container f453e6b910c48bf9b4ff6585c421c07d16f4385a60690c7a1b79b15f6c80739f. Sep 9 00:39:12.264522 containerd[1580]: time="2025-09-09T00:39:12.264468518Z" level=info msg="StartContainer for \"f453e6b910c48bf9b4ff6585c421c07d16f4385a60690c7a1b79b15f6c80739f\" returns successfully" Sep 9 00:39:13.067274 systemd-networkd[1476]: cali0cacc5c1d38: Gained IPv6LL Sep 9 00:39:13.070931 kubelet[2733]: E0909 00:39:13.070892 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:13.194063 systemd-networkd[1476]: vxlan.calico: Gained IPv6LL Sep 9 00:39:13.584376 containerd[1580]: time="2025-09-09T00:39:13.584296758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:13.585458 containerd[1580]: time="2025-09-09T00:39:13.585351440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 00:39:13.586409 containerd[1580]: time="2025-09-09T00:39:13.586359916Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:13.589001 containerd[1580]: time="2025-09-09T00:39:13.588950936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:13.589650 containerd[1580]: time="2025-09-09T00:39:13.589613812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.430788177s" Sep 9 00:39:13.589701 containerd[1580]: time="2025-09-09T00:39:13.589651803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 00:39:13.590796 containerd[1580]: time="2025-09-09T00:39:13.590556023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:39:13.596390 containerd[1580]: time="2025-09-09T00:39:13.596348452Z" level=info msg="CreateContainer within sandbox \"66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:39:13.614330 containerd[1580]: time="2025-09-09T00:39:13.614263507Z" level=info msg="Container 
5ac07aa5f90cc938eae038f4d965f5c9e086df8486909111c07d878dbf72e849: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:39:13.622767 containerd[1580]: time="2025-09-09T00:39:13.622707059Z" level=info msg="CreateContainer within sandbox \"66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5ac07aa5f90cc938eae038f4d965f5c9e086df8486909111c07d878dbf72e849\"" Sep 9 00:39:13.623275 containerd[1580]: time="2025-09-09T00:39:13.623244179Z" level=info msg="StartContainer for \"5ac07aa5f90cc938eae038f4d965f5c9e086df8486909111c07d878dbf72e849\"" Sep 9 00:39:13.624521 containerd[1580]: time="2025-09-09T00:39:13.624496993Z" level=info msg="connecting to shim 5ac07aa5f90cc938eae038f4d965f5c9e086df8486909111c07d878dbf72e849" address="unix:///run/containerd/s/42dc1250328069bdbdfddca77f62fc56b1527275fdf79a75f263465f5d25e7c9" protocol=ttrpc version=3 Sep 9 00:39:13.649999 systemd[1]: Started cri-containerd-5ac07aa5f90cc938eae038f4d965f5c9e086df8486909111c07d878dbf72e849.scope - libcontainer container 5ac07aa5f90cc938eae038f4d965f5c9e086df8486909111c07d878dbf72e849. Sep 9 00:39:13.698268 containerd[1580]: time="2025-09-09T00:39:13.698208722Z" level=info msg="StartContainer for \"5ac07aa5f90cc938eae038f4d965f5c9e086df8486909111c07d878dbf72e849\" returns successfully" Sep 9 00:39:14.071878 kubelet[2733]: E0909 00:39:14.071838 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:15.483906 containerd[1580]: time="2025-09-09T00:39:15.483814199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:15.484898 containerd[1580]: time="2025-09-09T00:39:15.484869353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 00:39:15.486479 containerd[1580]: time="2025-09-09T00:39:15.486396422Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:15.492922 containerd[1580]: time="2025-09-09T00:39:15.492858809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:15.494505 containerd[1580]: time="2025-09-09T00:39:15.494421545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.90382724s" Sep 9 00:39:15.494505 containerd[1580]: time="2025-09-09T00:39:15.494491598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 00:39:15.495663 containerd[1580]: time="2025-09-09T00:39:15.495617874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:39:15.500197 containerd[1580]: 
time="2025-09-09T00:39:15.500139072Z" level=info msg="CreateContainer within sandbox \"79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:39:15.527501 containerd[1580]: time="2025-09-09T00:39:15.527431830Z" level=info msg="Container b04c4f7eed6afa95483c2eaba81a7c70689dd747f33bf0f1cbd6d73c9bc2cb10: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:39:15.546415 containerd[1580]: time="2025-09-09T00:39:15.546363643Z" level=info msg="CreateContainer within sandbox \"79974aca70a8c60c9041b6f7c5d37f8f0ae69dc9e368da2fd086cadd39a0454b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b04c4f7eed6afa95483c2eaba81a7c70689dd747f33bf0f1cbd6d73c9bc2cb10\"" Sep 9 00:39:15.547465 containerd[1580]: time="2025-09-09T00:39:15.547374693Z" level=info msg="StartContainer for \"b04c4f7eed6afa95483c2eaba81a7c70689dd747f33bf0f1cbd6d73c9bc2cb10\"" Sep 9 00:39:15.550460 containerd[1580]: time="2025-09-09T00:39:15.550416921Z" level=info msg="connecting to shim b04c4f7eed6afa95483c2eaba81a7c70689dd747f33bf0f1cbd6d73c9bc2cb10" address="unix:///run/containerd/s/3b0d51d4ec6f51444caf9b0c3d114b72a91e09e1fa8a2a1c8282e35db60a0e87" protocol=ttrpc version=3 Sep 9 00:39:15.583008 systemd[1]: Started cri-containerd-b04c4f7eed6afa95483c2eaba81a7c70689dd747f33bf0f1cbd6d73c9bc2cb10.scope - libcontainer container b04c4f7eed6afa95483c2eaba81a7c70689dd747f33bf0f1cbd6d73c9bc2cb10. Sep 9 00:39:15.645472 containerd[1580]: time="2025-09-09T00:39:15.645384015Z" level=info msg="StartContainer for \"b04c4f7eed6afa95483c2eaba81a7c70689dd747f33bf0f1cbd6d73c9bc2cb10\" returns successfully" Sep 9 00:39:15.987506 kubelet[2733]: I0909 00:39:15.987453 2733 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:39:16.078267 kubelet[2733]: I0909 00:39:16.078222 2733 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:39:16.345278 kubelet[2733]: I0909 00:39:16.345187 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5g5jz" podStartSLOduration=27.235779807 podStartE2EDuration="32.34516953s" podCreationTimestamp="2025-09-09 00:38:44 +0000 UTC" firstStartedPulling="2025-09-09 00:39:10.386058663 +0000 UTC m=+46.563946987" lastFinishedPulling="2025-09-09 00:39:15.495448376 +0000 UTC m=+51.673336710" observedRunningTime="2025-09-09 00:39:16.342943937 +0000 UTC m=+52.520832281" watchObservedRunningTime="2025-09-09 00:39:16.34516953 +0000 UTC m=+52.523057874" Sep 9 00:39:16.450586 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:59714.service - OpenSSH per-connection server daemon (10.0.0.1:59714). Sep 9 00:39:16.518338 sshd[4712]: Accepted publickey for core from 10.0.0.1 port 59714 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:39:16.520092 sshd-session[4712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:39:16.525954 systemd-logind[1516]: New session 10 of user core. Sep 9 00:39:16.539159 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 9 00:39:16.896751 sshd[4715]: Connection closed by 10.0.0.1 port 59714 Sep 9 00:39:16.899638 sshd-session[4712]: pam_unix(sshd:session): session closed for user core Sep 9 00:39:16.905113 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:59714.service: Deactivated successfully. Sep 9 00:39:16.909879 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:39:16.913514 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:39:16.917185 systemd-logind[1516]: Removed session 10. Sep 9 00:39:18.460910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796471132.mount: Deactivated successfully. Sep 9 00:39:18.485288 containerd[1580]: time="2025-09-09T00:39:18.485227139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:18.486048 containerd[1580]: time="2025-09-09T00:39:18.485982047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 00:39:18.487201 containerd[1580]: time="2025-09-09T00:39:18.487172585Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:18.498989 containerd[1580]: time="2025-09-09T00:39:18.489466907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:18.499221 containerd[1580]: time="2025-09-09T00:39:18.490111077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.994465951s" Sep 9 00:39:18.499221 containerd[1580]: time="2025-09-09T00:39:18.499126089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 00:39:18.506351 containerd[1580]: time="2025-09-09T00:39:18.505945644Z" level=info msg="CreateContainer within sandbox \"66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:39:18.515344 containerd[1580]: time="2025-09-09T00:39:18.515259847Z" level=info msg="Container c8dabc35c37fa2c171d984ea80412ceb8e048ecaff7e581e925c5c72015fc723: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:39:18.526549 containerd[1580]: time="2025-09-09T00:39:18.526487437Z" level=info msg="CreateContainer within sandbox \"66caa4e2394516962e77687a746790bd2f43b5c2a2ede2bda285a44bc01f19a6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c8dabc35c37fa2c171d984ea80412ceb8e048ecaff7e581e925c5c72015fc723\"" Sep 9 00:39:18.527476 containerd[1580]: time="2025-09-09T00:39:18.527242887Z" level=info msg="StartContainer for \"c8dabc35c37fa2c171d984ea80412ceb8e048ecaff7e581e925c5c72015fc723\"" Sep 9 00:39:18.528973 containerd[1580]: time="2025-09-09T00:39:18.528924878Z" level=info msg="connecting to shim c8dabc35c37fa2c171d984ea80412ceb8e048ecaff7e581e925c5c72015fc723" 
address="unix:///run/containerd/s/42dc1250328069bdbdfddca77f62fc56b1527275fdf79a75f263465f5d25e7c9" protocol=ttrpc version=3 Sep 9 00:39:18.552964 systemd[1]: Started cri-containerd-c8dabc35c37fa2c171d984ea80412ceb8e048ecaff7e581e925c5c72015fc723.scope - libcontainer container c8dabc35c37fa2c171d984ea80412ceb8e048ecaff7e581e925c5c72015fc723. Sep 9 00:39:18.610292 containerd[1580]: time="2025-09-09T00:39:18.610250218Z" level=info msg="StartContainer for \"c8dabc35c37fa2c171d984ea80412ceb8e048ecaff7e581e925c5c72015fc723\" returns successfully" Sep 9 00:39:20.925154 containerd[1580]: time="2025-09-09T00:39:20.925084646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-hjwdq,Uid:f1786461-4df1-4450-84a8-8898a47476b6,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:39:20.925725 containerd[1580]: time="2025-09-09T00:39:20.925419776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c56b7dd9f-z7qmz,Uid:e12878cb-06d1-48dd-b8a4-7a5373ea5ca3,Namespace:calico-system,Attempt:0,}" Sep 9 00:39:21.366468 systemd-networkd[1476]: cali58890cb4b84: Link UP Sep 9 00:39:21.367443 systemd-networkd[1476]: cali58890cb4b84: Gained carrier Sep 9 00:39:21.380395 kubelet[2733]: I0909 00:39:21.380297 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-698765db9-qt98n" podStartSLOduration=3.593331474 podStartE2EDuration="10.380274628s" podCreationTimestamp="2025-09-09 00:39:11 +0000 UTC" firstStartedPulling="2025-09-09 00:39:11.713194365 +0000 UTC m=+47.891082689" lastFinishedPulling="2025-09-09 00:39:18.500137509 +0000 UTC m=+54.678025843" observedRunningTime="2025-09-09 00:39:19.105836807 +0000 UTC m=+55.283725141" watchObservedRunningTime="2025-09-09 00:39:21.380274628 +0000 UTC m=+57.558162962" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.280 [INFO][4790] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0 calico-kube-controllers-5c56b7dd9f- calico-system e12878cb-06d1-48dd-b8a4-7a5373ea5ca3 837 0 2025-09-09 00:38:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c56b7dd9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5c56b7dd9f-z7qmz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali58890cb4b84 [] [] }} ContainerID="4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" Namespace="calico-system" Pod="calico-kube-controllers-5c56b7dd9f-z7qmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.281 [INFO][4790] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" Namespace="calico-system" Pod="calico-kube-controllers-5c56b7dd9f-z7qmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.319 [INFO][4810] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" HandleID="k8s-pod-network.4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" 
Workload="localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.320 [INFO][4810] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" HandleID="k8s-pod-network.4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" Workload="localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7810), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5c56b7dd9f-z7qmz", "timestamp":"2025-09-09 00:39:21.319717161 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.320 [INFO][4810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.320 [INFO][4810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.320 [INFO][4810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.329 [INFO][4810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" host="localhost" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.336 [INFO][4810] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.341 [INFO][4810] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.343 [INFO][4810] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.345 [INFO][4810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.345 [INFO][4810] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" host="localhost" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.347 [INFO][4810] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152 Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.351 [INFO][4810] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" host="localhost" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.358 [INFO][4810] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" host="localhost" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.358 [INFO][4810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" host="localhost" Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.358 
[INFO][4810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:39:21.386152 containerd[1580]: 2025-09-09 00:39:21.358 [INFO][4810] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" HandleID="k8s-pod-network.4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" Workload="localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0" Sep 9 00:39:21.386734 containerd[1580]: 2025-09-09 00:39:21.362 [INFO][4790] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" Namespace="calico-system" Pod="calico-kube-controllers-5c56b7dd9f-z7qmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0", GenerateName:"calico-kube-controllers-5c56b7dd9f-", Namespace:"calico-system", SelfLink:"", UID:"e12878cb-06d1-48dd-b8a4-7a5373ea5ca3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c56b7dd9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5c56b7dd9f-z7qmz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58890cb4b84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:21.386734 containerd[1580]: 2025-09-09 00:39:21.362 [INFO][4790] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" Namespace="calico-system" Pod="calico-kube-controllers-5c56b7dd9f-z7qmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0" Sep 9 00:39:21.386734 containerd[1580]: 2025-09-09 00:39:21.362 [INFO][4790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58890cb4b84 ContainerID="4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" Namespace="calico-system" Pod="calico-kube-controllers-5c56b7dd9f-z7qmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0" Sep 9 00:39:21.386734 containerd[1580]: 2025-09-09 00:39:21.366 [INFO][4790] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" Namespace="calico-system" Pod="calico-kube-controllers-5c56b7dd9f-z7qmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0" Sep 9 00:39:21.386734 containerd[1580]: 2025-09-09 
00:39:21.367 [INFO][4790] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" Namespace="calico-system" Pod="calico-kube-controllers-5c56b7dd9f-z7qmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0", GenerateName:"calico-kube-controllers-5c56b7dd9f-", Namespace:"calico-system", SelfLink:"", UID:"e12878cb-06d1-48dd-b8a4-7a5373ea5ca3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c56b7dd9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152", Pod:"calico-kube-controllers-5c56b7dd9f-z7qmz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58890cb4b84", MAC:"8e:2e:7f:3c:18:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:21.386734 containerd[1580]: 2025-09-09 00:39:21.381 [INFO][4790] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" Namespace="calico-system" Pod="calico-kube-controllers-5c56b7dd9f-z7qmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c56b7dd9f--z7qmz-eth0" Sep 9 00:39:21.418962 containerd[1580]: time="2025-09-09T00:39:21.418855230Z" level=info msg="connecting to shim 4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152" address="unix:///run/containerd/s/c76b39b2df8bde7adb0621ab237823ff086b37c7208b64a1f121ae9b69c854cb" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:39:21.459142 systemd[1]: Started cri-containerd-4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152.scope - libcontainer container 4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152. 
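Entries in this journal interleave several units, and the Calico CNI lines embed their own "[INFO][pid] file.go line:" prefix inside the containerd message. A small sketch for pulling the IPAM "Successfully claimed IPs" events back out of text in exactly this shape (the regex is tailored to this log format and nothing more):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the Calico IPAM claim lines embedded in the containerd entries, e.g.
//   [INFO][4810] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] ... host="localhost"
var claimRE = regexp.MustCompile(
	`\[INFO\]\[(\d+)\] ipam/ipam\.go \d+: Successfully claimed IPs: \[([\d./]+)\].*host="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := claimRE.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("pid=%s ip=%s host=%s\n", m[1], m[2], m[3])
		}
	}
}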
Sep 9 00:39:21.470426 systemd-networkd[1476]: cali3295a03ed0d: Link UP Sep 9 00:39:21.473522 systemd-networkd[1476]: cali3295a03ed0d: Gained carrier Sep 9 00:39:21.479487 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.280 [INFO][4779] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0 calico-apiserver-6597668795- calico-apiserver f1786461-4df1-4450-84a8-8898a47476b6 836 0 2025-09-09 00:38:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6597668795 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6597668795-hjwdq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3295a03ed0d [] [] }} ContainerID="af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-hjwdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--hjwdq-" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.281 [INFO][4779] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-hjwdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.320 [INFO][4811] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" HandleID="k8s-pod-network.af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" Workload="localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.320 [INFO][4811] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" HandleID="k8s-pod-network.af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" Workload="localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aeaf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6597668795-hjwdq", "timestamp":"2025-09-09 00:39:21.320798613 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.321 [INFO][4811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.358 [INFO][4811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.358 [INFO][4811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.431 [INFO][4811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" host="localhost" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.437 [INFO][4811] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.441 [INFO][4811] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.443 [INFO][4811] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.445 [INFO][4811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.445 [INFO][4811] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" host="localhost" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.447 [INFO][4811] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315 Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.452 [INFO][4811] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" host="localhost" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.460 [INFO][4811] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" host="localhost" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.460 [INFO][4811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" host="localhost" Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.460 [INFO][4811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
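The IPAM walk above (look up the host's affinities, load block 192.168.88.128/26, assign .133) boils down to a scan for the first free address inside a /26 the host holds an affinity for. The core arithmetic, reduced to a toy allocator; the real Calico block also records handles and attributes in the datastore, which this sketch ignores:

package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first unallocated address in the block.
// A /26 holds 64 addresses, so a simple set works as the bitmap.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // block from the log
	used := map[netip.Addr]bool{}
	// .130, .131, .132 were assigned in the entries above; .128 and .129 are
	// assumed taken by allocations made before this part of the log.
	for _, s := range []string{
		"192.168.88.128", "192.168.88.129", "192.168.88.130",
		"192.168.88.131", "192.168.88.132",
	} {
		used[netip.MustParseAddr(s)] = true
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println(a) // 192.168.88.133, matching the log
	}
}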
Sep 9 00:39:21.491994 containerd[1580]: 2025-09-09 00:39:21.460 [INFO][4811] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" HandleID="k8s-pod-network.af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" Workload="localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0" Sep 9 00:39:21.492898 containerd[1580]: 2025-09-09 00:39:21.464 [INFO][4779] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-hjwdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0", GenerateName:"calico-apiserver-6597668795-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1786461-4df1-4450-84a8-8898a47476b6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6597668795", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6597668795-hjwdq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3295a03ed0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:21.492898 containerd[1580]: 2025-09-09 00:39:21.464 [INFO][4779] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-hjwdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0" Sep 9 00:39:21.492898 containerd[1580]: 2025-09-09 00:39:21.464 [INFO][4779] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3295a03ed0d ContainerID="af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-hjwdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0" Sep 9 00:39:21.492898 containerd[1580]: 2025-09-09 00:39:21.471 [INFO][4779] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-hjwdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0" Sep 9 00:39:21.492898 containerd[1580]: 2025-09-09 00:39:21.471 [INFO][4779] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-hjwdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0", GenerateName:"calico-apiserver-6597668795-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1786461-4df1-4450-84a8-8898a47476b6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6597668795", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315", Pod:"calico-apiserver-6597668795-hjwdq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3295a03ed0d", MAC:"1a:73:81:24:f5:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:21.492898 containerd[1580]: 2025-09-09 00:39:21.487 [INFO][4779] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-hjwdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--hjwdq-eth0" Sep 9 00:39:21.520109 containerd[1580]: time="2025-09-09T00:39:21.520048491Z" level=info msg="connecting to shim af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315" address="unix:///run/containerd/s/35b5d1160e5abcce382ee5ed587d75ee2b7f9fac252b659864483790a4c6f351" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:39:21.523493 containerd[1580]: time="2025-09-09T00:39:21.523461223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c56b7dd9f-z7qmz,Uid:e12878cb-06d1-48dd-b8a4-7a5373ea5ca3,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152\"" Sep 9 00:39:21.530099 containerd[1580]: time="2025-09-09T00:39:21.530031438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:39:21.550269 systemd[1]: Started cri-containerd-af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315.scope - libcontainer container af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315. Sep 9 00:39:21.569692 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:39:21.915298 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:43198.service - OpenSSH per-connection server daemon (10.0.0.1:43198). 
Sep 9 00:39:21.925374 containerd[1580]: time="2025-09-09T00:39:21.925313077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-jj7hn,Uid:ecaf2959-9a08-4e00-948d-967e50257a25,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:39:22.145597 sshd[4943]: Accepted publickey for core from 10.0.0.1 port 43198 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:39:22.147220 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:39:22.151885 systemd-logind[1516]: New session 11 of user core. Sep 9 00:39:22.162903 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:39:22.275613 containerd[1580]: time="2025-09-09T00:39:22.274922646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-hjwdq,Uid:f1786461-4df1-4450-84a8-8898a47476b6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315\"" Sep 9 00:39:22.434523 sshd[4946]: Connection closed by 10.0.0.1 port 43198 Sep 9 00:39:22.434908 sshd-session[4943]: pam_unix(sshd:session): session closed for user core Sep 9 00:39:22.443382 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:43198.service: Deactivated successfully. Sep 9 00:39:22.445151 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:39:22.445945 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:39:22.448546 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:43210.service - OpenSSH per-connection server daemon (10.0.0.1:43210). Sep 9 00:39:22.449179 systemd-logind[1516]: Removed session 11. Sep 9 00:39:22.497493 sshd[4960]: Accepted publickey for core from 10.0.0.1 port 43210 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:39:22.499314 sshd-session[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:39:22.503724 systemd-logind[1516]: New session 12 of user core. Sep 9 00:39:22.512886 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:39:22.732749 sshd[4963]: Connection closed by 10.0.0.1 port 43210 Sep 9 00:39:22.733428 sshd-session[4960]: pam_unix(sshd:session): session closed for user core Sep 9 00:39:22.748778 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:43210.service: Deactivated successfully. Sep 9 00:39:22.751267 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:39:22.755614 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:39:22.759337 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:43218.service - OpenSSH per-connection server daemon (10.0.0.1:43218). Sep 9 00:39:22.761158 systemd-logind[1516]: Removed session 12. 
Sep 9 00:39:22.794838 systemd-networkd[1476]: calia1ad1cc71e2: Link UP Sep 9 00:39:22.796216 systemd-networkd[1476]: calia1ad1cc71e2: Gained carrier Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.689 [INFO][4971] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0 calico-apiserver-6597668795- calico-apiserver ecaf2959-9a08-4e00-948d-967e50257a25 834 0 2025-09-09 00:38:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6597668795 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6597668795-jj7hn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia1ad1cc71e2 [] [] }} ContainerID="33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-jj7hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--jj7hn-" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.689 [INFO][4971] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-jj7hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.727 [INFO][4986] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" HandleID="k8s-pod-network.33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" Workload="localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.727 [INFO][4986] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" HandleID="k8s-pod-network.33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" Workload="localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6597668795-jj7hn", "timestamp":"2025-09-09 00:39:22.726992221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.727 [INFO][4986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.727 [INFO][4986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.727 [INFO][4986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.737 [INFO][4986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" host="localhost" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.754 [INFO][4986] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.765 [INFO][4986] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.768 [INFO][4986] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.771 [INFO][4986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.771 [INFO][4986] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" host="localhost" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.773 [INFO][4986] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11 Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.781 [INFO][4986] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" host="localhost" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.787 [INFO][4986] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" host="localhost" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.787 [INFO][4986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" host="localhost" Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.787 [INFO][4986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:39:22.815704 containerd[1580]: 2025-09-09 00:39:22.787 [INFO][4986] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" HandleID="k8s-pod-network.33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" Workload="localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0" Sep 9 00:39:22.816854 containerd[1580]: 2025-09-09 00:39:22.790 [INFO][4971] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-jj7hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0", GenerateName:"calico-apiserver-6597668795-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecaf2959-9a08-4e00-948d-967e50257a25", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6597668795", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6597668795-jj7hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1ad1cc71e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:22.816854 containerd[1580]: 2025-09-09 00:39:22.790 [INFO][4971] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-jj7hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0" Sep 9 00:39:22.816854 containerd[1580]: 2025-09-09 00:39:22.790 [INFO][4971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1ad1cc71e2 ContainerID="33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-jj7hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0" Sep 9 00:39:22.816854 containerd[1580]: 2025-09-09 00:39:22.797 [INFO][4971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-jj7hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0" Sep 9 00:39:22.816854 containerd[1580]: 2025-09-09 00:39:22.797 [INFO][4971] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-jj7hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0", GenerateName:"calico-apiserver-6597668795-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecaf2959-9a08-4e00-948d-967e50257a25", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6597668795", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11", Pod:"calico-apiserver-6597668795-jj7hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1ad1cc71e2", MAC:"9e:79:81:a5:25:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:22.816854 containerd[1580]: 2025-09-09 00:39:22.811 [INFO][4971] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" Namespace="calico-apiserver" Pod="calico-apiserver-6597668795-jj7hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6597668795--jj7hn-eth0" Sep 9 00:39:22.819492 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 43218 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:39:22.821787 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:39:22.826664 systemd-logind[1516]: New session 13 of user core. Sep 9 00:39:22.833971 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:39:22.866481 containerd[1580]: time="2025-09-09T00:39:22.865907198Z" level=info msg="connecting to shim 33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11" address="unix:///run/containerd/s/ba3f20a1195829e64e43dc9b61588fccc59e6f43241db887b0a1dadab3b0ec34" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:39:22.898015 systemd[1]: Started cri-containerd-33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11.scope - libcontainer container 33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11.
Sep 9 00:39:22.914060 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:39:22.956831 containerd[1580]: time="2025-09-09T00:39:22.956734704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6597668795-jj7hn,Uid:ecaf2959-9a08-4e00-948d-967e50257a25,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11\"" Sep 9 00:39:22.967959 sshd[5010]: Connection closed by 10.0.0.1 port 43218 Sep 9 00:39:22.969014 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Sep 9 00:39:22.974606 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:43218.service: Deactivated successfully. Sep 9 00:39:22.976902 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:39:22.977717 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:39:22.978906 systemd-logind[1516]: Removed session 13. Sep 9 00:39:23.241992 systemd-networkd[1476]: cali58890cb4b84: Gained IPv6LL Sep 9 00:39:23.306013 systemd-networkd[1476]: cali3295a03ed0d: Gained IPv6LL Sep 9 00:39:23.974793 kubelet[2733]: E0909 00:39:23.974494 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:23.981943 containerd[1580]: time="2025-09-09T00:39:23.981886608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-khqs9,Uid:b21b64d9-46e3-4b3a-9c25-2a3c85894cc9,Namespace:kube-system,Attempt:0,}" Sep 9 00:39:23.984676 containerd[1580]: time="2025-09-09T00:39:23.984330359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xr2nx,Uid:11cfeb17-60a0-4315-be58-d803d500749a,Namespace:calico-system,Attempt:0,}" Sep 9 00:39:24.159030 systemd-networkd[1476]: cali519d62840bd: Link UP Sep 9 00:39:24.161454 systemd-networkd[1476]: cali519d62840bd: Gained carrier Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.049 [INFO][5085] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--xr2nx-eth0 goldmane-54d579b49d- calico-system 11cfeb17-60a0-4315-be58-d803d500749a 840 0 2025-09-09 00:38:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-xr2nx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali519d62840bd [] [] }} ContainerID="9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" Namespace="calico-system" Pod="goldmane-54d579b49d-xr2nx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xr2nx-" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.049 [INFO][5085] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" Namespace="calico-system" Pod="goldmane-54d579b49d-xr2nx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xr2nx-eth0" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.095 [INFO][5107] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" HandleID="k8s-pod-network.9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" Workload="localhost-k8s-goldmane--54d579b49d--xr2nx-eth0" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.095 [INFO][5107] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" HandleID="k8s-pod-network.9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" Workload="localhost-k8s-goldmane--54d579b49d--xr2nx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002877d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-xr2nx", "timestamp":"2025-09-09 00:39:24.095463692 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.095 [INFO][5107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.095 [INFO][5107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.095 [INFO][5107] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.106 [INFO][5107] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" host="localhost" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.118 [INFO][5107] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.125 [INFO][5107] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.128 [INFO][5107] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.131 [INFO][5107] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.131 [INFO][5107] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" host="localhost" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.133 [INFO][5107] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3 Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.140 [INFO][5107] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" host="localhost" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.149 [INFO][5107] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" host="localhost" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.149 [INFO][5107] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" host="localhost" Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.149 [INFO][5107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:39:24.187229 containerd[1580]: 2025-09-09 00:39:24.149 [INFO][5107] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" HandleID="k8s-pod-network.9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" Workload="localhost-k8s-goldmane--54d579b49d--xr2nx-eth0" Sep 9 00:39:24.188883 containerd[1580]: 2025-09-09 00:39:24.154 [INFO][5085] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" Namespace="calico-system" Pod="goldmane-54d579b49d-xr2nx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xr2nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--xr2nx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"11cfeb17-60a0-4315-be58-d803d500749a", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-xr2nx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali519d62840bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:24.188883 containerd[1580]: 2025-09-09 00:39:24.154 [INFO][5085] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" Namespace="calico-system" Pod="goldmane-54d579b49d-xr2nx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xr2nx-eth0" Sep 9 00:39:24.188883 containerd[1580]: 2025-09-09 00:39:24.154 [INFO][5085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali519d62840bd ContainerID="9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" Namespace="calico-system" Pod="goldmane-54d579b49d-xr2nx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xr2nx-eth0" Sep 9 00:39:24.188883 containerd[1580]: 2025-09-09 00:39:24.160 [INFO][5085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" Namespace="calico-system" Pod="goldmane-54d579b49d-xr2nx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xr2nx-eth0" Sep 9 00:39:24.188883 containerd[1580]: 2025-09-09 00:39:24.162 [INFO][5085] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" Namespace="calico-system" Pod="goldmane-54d579b49d-xr2nx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xr2nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--xr2nx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"11cfeb17-60a0-4315-be58-d803d500749a", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3", Pod:"goldmane-54d579b49d-xr2nx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali519d62840bd", MAC:"2e:3a:c7:6a:be:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:24.188883 containerd[1580]: 2025-09-09 00:39:24.173 [INFO][5085] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" Namespace="calico-system" Pod="goldmane-54d579b49d-xr2nx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xr2nx-eth0" Sep 9 00:39:24.458863 systemd-networkd[1476]: calia1ad1cc71e2: Gained IPv6LL Sep 9 00:39:24.500913 systemd-networkd[1476]: cali65d40b9882c: Link UP Sep 9 00:39:24.501741 systemd-networkd[1476]: cali65d40b9882c: Gained carrier Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.071 [INFO][5080] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--khqs9-eth0 coredns-674b8bbfcf- kube-system b21b64d9-46e3-4b3a-9c25-2a3c85894cc9 841 0 2025-09-09 00:38:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-khqs9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali65d40b9882c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-khqs9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--khqs9-" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.071 [INFO][5080] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-khqs9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--khqs9-eth0" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.122 [INFO][5113] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" HandleID="k8s-pod-network.a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" Workload="localhost-k8s-coredns--674b8bbfcf--khqs9-eth0" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.122 [INFO][5113] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" HandleID="k8s-pod-network.a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" Workload="localhost-k8s-coredns--674b8bbfcf--khqs9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000582ad0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-khqs9", "timestamp":"2025-09-09 00:39:24.12236883 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.123 [INFO][5113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.149 [INFO][5113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.150 [INFO][5113] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.208 [INFO][5113] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" host="localhost" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.218 [INFO][5113] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.223 [INFO][5113] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.226 [INFO][5113] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.228 [INFO][5113] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.229 [INFO][5113] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" host="localhost" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.230 [INFO][5113] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.404 [INFO][5113] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" host="localhost" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.486 [INFO][5113] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" host="localhost" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.486 [INFO][5113] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" host="localhost" Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.487 [INFO][5113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:39:24.557677 containerd[1580]: 2025-09-09 00:39:24.487 [INFO][5113] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" HandleID="k8s-pod-network.a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" Workload="localhost-k8s-coredns--674b8bbfcf--khqs9-eth0" Sep 9 00:39:24.558369 containerd[1580]: 2025-09-09 00:39:24.494 [INFO][5080] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-khqs9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--khqs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--khqs9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b21b64d9-46e3-4b3a-9c25-2a3c85894cc9", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-khqs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65d40b9882c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:24.558369 containerd[1580]: 2025-09-09 00:39:24.494 [INFO][5080] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-khqs9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--khqs9-eth0" Sep 9 00:39:24.558369 containerd[1580]: 2025-09-09 00:39:24.494 [INFO][5080] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65d40b9882c ContainerID="a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-khqs9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--khqs9-eth0" Sep 9 00:39:24.558369 containerd[1580]: 2025-09-09 00:39:24.502 [INFO][5080] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-khqs9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--khqs9-eth0" Sep 9 00:39:24.558369 containerd[1580]: 2025-09-09 00:39:24.508 [INFO][5080] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-khqs9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--khqs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--khqs9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b21b64d9-46e3-4b3a-9c25-2a3c85894cc9", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 38, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff", Pod:"coredns-674b8bbfcf-khqs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65d40b9882c", MAC:"c6:85:9a:2a:63:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:39:24.558369 containerd[1580]: 2025-09-09 00:39:24.551 [INFO][5080] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-khqs9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--khqs9-eth0" Sep 9 00:39:25.225969 systemd-networkd[1476]: cali519d62840bd: Gained IPv6LL Sep 9 00:39:26.505946 systemd-networkd[1476]: cali65d40b9882c: Gained IPv6LL Sep 9 00:39:27.991973 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:43222.service - OpenSSH per-connection server daemon (10.0.0.1:43222). Sep 9 00:39:28.065387 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 43222 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:39:28.067334 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:39:28.071780 systemd-logind[1516]: New session 14 of user core. Sep 9 00:39:28.082898 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 00:39:28.208053 sshd[5148]: Connection closed by 10.0.0.1 port 43222 Sep 9 00:39:28.208401 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Sep 9 00:39:28.213081 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:43222.service: Deactivated successfully. Sep 9 00:39:28.215265 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:39:28.216088 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:39:28.217655 systemd-logind[1516]: Removed session 14. Sep 9 00:39:28.929924 containerd[1580]: time="2025-09-09T00:39:28.929868093Z" level=info msg="connecting to shim 9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3" address="unix:///run/containerd/s/53bf95378ef27ba6b0c7bc4d55f78325f9d1dc311316496bba573cf417f3e7d3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:39:28.933002 containerd[1580]: time="2025-09-09T00:39:28.932960411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:28.946999 containerd[1580]: time="2025-09-09T00:39:28.946945742Z" level=info msg="connecting to shim a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff" address="unix:///run/containerd/s/b135bba5c53e66348df9a49f3404b0a40cb991d3fadc0b199d3aca0bf3860bff" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:39:28.950497 containerd[1580]: time="2025-09-09T00:39:28.950432462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 00:39:28.989960 systemd[1]: Started cri-containerd-9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3.scope - libcontainer container 9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3. Sep 9 00:39:28.992010 systemd[1]: Started cri-containerd-a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff.scope - libcontainer container a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff. 
Sep 9 00:39:29.006374 containerd[1580]: time="2025-09-09T00:39:29.006335293Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:29.012916 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:39:29.013396 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:39:29.014876 containerd[1580]: time="2025-09-09T00:39:29.014268836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:29.015088 containerd[1580]: time="2025-09-09T00:39:29.015041787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 7.484964193s" Sep 9 00:39:29.015287 containerd[1580]: time="2025-09-09T00:39:29.015261921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 00:39:29.021657 containerd[1580]: time="2025-09-09T00:39:29.021605617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:39:29.169182 containerd[1580]: time="2025-09-09T00:39:29.169119587Z" level=info msg="CreateContainer within sandbox \"4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:39:29.171603 containerd[1580]: time="2025-09-09T00:39:29.171035446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xr2nx,Uid:11cfeb17-60a0-4315-be58-d803d500749a,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3\"" Sep 9 00:39:29.173663 containerd[1580]: time="2025-09-09T00:39:29.173637534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-khqs9,Uid:b21b64d9-46e3-4b3a-9c25-2a3c85894cc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff\"" Sep 9 00:39:29.174381 kubelet[2733]: E0909 00:39:29.174347 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:29.179726 containerd[1580]: time="2025-09-09T00:39:29.179692668Z" level=info msg="CreateContainer within sandbox \"a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:39:29.189114 containerd[1580]: time="2025-09-09T00:39:29.188989772Z" level=info msg="Container 14c60bf71358d7ffa85a2e6398ef1bcbff51098a6f9c27200a8888674c14eb91: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:39:29.196745 containerd[1580]: time="2025-09-09T00:39:29.196713990Z" level=info msg="Container 6167e567adc2d833b2fe06150029830e0d81d056e7f9ab5388c931d386aeeafb: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:39:29.199319 containerd[1580]: time="2025-09-09T00:39:29.199287545Z" level=info msg="CreateContainer within sandbox \"4a356e7013a62d0f3f028a26f1bb9eafda2f182954102aaf938c5ddf3afd6152\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"14c60bf71358d7ffa85a2e6398ef1bcbff51098a6f9c27200a8888674c14eb91\"" Sep 9 00:39:29.199712 containerd[1580]: time="2025-09-09T00:39:29.199689369Z" level=info msg="StartContainer for \"14c60bf71358d7ffa85a2e6398ef1bcbff51098a6f9c27200a8888674c14eb91\"" Sep 9 00:39:29.200685 containerd[1580]: time="2025-09-09T00:39:29.200663700Z" level=info msg="connecting to shim 14c60bf71358d7ffa85a2e6398ef1bcbff51098a6f9c27200a8888674c14eb91" address="unix:///run/containerd/s/c76b39b2df8bde7adb0621ab237823ff086b37c7208b64a1f121ae9b69c854cb" protocol=ttrpc version=3 Sep 9 00:39:29.216464 containerd[1580]: time="2025-09-09T00:39:29.216399408Z" level=info msg="CreateContainer within sandbox \"a3b5e8566447543397e9ce29fb1a639f7d82edf9cce8bd4faf9f93af16ec47ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6167e567adc2d833b2fe06150029830e0d81d056e7f9ab5388c931d386aeeafb\"" Sep 9 00:39:29.216817 containerd[1580]: time="2025-09-09T00:39:29.216794289Z" level=info msg="StartContainer for \"6167e567adc2d833b2fe06150029830e0d81d056e7f9ab5388c931d386aeeafb\"" Sep 9 00:39:29.217591 containerd[1580]: time="2025-09-09T00:39:29.217544078Z" level=info msg="connecting to shim 6167e567adc2d833b2fe06150029830e0d81d056e7f9ab5388c931d386aeeafb" address="unix:///run/containerd/s/b135bba5c53e66348df9a49f3404b0a40cb991d3fadc0b199d3aca0bf3860bff" protocol=ttrpc version=3 Sep 9 00:39:29.222992 systemd[1]: Started cri-containerd-14c60bf71358d7ffa85a2e6398ef1bcbff51098a6f9c27200a8888674c14eb91.scope - libcontainer container 14c60bf71358d7ffa85a2e6398ef1bcbff51098a6f9c27200a8888674c14eb91. Sep 9 00:39:29.242909 systemd[1]: Started cri-containerd-6167e567adc2d833b2fe06150029830e0d81d056e7f9ab5388c931d386aeeafb.scope - libcontainer container 6167e567adc2d833b2fe06150029830e0d81d056e7f9ab5388c931d386aeeafb.
Sep 9 00:39:29.275157 containerd[1580]: time="2025-09-09T00:39:29.275100351Z" level=info msg="StartContainer for \"6167e567adc2d833b2fe06150029830e0d81d056e7f9ab5388c931d386aeeafb\" returns successfully" Sep 9 00:39:29.292876 containerd[1580]: time="2025-09-09T00:39:29.292819044Z" level=info msg="StartContainer for \"14c60bf71358d7ffa85a2e6398ef1bcbff51098a6f9c27200a8888674c14eb91\" returns successfully" Sep 9 00:39:30.180455 kubelet[2733]: E0909 00:39:30.180323 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:30.198732 kubelet[2733]: I0909 00:39:30.198459 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c56b7dd9f-z7qmz" podStartSLOduration=38.706388646 podStartE2EDuration="46.198445312s" podCreationTimestamp="2025-09-09 00:38:44 +0000 UTC" firstStartedPulling="2025-09-09 00:39:21.529436911 +0000 UTC m=+57.707325245" lastFinishedPulling="2025-09-09 00:39:29.021493577 +0000 UTC m=+65.199381911" observedRunningTime="2025-09-09 00:39:30.197960671 +0000 UTC m=+66.375849005" watchObservedRunningTime="2025-09-09 00:39:30.198445312 +0000 UTC m=+66.376333646" Sep 9 00:39:30.229879 containerd[1580]: time="2025-09-09T00:39:30.229818090Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14c60bf71358d7ffa85a2e6398ef1bcbff51098a6f9c27200a8888674c14eb91\" id:\"192286cb89af786937058c58a90d1bdf859e7dcc4b14c3ee0a8756c6057efef1\" pid:5348 exited_at:{seconds:1757378370 nanos:229298725}" Sep 9 00:39:30.245445 kubelet[2733]: I0909 00:39:30.245355 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-khqs9" podStartSLOduration=60.245339485 podStartE2EDuration="1m0.245339485s" podCreationTimestamp="2025-09-09 00:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:39:30.210344121 +0000 UTC m=+66.388232475" watchObservedRunningTime="2025-09-09 00:39:30.245339485 +0000 UTC m=+66.423227819" Sep 9 00:39:31.182440 kubelet[2733]: E0909 00:39:31.182393 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:32.183871 kubelet[2733]: E0909 00:39:32.183836 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:32.235064 containerd[1580]: time="2025-09-09T00:39:32.235003044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:32.235675 containerd[1580]: time="2025-09-09T00:39:32.235640110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 00:39:32.236742 containerd[1580]: time="2025-09-09T00:39:32.236707695Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:32.238802 containerd[1580]: time="2025-09-09T00:39:32.238770970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:32.239366 containerd[1580]: time="2025-09-09T00:39:32.239331313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.217692163s" Sep 9 00:39:32.239410 containerd[1580]: time="2025-09-09T00:39:32.239370366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:39:32.240245 containerd[1580]: time="2025-09-09T00:39:32.240216615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:39:32.244195 containerd[1580]: time="2025-09-09T00:39:32.244156276Z" level=info msg="CreateContainer within sandbox \"af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:39:32.253801 containerd[1580]: time="2025-09-09T00:39:32.252525294Z" level=info msg="Container 8d1b91dcba206858927e0e58774316cd7733388733ec91227d8908bd34e57cc6: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:39:32.260003 containerd[1580]: time="2025-09-09T00:39:32.259969576Z" level=info msg="CreateContainer within sandbox \"af8fb9238767e8a66338c20ffae64be7ad515544736e528e968abe1831711315\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d1b91dcba206858927e0e58774316cd7733388733ec91227d8908bd34e57cc6\"" Sep 9 00:39:32.260394 containerd[1580]: time="2025-09-09T00:39:32.260370759Z" level=info msg="StartContainer for \"8d1b91dcba206858927e0e58774316cd7733388733ec91227d8908bd34e57cc6\"" Sep 9 00:39:32.261349 containerd[1580]: time="2025-09-09T00:39:32.261321455Z" level=info msg="connecting to shim 8d1b91dcba206858927e0e58774316cd7733388733ec91227d8908bd34e57cc6" address="unix:///run/containerd/s/35b5d1160e5abcce382ee5ed587d75ee2b7f9fac252b659864483790a4c6f351" protocol=ttrpc version=3 Sep 9 00:39:32.281918 systemd[1]: Started cri-containerd-8d1b91dcba206858927e0e58774316cd7733388733ec91227d8908bd34e57cc6.scope - libcontainer container 8d1b91dcba206858927e0e58774316cd7733388733ec91227d8908bd34e57cc6.
Sep 9 00:39:32.370942 containerd[1580]: time="2025-09-09T00:39:32.370891852Z" level=info msg="StartContainer for \"8d1b91dcba206858927e0e58774316cd7733388733ec91227d8908bd34e57cc6\" returns successfully" Sep 9 00:39:32.700786 containerd[1580]: time="2025-09-09T00:39:32.700503163Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:39:32.702700 containerd[1580]: time="2025-09-09T00:39:32.702651138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:39:32.704138 containerd[1580]: time="2025-09-09T00:39:32.704107944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 463.862194ms" Sep 9 00:39:32.704138 containerd[1580]: time="2025-09-09T00:39:32.704137419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:39:32.705972 containerd[1580]: time="2025-09-09T00:39:32.705946808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:39:32.709378 containerd[1580]: time="2025-09-09T00:39:32.709310135Z" level=info msg="CreateContainer within sandbox \"33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:39:32.720719 containerd[1580]: time="2025-09-09T00:39:32.718644266Z" level=info msg="Container e3aae632d2eb605d2e93d1e499e3b49da396a445e70a8210fa89888fa9f3b384: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:39:32.727857 containerd[1580]: time="2025-09-09T00:39:32.727822004Z" level=info msg="CreateContainer within sandbox \"33a371ac1ee0e6c5cb268980c2365bd9452f26efe4f7a969bb47511def5a9d11\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e3aae632d2eb605d2e93d1e499e3b49da396a445e70a8210fa89888fa9f3b384\"" Sep 9 00:39:32.728835 containerd[1580]: time="2025-09-09T00:39:32.728560431Z" level=info msg="StartContainer for \"e3aae632d2eb605d2e93d1e499e3b49da396a445e70a8210fa89888fa9f3b384\"" Sep 9 00:39:32.729979 containerd[1580]: time="2025-09-09T00:39:32.729930254Z" level=info msg="connecting to shim e3aae632d2eb605d2e93d1e499e3b49da396a445e70a8210fa89888fa9f3b384" address="unix:///run/containerd/s/ba3f20a1195829e64e43dc9b61588fccc59e6f43241db887b0a1dadab3b0ec34" protocol=ttrpc version=3 Sep 9 00:39:32.756015 systemd[1]: Started cri-containerd-e3aae632d2eb605d2e93d1e499e3b49da396a445e70a8210fa89888fa9f3b384.scope - libcontainer container e3aae632d2eb605d2e93d1e499e3b49da396a445e70a8210fa89888fa9f3b384. 
Sep 9 00:39:32.822188 containerd[1580]: time="2025-09-09T00:39:32.822141176Z" level=info msg="StartContainer for \"e3aae632d2eb605d2e93d1e499e3b49da396a445e70a8210fa89888fa9f3b384\" returns successfully" Sep 9 00:39:33.211910 kubelet[2733]: I0909 00:39:33.211843 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6597668795-jj7hn" podStartSLOduration=44.465106186 podStartE2EDuration="54.211825195s" podCreationTimestamp="2025-09-09 00:38:39 +0000 UTC" firstStartedPulling="2025-09-09 00:39:22.958305176 +0000 UTC m=+59.136193510" lastFinishedPulling="2025-09-09 00:39:32.705024185 +0000 UTC m=+68.882912519" observedRunningTime="2025-09-09 00:39:33.19799521 +0000 UTC m=+69.375883564" watchObservedRunningTime="2025-09-09 00:39:33.211825195 +0000 UTC m=+69.389713529" Sep 9 00:39:33.212410 kubelet[2733]: I0909 00:39:33.212099 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6597668795-hjwdq" podStartSLOduration=44.248811889 podStartE2EDuration="54.212095222s" podCreationTimestamp="2025-09-09 00:38:39 +0000 UTC" firstStartedPulling="2025-09-09 00:39:22.276801185 +0000 UTC m=+58.454689519" lastFinishedPulling="2025-09-09 00:39:32.240084518 +0000 UTC m=+68.417972852" observedRunningTime="2025-09-09 00:39:33.211059026 +0000 UTC m=+69.388947350" watchObservedRunningTime="2025-09-09 00:39:33.212095222 +0000 UTC m=+69.389983556" Sep 9 00:39:33.224466 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:57124.service - OpenSSH per-connection server daemon (10.0.0.1:57124). Sep 9 00:39:33.306086 sshd[5452]: Accepted publickey for core from 10.0.0.1 port 57124 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc Sep 9 00:39:33.308344 sshd-session[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:39:33.315737 systemd-logind[1516]: New session 15 of user core. Sep 9 00:39:33.321144 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:39:33.489504 sshd[5457]: Connection closed by 10.0.0.1 port 57124 Sep 9 00:39:33.489816 sshd-session[5452]: pam_unix(sshd:session): session closed for user core Sep 9 00:39:33.495156 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:57124.service: Deactivated successfully. Sep 9 00:39:33.497280 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:39:33.498216 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:39:33.499345 systemd-logind[1516]: Removed session 15. Sep 9 00:39:34.197198 kubelet[2733]: I0909 00:39:34.197142 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:39:34.887056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537913646.mount: Deactivated successfully. Sep 9 00:39:35.224527 kubelet[2733]: E0909 00:39:34.924556 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:39:35.340815 systemd-journald[1192]: Under memory pressure, flushing caches. 
Sep 9 00:39:35.814160 containerd[1580]: time="2025-09-09T00:39:35.814105065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:39:35.815089 containerd[1580]: time="2025-09-09T00:39:35.815069617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 9 00:39:35.816376 containerd[1580]: time="2025-09-09T00:39:35.816324503Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:39:35.819115 containerd[1580]: time="2025-09-09T00:39:35.819069109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:39:35.819800 containerd[1580]: time="2025-09-09T00:39:35.819745800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.113766821s"
Sep 9 00:39:35.819844 containerd[1580]: time="2025-09-09T00:39:35.819804510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 9 00:39:35.825339 containerd[1580]: time="2025-09-09T00:39:35.825295813Z" level=info msg="CreateContainer within sandbox \"9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 9 00:39:35.846554 containerd[1580]: time="2025-09-09T00:39:35.846499053Z" level=info msg="Container 13c43fc3f542203fff5963e2b0970d249340d4fddcff635afc567906bf04ada9: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:39:35.855734 containerd[1580]: time="2025-09-09T00:39:35.855696998Z" level=info msg="CreateContainer within sandbox \"9d9d5d90cf894a08e9d3552c5ba55c892bdcc3a6b7112a3bd0cd8bf0cecb91a3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"13c43fc3f542203fff5963e2b0970d249340d4fddcff635afc567906bf04ada9\""
Sep 9 00:39:35.856340 containerd[1580]: time="2025-09-09T00:39:35.856185024Z" level=info msg="StartContainer for \"13c43fc3f542203fff5963e2b0970d249340d4fddcff635afc567906bf04ada9\""
Sep 9 00:39:35.857444 containerd[1580]: time="2025-09-09T00:39:35.857412128Z" level=info msg="connecting to shim 13c43fc3f542203fff5963e2b0970d249340d4fddcff635afc567906bf04ada9" address="unix:///run/containerd/s/53bf95378ef27ba6b0c7bc4d55f78325f9d1dc311316496bba573cf417f3e7d3" protocol=ttrpc version=3
Sep 9 00:39:35.974940 systemd[1]: Started cri-containerd-13c43fc3f542203fff5963e2b0970d249340d4fddcff635afc567906bf04ada9.scope - libcontainer container 13c43fc3f542203fff5963e2b0970d249340d4fddcff635afc567906bf04ada9.
Sep 9 00:39:36.027178 containerd[1580]: time="2025-09-09T00:39:36.027131178Z" level=info msg="StartContainer for \"13c43fc3f542203fff5963e2b0970d249340d4fddcff635afc567906bf04ada9\" returns successfully"
Sep 9 00:39:36.214935 kubelet[2733]: I0909 00:39:36.214658 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-xr2nx" podStartSLOduration=46.567499145 podStartE2EDuration="53.214639749s" podCreationTimestamp="2025-09-09 00:38:43 +0000 UTC" firstStartedPulling="2025-09-09 00:39:29.17335855 +0000 UTC m=+65.351246884" lastFinishedPulling="2025-09-09 00:39:35.820499154 +0000 UTC m=+71.998387488" observedRunningTime="2025-09-09 00:39:36.214162682 +0000 UTC m=+72.392051016" watchObservedRunningTime="2025-09-09 00:39:36.214639749 +0000 UTC m=+72.392528083"
Sep 9 00:39:36.288387 containerd[1580]: time="2025-09-09T00:39:36.288333856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13c43fc3f542203fff5963e2b0970d249340d4fddcff635afc567906bf04ada9\" id:\"c9a6054d8a7244528abc0a4f8816c2e32070a839260a2c99de53a53094598211\" pid:5530 exit_status:1 exited_at:{seconds:1757378376 nanos:287874863}"
Sep 9 00:39:37.288726 containerd[1580]: time="2025-09-09T00:39:37.288675069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13c43fc3f542203fff5963e2b0970d249340d4fddcff635afc567906bf04ada9\" id:\"05070b945dfb368663a45acd37249136aaa5098608735f6e2b856e2f53675231\" pid:5555 exit_status:1 exited_at:{seconds:1757378377 nanos:288291940}"
Sep 9 00:39:38.506271 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:57138.service - OpenSSH per-connection server daemon (10.0.0.1:57138).
Sep 9 00:39:38.586791 sshd[5569]: Accepted publickey for core from 10.0.0.1 port 57138 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc
Sep 9 00:39:38.589350 sshd-session[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:39:38.594274 systemd-logind[1516]: New session 16 of user core.
Sep 9 00:39:38.606906 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 00:39:38.748425 sshd[5574]: Connection closed by 10.0.0.1 port 57138
Sep 9 00:39:38.748785 sshd-session[5569]: pam_unix(sshd:session): session closed for user core
Sep 9 00:39:38.753950 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:57138.service: Deactivated successfully.
Sep 9 00:39:38.756078 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 00:39:38.756889 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit.
Sep 9 00:39:38.758653 systemd-logind[1516]: Removed session 16.
Sep 9 00:39:39.924712 kubelet[2733]: E0909 00:39:39.924615 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:39:41.146474 containerd[1580]: time="2025-09-09T00:39:41.146417738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78eb3dd5a2d64950b4dcde20366c009c9afe71212a813b2cfad1c49266fe8f46\" id:\"5e64acfeaf3e4173ae37930dc96c1a07fc822c8b9be0d82ac61799a5fb753cf4\" pid:5600 exited_at:{seconds:1757378381 nanos:146080575}"
Sep 9 00:39:43.774081 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:57822.service - OpenSSH per-connection server daemon (10.0.0.1:57822).
Sep 9 00:39:43.832950 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 57822 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc
Sep 9 00:39:43.834673 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:39:43.839693 systemd-logind[1516]: New session 17 of user core.
Sep 9 00:39:43.847915 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 00:39:43.970650 sshd[5617]: Connection closed by 10.0.0.1 port 57822
Sep 9 00:39:43.971208 sshd-session[5614]: pam_unix(sshd:session): session closed for user core
Sep 9 00:39:43.984785 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:57822.service: Deactivated successfully.
Sep 9 00:39:43.987269 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 00:39:43.990835 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit.
Sep 9 00:39:43.993637 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:57836.service - OpenSSH per-connection server daemon (10.0.0.1:57836).
Sep 9 00:39:43.995924 systemd-logind[1516]: Removed session 17.
Sep 9 00:39:44.049873 sshd[5631]: Accepted publickey for core from 10.0.0.1 port 57836 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc
Sep 9 00:39:44.051486 sshd-session[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:39:44.056343 systemd-logind[1516]: New session 18 of user core.
Sep 9 00:39:44.066907 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 00:39:44.349301 sshd[5634]: Connection closed by 10.0.0.1 port 57836
Sep 9 00:39:44.349697 sshd-session[5631]: pam_unix(sshd:session): session closed for user core
Sep 9 00:39:44.361791 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:57836.service: Deactivated successfully.
Sep 9 00:39:44.364081 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 00:39:44.364852 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit.
Sep 9 00:39:44.368287 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:57842.service - OpenSSH per-connection server daemon (10.0.0.1:57842).
Sep 9 00:39:44.369030 systemd-logind[1516]: Removed session 18.
Sep 9 00:39:44.427755 sshd[5647]: Accepted publickey for core from 10.0.0.1 port 57842 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc
Sep 9 00:39:44.429556 sshd-session[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:39:44.434165 systemd-logind[1516]: New session 19 of user core.
Sep 9 00:39:44.447902 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 00:39:45.173654 sshd[5650]: Connection closed by 10.0.0.1 port 57842
Sep 9 00:39:45.173817 sshd-session[5647]: pam_unix(sshd:session): session closed for user core
Sep 9 00:39:45.188991 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:57848.service - OpenSSH per-connection server daemon (10.0.0.1:57848).
Sep 9 00:39:45.194371 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:57842.service: Deactivated successfully.
Sep 9 00:39:45.199720 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 00:39:45.208029 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit.
Sep 9 00:39:45.211903 systemd-logind[1516]: Removed session 19.
Sep 9 00:39:45.258483 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 57848 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc
Sep 9 00:39:45.260360 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:39:45.267238 systemd-logind[1516]: New session 20 of user core.
Sep 9 00:39:45.273904 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 00:39:45.616675 sshd[5674]: Connection closed by 10.0.0.1 port 57848
Sep 9 00:39:45.617187 sshd-session[5664]: pam_unix(sshd:session): session closed for user core
Sep 9 00:39:45.630956 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:57848.service: Deactivated successfully.
Sep 9 00:39:45.633513 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 00:39:45.635325 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit.
Sep 9 00:39:45.639905 systemd[1]: Started sshd@20-10.0.0.118:22-10.0.0.1:57852.service - OpenSSH per-connection server daemon (10.0.0.1:57852).
Sep 9 00:39:45.641211 systemd-logind[1516]: Removed session 20.
Sep 9 00:39:45.701083 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 57852 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc
Sep 9 00:39:45.702780 sshd-session[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:39:45.707726 systemd-logind[1516]: New session 21 of user core.
Sep 9 00:39:45.718941 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 00:39:45.874078 sshd[5689]: Connection closed by 10.0.0.1 port 57852
Sep 9 00:39:45.874259 sshd-session[5686]: pam_unix(sshd:session): session closed for user core
Sep 9 00:39:45.880562 systemd[1]: sshd@20-10.0.0.118:22-10.0.0.1:57852.service: Deactivated successfully.
Sep 9 00:39:45.883179 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 00:39:45.885851 systemd-logind[1516]: Session 21 logged out. Waiting for processes to exit.
Sep 9 00:39:45.887961 systemd-logind[1516]: Removed session 21.
Sep 9 00:39:45.906574 containerd[1580]: time="2025-09-09T00:39:45.906394218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14c60bf71358d7ffa85a2e6398ef1bcbff51098a6f9c27200a8888674c14eb91\" id:\"25243522527b396122ec3500f755fdb53e03e965881ebab425c8a561e34ad7a2\" pid:5710 exited_at:{seconds:1757378385 nanos:906174655}"
Sep 9 00:39:49.472387 kubelet[2733]: I0909 00:39:49.472334 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 00:39:49.924667 kubelet[2733]: E0909 00:39:49.924614 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:39:50.893329 systemd[1]: Started sshd@21-10.0.0.118:22-10.0.0.1:33878.service - OpenSSH per-connection server daemon (10.0.0.1:33878).
Sep 9 00:39:50.966751 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 33878 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc
Sep 9 00:39:50.968493 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:39:50.972919 systemd-logind[1516]: New session 22 of user core.
Sep 9 00:39:50.980901 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 00:39:51.103632 sshd[5730]: Connection closed by 10.0.0.1 port 33878
Sep 9 00:39:51.103976 sshd-session[5727]: pam_unix(sshd:session): session closed for user core
Sep 9 00:39:51.108643 systemd[1]: sshd@21-10.0.0.118:22-10.0.0.1:33878.service: Deactivated successfully.
Sep 9 00:39:51.110877 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 00:39:51.111881 systemd-logind[1516]: Session 22 logged out. Waiting for processes to exit.
Sep 9 00:39:51.113555 systemd-logind[1516]: Removed session 22.
Sep 9 00:39:56.120453 systemd[1]: Started sshd@22-10.0.0.118:22-10.0.0.1:33884.service - OpenSSH per-connection server daemon (10.0.0.1:33884).
Sep 9 00:39:56.180232 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 33884 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc
Sep 9 00:39:56.182006 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:39:56.186672 systemd-logind[1516]: New session 23 of user core.
Sep 9 00:39:56.199917 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 00:39:56.322011 sshd[5756]: Connection closed by 10.0.0.1 port 33884
Sep 9 00:39:56.322459 sshd-session[5753]: pam_unix(sshd:session): session closed for user core
Sep 9 00:39:56.325836 systemd[1]: sshd@22-10.0.0.118:22-10.0.0.1:33884.service: Deactivated successfully.
Sep 9 00:39:56.327918 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 00:39:56.329520 systemd-logind[1516]: Session 23 logged out. Waiting for processes to exit.
Sep 9 00:39:56.330629 systemd-logind[1516]: Removed session 23.
Sep 9 00:40:00.228494 containerd[1580]: time="2025-09-09T00:40:00.228423495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14c60bf71358d7ffa85a2e6398ef1bcbff51098a6f9c27200a8888674c14eb91\" id:\"6229f5b8e73cf9cb7d7e88ca201fbd3dac12db6d351a607bf5274c02bacb3c18\" pid:5780 exited_at:{seconds:1757378400 nanos:228081824}"
Sep 9 00:40:01.339423 systemd[1]: Started sshd@23-10.0.0.118:22-10.0.0.1:34682.service - OpenSSH per-connection server daemon (10.0.0.1:34682).
Sep 9 00:40:01.408777 sshd[5794]: Accepted publickey for core from 10.0.0.1 port 34682 ssh2: RSA SHA256:r4RYwwi8TxJo8A9HOrX22Pz91MmSKBBpciSWwVO8Lcc
Sep 9 00:40:01.410705 sshd-session[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:40:01.415561 systemd-logind[1516]: New session 24 of user core.
Sep 9 00:40:01.423905 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 00:40:01.613201 sshd[5797]: Connection closed by 10.0.0.1 port 34682
Sep 9 00:40:01.613648 sshd-session[5794]: pam_unix(sshd:session): session closed for user core
Sep 9 00:40:01.618340 systemd[1]: sshd@23-10.0.0.118:22-10.0.0.1:34682.service: Deactivated successfully.
Sep 9 00:40:01.620878 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 00:40:01.621754 systemd-logind[1516]: Session 24 logged out. Waiting for processes to exit.
Sep 9 00:40:01.623211 systemd-logind[1516]: Removed session 24.