Sep 12 18:06:58.842147 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:34:39 -00 2025
Sep 12 18:06:58.842178 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 18:06:58.842188 kernel: BIOS-provided physical RAM map:
Sep 12 18:06:58.842195 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 18:06:58.842202 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 18:06:58.842209 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 18:06:58.842216 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 12 18:06:58.842230 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 12 18:06:58.842240 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 18:06:58.842247 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 18:06:58.842254 kernel: NX (Execute Disable) protection: active
Sep 12 18:06:58.842261 kernel: APIC: Static calls initialized
Sep 12 18:06:58.842268 kernel: SMBIOS 2.8 present.
Sep 12 18:06:58.842275 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 12 18:06:58.842286 kernel: DMI: Memory slots populated: 1/1
Sep 12 18:06:58.842294 kernel: Hypervisor detected: KVM
Sep 12 18:06:58.842304 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 18:06:58.842313 kernel: kvm-clock: using sched offset of 4082872144 cycles
Sep 12 18:06:58.842322 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 18:06:58.842330 kernel: tsc: Detected 2494.146 MHz processor
Sep 12 18:06:58.842338 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 18:06:58.842346 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 18:06:58.842354 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 12 18:06:58.842365 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 18:06:58.842373 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 18:06:58.842381 kernel: ACPI: Early table checksum verification disabled
Sep 12 18:06:58.842388 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 12 18:06:58.842396 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 18:06:58.842404 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 18:06:58.842413 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 18:06:58.842420 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 12 18:06:58.842428 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 18:06:58.842439 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 18:06:58.842446 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 18:06:58.842454 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 18:06:58.842462 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 12 18:06:58.842470 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 12 18:06:58.842478 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 12 18:06:58.842486 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 12 18:06:58.842494 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 12 18:06:58.842508 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 12 18:06:58.842516 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 12 18:06:58.842524 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 18:06:58.842532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 12 18:06:58.842541 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Sep 12 18:06:58.842551 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Sep 12 18:06:58.842560 kernel: Zone ranges:
Sep 12 18:06:58.842568 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 18:06:58.842576 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 12 18:06:58.842584 kernel: Normal empty
Sep 12 18:06:58.842593 kernel: Device empty
Sep 12 18:06:58.842601 kernel: Movable zone start for each node
Sep 12 18:06:58.842609 kernel: Early memory node ranges
Sep 12 18:06:58.842617 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 18:06:58.842625 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 12 18:06:58.842635 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 12 18:06:58.842644 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 18:06:58.842652 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 18:06:58.842660 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 12 18:06:58.842668 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 18:06:58.842676 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 18:06:58.842686 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 18:06:58.842694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 18:06:58.842704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 18:06:58.842715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 18:06:58.842723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 18:06:58.842734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 18:06:58.842742 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 18:06:58.842750 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 18:06:58.842758 kernel: TSC deadline timer available
Sep 12 18:06:58.842766 kernel: CPU topo: Max. logical packages: 1
Sep 12 18:06:58.842774 kernel: CPU topo: Max. logical dies: 1
Sep 12 18:06:58.842782 kernel: CPU topo: Max. dies per package: 1
Sep 12 18:06:58.842790 kernel: CPU topo: Max. threads per core: 1
Sep 12 18:06:58.842800 kernel: CPU topo: Num. cores per package: 2
Sep 12 18:06:58.842809 kernel: CPU topo: Num. threads per package: 2
Sep 12 18:06:58.842817 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 12 18:06:58.842824 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 18:06:58.842833 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 12 18:06:58.842853 kernel: Booting paravirtualized kernel on KVM
Sep 12 18:06:58.842862 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 18:06:58.842870 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 18:06:58.842878 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 12 18:06:58.842889 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 12 18:06:58.842896 kernel: pcpu-alloc: [0] 0 1
Sep 12 18:06:58.842905 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 12 18:06:58.842914 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 18:06:58.842923 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 18:06:58.842931 kernel: random: crng init done
Sep 12 18:06:58.842939 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 18:06:58.842947 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 18:06:58.842958 kernel: Fallback order for Node 0: 0
Sep 12 18:06:58.842966 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Sep 12 18:06:58.842974 kernel: Policy zone: DMA32
Sep 12 18:06:58.842982 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 18:06:58.842990 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 18:06:58.842999 kernel: Kernel/User page tables isolation: enabled
Sep 12 18:06:58.843007 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 18:06:58.843015 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 18:06:58.843061 kernel: Dynamic Preempt: voluntary
Sep 12 18:06:58.843073 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 18:06:58.843086 kernel: rcu: RCU event tracing is enabled.
Sep 12 18:06:58.843094 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 18:06:58.843102 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 18:06:58.843111 kernel: Rude variant of Tasks RCU enabled.
Sep 12 18:06:58.843119 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 18:06:58.843127 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 18:06:58.843135 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 18:06:58.843143 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 18:06:58.843156 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 18:06:58.843165 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 18:06:58.843173 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 18:06:58.843181 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 18:06:58.843189 kernel: Console: colour VGA+ 80x25
Sep 12 18:06:58.843197 kernel: printk: legacy console [tty0] enabled
Sep 12 18:06:58.843205 kernel: printk: legacy console [ttyS0] enabled
Sep 12 18:06:58.843213 kernel: ACPI: Core revision 20240827
Sep 12 18:06:58.843221 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 18:06:58.843240 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 18:06:58.843249 kernel: x2apic enabled
Sep 12 18:06:58.843257 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 18:06:58.843268 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 18:06:58.843280 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns
Sep 12 18:06:58.843289 kernel: Calibrating delay loop (skipped) preset value.. 4988.29 BogoMIPS (lpj=2494146)
Sep 12 18:06:58.843297 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 12 18:06:58.843306 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 12 18:06:58.843315 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 18:06:58.843326 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 18:06:58.843334 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 18:06:58.843343 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 12 18:06:58.843352 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 18:06:58.843360 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 18:06:58.843369 kernel: MDS: Mitigation: Clear CPU buffers
Sep 12 18:06:58.843377 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 18:06:58.843388 kernel: active return thunk: its_return_thunk
Sep 12 18:06:58.843397 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 18:06:58.843405 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 18:06:58.843414 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 18:06:58.843423 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 18:06:58.843431 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 18:06:58.843440 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 12 18:06:58.843449 kernel: Freeing SMP alternatives memory: 32K
Sep 12 18:06:58.843457 kernel: pid_max: default: 32768 minimum: 301
Sep 12 18:06:58.843468 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 18:06:58.843477 kernel: landlock: Up and running.
Sep 12 18:06:58.843485 kernel: SELinux: Initializing.
Sep 12 18:06:58.843494 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 18:06:58.843503 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 18:06:58.843511 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 12 18:06:58.843523 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 12 18:06:58.843538 kernel: signal: max sigframe size: 1776
Sep 12 18:06:58.843550 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 18:06:58.843566 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 18:06:58.843578 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 18:06:58.843590 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 18:06:58.843604 kernel: smp: Bringing up secondary CPUs ...
Sep 12 18:06:58.843621 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 18:06:58.843633 kernel: .... node #0, CPUs: #1
Sep 12 18:06:58.843645 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 18:06:58.843657 kernel: smpboot: Total of 2 processors activated (9976.58 BogoMIPS)
Sep 12 18:06:58.843670 kernel: Memory: 1966912K/2096612K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54040K init, 2924K bss, 125144K reserved, 0K cma-reserved)
Sep 12 18:06:58.843686 kernel: devtmpfs: initialized
Sep 12 18:06:58.843699 kernel: x86/mm: Memory block size: 128MB
Sep 12 18:06:58.843712 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 18:06:58.843721 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 18:06:58.843734 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 18:06:58.843747 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 18:06:58.843762 kernel: audit: initializing netlink subsys (disabled)
Sep 12 18:06:58.843771 kernel: audit: type=2000 audit(1757700415.848:1): state=initialized audit_enabled=0 res=1
Sep 12 18:06:58.843780 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 18:06:58.843791 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 18:06:58.843800 kernel: cpuidle: using governor menu
Sep 12 18:06:58.843809 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 18:06:58.843818 kernel: dca service started, version 1.12.1
Sep 12 18:06:58.843826 kernel: PCI: Using configuration type 1 for base access
Sep 12 18:06:58.843835 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 18:06:58.843844 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 18:06:58.843853 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 18:06:58.843861 kernel: ACPI: Added _OSI(Module Device)
Sep 12 18:06:58.843873 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 18:06:58.843881 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 18:06:58.843890 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 18:06:58.843899 kernel: ACPI: Interpreter enabled
Sep 12 18:06:58.843908 kernel: ACPI: PM: (supports S0 S5)
Sep 12 18:06:58.843916 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 18:06:58.843925 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 18:06:58.843934 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 18:06:58.843943 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 18:06:58.843954 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 18:06:58.845200 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 18:06:58.845366 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 18:06:58.845503 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 18:06:58.845518 kernel: acpiphp: Slot [3] registered
Sep 12 18:06:58.845527 kernel: acpiphp: Slot [4] registered
Sep 12 18:06:58.845536 kernel: acpiphp: Slot [5] registered
Sep 12 18:06:58.845551 kernel: acpiphp: Slot [6] registered
Sep 12 18:06:58.845560 kernel: acpiphp: Slot [7] registered
Sep 12 18:06:58.845568 kernel: acpiphp: Slot [8] registered
Sep 12 18:06:58.845577 kernel: acpiphp: Slot [9] registered
Sep 12 18:06:58.845586 kernel: acpiphp: Slot [10] registered
Sep 12 18:06:58.845595 kernel: acpiphp: Slot [11] registered
Sep 12 18:06:58.845604 kernel: acpiphp: Slot [12] registered
Sep 12 18:06:58.845612 kernel: acpiphp: Slot [13] registered
Sep 12 18:06:58.845621 kernel: acpiphp: Slot [14] registered
Sep 12 18:06:58.845630 kernel: acpiphp: Slot [15] registered
Sep 12 18:06:58.845641 kernel: acpiphp: Slot [16] registered
Sep 12 18:06:58.845650 kernel: acpiphp: Slot [17] registered
Sep 12 18:06:58.845658 kernel: acpiphp: Slot [18] registered
Sep 12 18:06:58.845667 kernel: acpiphp: Slot [19] registered
Sep 12 18:06:58.845675 kernel: acpiphp: Slot [20] registered
Sep 12 18:06:58.845684 kernel: acpiphp: Slot [21] registered
Sep 12 18:06:58.845692 kernel: acpiphp: Slot [22] registered
Sep 12 18:06:58.845701 kernel: acpiphp: Slot [23] registered
Sep 12 18:06:58.845710 kernel: acpiphp: Slot [24] registered
Sep 12 18:06:58.845721 kernel: acpiphp: Slot [25] registered
Sep 12 18:06:58.845729 kernel: acpiphp: Slot [26] registered
Sep 12 18:06:58.845738 kernel: acpiphp: Slot [27] registered
Sep 12 18:06:58.845746 kernel: acpiphp: Slot [28] registered
Sep 12 18:06:58.845755 kernel: acpiphp: Slot [29] registered
Sep 12 18:06:58.845763 kernel: acpiphp: Slot [30] registered
Sep 12 18:06:58.845772 kernel: acpiphp: Slot [31] registered
Sep 12 18:06:58.845780 kernel: PCI host bridge to bus 0000:00
Sep 12 18:06:58.845893 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 18:06:58.845981 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 18:06:58.849102 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 18:06:58.849216 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 18:06:58.849302 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 12 18:06:58.849384 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 18:06:58.849513 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 12 18:06:58.849624 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 12 18:06:58.849734 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Sep 12 18:06:58.849859 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Sep 12 18:06:58.849955 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Sep 12 18:06:58.852130 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Sep 12 18:06:58.852256 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Sep 12 18:06:58.852350 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Sep 12 18:06:58.852471 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Sep 12 18:06:58.852564 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Sep 12 18:06:58.852664 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 12 18:06:58.852850 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 12 18:06:58.852942 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 12 18:06:58.853052 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Sep 12 18:06:58.853152 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Sep 12 18:06:58.853270 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 12 18:06:58.853380 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Sep 12 18:06:58.853508 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Sep 12 18:06:58.853612 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 18:06:58.853719 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 18:06:58.853811 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Sep 12 18:06:58.853908 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Sep 12 18:06:58.853997 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 12 18:06:58.859684 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 18:06:58.859814 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Sep 12 18:06:58.859910 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Sep 12 18:06:58.860046 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 12 18:06:58.860174 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Sep 12 18:06:58.860290 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Sep 12 18:06:58.860383 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Sep 12 18:06:58.860473 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 12 18:06:58.860570 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 18:06:58.860661 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Sep 12 18:06:58.860777 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Sep 12 18:06:58.860868 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 12 18:06:58.860975 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 18:06:58.861086 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Sep 12 18:06:58.861176 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Sep 12 18:06:58.861266 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 12 18:06:58.861365 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 18:06:58.861456 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Sep 12 18:06:58.861549 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 12 18:06:58.861560 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 18:06:58.861570 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 18:06:58.861579 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 18:06:58.861588 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 18:06:58.861597 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 18:06:58.861606 kernel: iommu: Default domain type: Translated
Sep 12 18:06:58.861615 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 18:06:58.861623 kernel: PCI: Using ACPI for IRQ routing
Sep 12 18:06:58.861636 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 18:06:58.861645 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 18:06:58.861654 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 12 18:06:58.861745 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 12 18:06:58.861869 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 12 18:06:58.861999 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 18:06:58.862017 kernel: vgaarb: loaded
Sep 12 18:06:58.862058 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 18:06:58.862077 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 18:06:58.862090 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 18:06:58.862103 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 18:06:58.862116 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 18:06:58.862131 kernel: pnp: PnP ACPI init
Sep 12 18:06:58.862144 kernel: pnp: PnP ACPI: found 4 devices
Sep 12 18:06:58.862157 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 18:06:58.862168 kernel: NET: Registered PF_INET protocol family
Sep 12 18:06:58.862177 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 18:06:58.862190 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 18:06:58.862204 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 18:06:58.862219 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 18:06:58.862235 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 18:06:58.862244 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 18:06:58.862253 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 18:06:58.862261 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 18:06:58.862270 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 18:06:58.862279 kernel: NET: Registered PF_XDP protocol family
Sep 12 18:06:58.862394 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 18:06:58.862479 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 18:06:58.862566 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 18:06:58.862648 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 18:06:58.862730 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 12 18:06:58.862828 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 12 18:06:58.862923 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 18:06:58.862937 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 12 18:06:58.864298 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 25918 usecs
Sep 12 18:06:58.864323 kernel: PCI: CLS 0 bytes, default 64
Sep 12 18:06:58.864333 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 18:06:58.864343 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns
Sep 12 18:06:58.864352 kernel: Initialise system trusted keyrings
Sep 12 18:06:58.864361 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 18:06:58.864370 kernel: Key type asymmetric registered
Sep 12 18:06:58.864379 kernel: Asymmetric key parser 'x509' registered
Sep 12 18:06:58.864394 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 18:06:58.864403 kernel: io scheduler mq-deadline registered
Sep 12 18:06:58.864412 kernel: io scheduler kyber registered
Sep 12 18:06:58.864420 kernel: io scheduler bfq registered
Sep 12 18:06:58.864429 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 18:06:58.864439 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 12 18:06:58.864448 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 12 18:06:58.864457 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 12 18:06:58.864466 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 18:06:58.864475 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 18:06:58.864486 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 18:06:58.864495 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 18:06:58.864504 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 18:06:58.864641 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 12 18:06:58.864655 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 18:06:58.864826 kernel: rtc_cmos 00:03: registered as rtc0
Sep 12 18:06:58.864919 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T18:06:58 UTC (1757700418)
Sep 12 18:06:58.867808 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 12 18:06:58.867838 kernel: intel_pstate: CPU model not supported
Sep 12 18:06:58.867849 kernel: NET: Registered PF_INET6 protocol family
Sep 12 18:06:58.867858 kernel: Segment Routing with IPv6
Sep 12 18:06:58.867867 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 18:06:58.867876 kernel: NET: Registered PF_PACKET protocol family
Sep 12 18:06:58.867885 kernel: Key type dns_resolver registered
Sep 12 18:06:58.867894 kernel: IPI shorthand broadcast: enabled
Sep 12 18:06:58.867903 kernel: sched_clock: Marking stable (3367002949, 83122024)->(3464690376, -14565403)
Sep 12 18:06:58.867919 kernel: registered taskstats version 1
Sep 12 18:06:58.867928 kernel: Loading compiled-in X.509 certificates
Sep 12 18:06:58.867937 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: f1ae8d6e9bfae84d90f4136cf098b0465b2a5bd7'
Sep 12 18:06:58.867945 kernel: Demotion targets for Node 0: null
Sep 12 18:06:58.867954 kernel: Key type .fscrypt registered
Sep 12 18:06:58.867963 kernel: Key type fscrypt-provisioning registered
Sep 12 18:06:58.867998 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 18:06:58.868013 kernel: ima: Allocated hash algorithm: sha1
Sep 12 18:06:58.868022 kernel: ima: No architecture policies found
Sep 12 18:06:58.868052 kernel: clk: Disabling unused clocks
Sep 12 18:06:58.868062 kernel: Warning: unable to open an initial console.
Sep 12 18:06:58.868071 kernel: Freeing unused kernel image (initmem) memory: 54040K
Sep 12 18:06:58.868080 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 18:06:58.868090 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 12 18:06:58.868099 kernel: Run /init as init process
Sep 12 18:06:58.868108 kernel: with arguments:
Sep 12 18:06:58.868117 kernel: /init
Sep 12 18:06:58.868126 kernel: with environment:
Sep 12 18:06:58.868137 kernel: HOME=/
Sep 12 18:06:58.868146 kernel: TERM=linux
Sep 12 18:06:58.868155 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 18:06:58.868166 systemd[1]: Successfully made /usr/ read-only.
Sep 12 18:06:58.868179 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 18:06:58.868189 systemd[1]: Detected virtualization kvm.
Sep 12 18:06:58.868198 systemd[1]: Detected architecture x86-64.
Sep 12 18:06:58.868210 systemd[1]: Running in initrd.
Sep 12 18:06:58.868220 systemd[1]: No hostname configured, using default hostname.
Sep 12 18:06:58.868230 systemd[1]: Hostname set to .
Sep 12 18:06:58.868239 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 18:06:58.868249 systemd[1]: Queued start job for default target initrd.target.
Sep 12 18:06:58.868258 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 18:06:58.868268 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 18:06:58.868280 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 18:06:58.868290 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 18:06:58.868302 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 18:06:58.868315 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 18:06:58.868326 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 18:06:58.868338 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 18:06:58.868348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 18:06:58.868358 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 18:06:58.868367 systemd[1]: Reached target paths.target - Path Units.
Sep 12 18:06:58.868377 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 18:06:58.868386 systemd[1]: Reached target swap.target - Swaps.
Sep 12 18:06:58.868396 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 18:06:58.868406 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 18:06:58.868415 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 18:06:58.868428 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 18:06:58.868437 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 18:06:58.868447 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 18:06:58.868457 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 18:06:58.868467 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 18:06:58.868476 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 18:06:58.868486 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 18:06:58.868495 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 18:06:58.868507 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 18:06:58.868517 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 18:06:58.868528 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 18:06:58.868537 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 18:06:58.868547 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 18:06:58.868557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 18:06:58.868566 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 18:06:58.868579 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 18:06:58.868589 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 18:06:58.868633 systemd-journald[210]: Collecting audit messages is disabled.
Sep 12 18:06:58.868660 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 18:06:58.868672 systemd-journald[210]: Journal started
Sep 12 18:06:58.868693 systemd-journald[210]: Runtime Journal (/run/log/journal/f6c8e331bfa84f80bab040a6030c03b1) is 4.9M, max 39.5M, 34.6M free.
Sep 12 18:06:58.869052 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 18:06:58.869629 systemd-modules-load[212]: Inserted module 'overlay'
Sep 12 18:06:58.897050 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 18:06:58.899325 systemd-modules-load[212]: Inserted module 'br_netfilter'
Sep 12 18:06:58.900051 kernel: Bridge firewalling registered
Sep 12 18:06:58.901227 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 18:06:58.902873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 18:06:58.906207 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 18:06:58.909176 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 18:06:58.911828 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 18:06:58.913408 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 18:06:58.920447 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 18:06:58.930177 systemd-tmpfiles[230]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 18:06:58.938325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 18:06:58.939405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 18:06:58.942419 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 18:06:58.945412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 18:06:58.949316 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 18:06:58.952284 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 18:06:58.977046 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 18:06:58.991527 systemd-resolved[248]: Positive Trust Anchors:
Sep 12 18:06:58.992157 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 18:06:58.992655 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 18:06:58.997582 systemd-resolved[248]: Defaulting to hostname 'linux'.
Sep 12 18:06:58.999168 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 18:06:58.999589 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 18:06:59.076055 kernel: SCSI subsystem initialized
Sep 12 18:06:59.085168 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 18:06:59.096064 kernel: iscsi: registered transport (tcp)
Sep 12 18:06:59.120066 kernel: iscsi: registered transport (qla4xxx)
Sep 12 18:06:59.120144 kernel: QLogic iSCSI HBA Driver
Sep 12 18:06:59.140039 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 18:06:59.155303 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 18:06:59.157387 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 18:06:59.210791 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 18:06:59.213422 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 18:06:59.265077 kernel: raid6: avx2x4 gen() 16869 MB/s
Sep 12 18:06:59.282071 kernel: raid6: avx2x2 gen() 17156 MB/s
Sep 12 18:06:59.299126 kernel: raid6: avx2x1 gen() 11143 MB/s
Sep 12 18:06:59.299217 kernel: raid6: using algorithm avx2x2 gen() 17156 MB/s
Sep 12 18:06:59.317225 kernel: raid6: .... xor() 13581 MB/s, rmw enabled
Sep 12 18:06:59.317311 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 18:06:59.340104 kernel: xor: automatically using best checksumming function avx
Sep 12 18:06:59.525083 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 18:06:59.533378 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 18:06:59.535639 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 18:06:59.566404 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Sep 12 18:06:59.575472 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 18:06:59.578249 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 18:06:59.603257 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Sep 12 18:06:59.635237 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 18:06:59.637308 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 18:06:59.700090 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 18:06:59.703692 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 18:06:59.786052 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Sep 12 18:06:59.796699 kernel: scsi host0: Virtio SCSI HBA
Sep 12 18:06:59.816136 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 12 18:06:59.832067 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 12 18:06:59.850061 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 18:06:59.868015 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 18:06:59.868118 kernel: ACPI: bus type USB registered
Sep 12 18:06:59.868153 kernel: GPT:9289727 != 125829119
Sep 12 18:06:59.868171 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 18:06:59.868189 kernel: usbcore: registered new interface driver usbfs
Sep 12 18:06:59.868209 kernel: GPT:9289727 != 125829119
Sep 12 18:06:59.868226 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 18:06:59.868244 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 18:06:59.868263 kernel: usbcore: registered new interface driver hub
Sep 12 18:06:59.872056 kernel: usbcore: registered new device driver usb
Sep 12 18:06:59.876271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 18:06:59.877085 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 18:06:59.878556 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 18:06:59.881414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 18:06:59.883794 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 18:06:59.888053 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 12 18:06:59.893312 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Sep 12 18:06:59.896047 kernel: libata version 3.00 loaded.
Sep 12 18:06:59.900158 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 12 18:06:59.902203 kernel: AES CTR mode by8 optimization enabled
Sep 12 18:06:59.902259 kernel: scsi host1: ata_piix
Sep 12 18:06:59.908080 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 12 18:06:59.908141 kernel: scsi host2: ata_piix
Sep 12 18:06:59.909816 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Sep 12 18:06:59.909881 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Sep 12 18:06:59.986832 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 18:07:00.010170 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 18:07:00.011021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 18:07:00.024949 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 18:07:00.032647 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 18:07:00.033195 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 18:07:00.034859 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 18:07:00.055108 disk-uuid[604]: Primary Header is updated.
Sep 12 18:07:00.055108 disk-uuid[604]: Secondary Entries is updated.
Sep 12 18:07:00.055108 disk-uuid[604]: Secondary Header is updated.
Sep 12 18:07:00.061084 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 18:07:00.095074 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 12 18:07:00.099101 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 12 18:07:00.099331 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 12 18:07:00.099451 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 12 18:07:00.102611 kernel: hub 1-0:1.0: USB hub found
Sep 12 18:07:00.102919 kernel: hub 1-0:1.0: 2 ports detected
Sep 12 18:07:00.194441 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 18:07:00.205796 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 18:07:00.206716 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 18:07:00.207566 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 18:07:00.209310 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 18:07:00.235609 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 18:07:01.079762 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 18:07:01.081704 disk-uuid[605]: The operation has completed successfully.
Sep 12 18:07:01.140761 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 18:07:01.140969 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 18:07:01.201856 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 18:07:01.221275 sh[638]: Success
Sep 12 18:07:01.247281 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 18:07:01.247361 kernel: device-mapper: uevent: version 1.0.3
Sep 12 18:07:01.248595 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 18:07:01.263064 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 12 18:07:01.308548 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 18:07:01.313171 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 18:07:01.331661 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 18:07:01.344072 kernel: BTRFS: device fsid 74707491-1b86-4926-8bdb-c533ce2a0c32 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (650)
Sep 12 18:07:01.346229 kernel: BTRFS info (device dm-0): first mount of filesystem 74707491-1b86-4926-8bdb-c533ce2a0c32
Sep 12 18:07:01.346309 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 18:07:01.354067 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 18:07:01.354161 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 18:07:01.356172 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 18:07:01.357543 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 18:07:01.358223 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 18:07:01.359434 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 18:07:01.361918 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 18:07:01.396074 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (687)
Sep 12 18:07:01.398437 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 18:07:01.398509 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 18:07:01.406193 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 18:07:01.406262 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 18:07:01.413075 kernel: BTRFS info (device vda6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 18:07:01.414402 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 18:07:01.418199 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 18:07:01.498126 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 18:07:01.501163 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 18:07:01.544973 systemd-networkd[820]: lo: Link UP
Sep 12 18:07:01.545821 systemd-networkd[820]: lo: Gained carrier
Sep 12 18:07:01.549179 systemd-networkd[820]: Enumeration completed
Sep 12 18:07:01.549307 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 18:07:01.549995 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 12 18:07:01.549999 systemd-networkd[820]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 12 18:07:01.550171 systemd[1]: Reached target network.target - Network.
Sep 12 18:07:01.553404 systemd-networkd[820]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 18:07:01.553409 systemd-networkd[820]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 18:07:01.553803 systemd-networkd[820]: eth0: Link UP
Sep 12 18:07:01.554343 systemd-networkd[820]: eth1: Link UP
Sep 12 18:07:01.554879 systemd-networkd[820]: eth0: Gained carrier
Sep 12 18:07:01.554894 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 12 18:07:01.559601 systemd-networkd[820]: eth1: Gained carrier
Sep 12 18:07:01.559991 systemd-networkd[820]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 18:07:01.576347 systemd-networkd[820]: eth0: DHCPv4 address 137.184.114.151/20, gateway 137.184.112.1 acquired from 169.254.169.253
Sep 12 18:07:01.597152 systemd-networkd[820]: eth1: DHCPv4 address 10.124.0.30/20 acquired from 169.254.169.253
Sep 12 18:07:01.620746 ignition[734]: Ignition 2.21.0
Sep 12 18:07:01.620769 ignition[734]: Stage: fetch-offline
Sep 12 18:07:01.620850 ignition[734]: no configs at "/usr/lib/ignition/base.d"
Sep 12 18:07:01.620861 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 18:07:01.620990 ignition[734]: parsed url from cmdline: ""
Sep 12 18:07:01.620997 ignition[734]: no config URL provided
Sep 12 18:07:01.621005 ignition[734]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 18:07:01.621014 ignition[734]: no config at "/usr/lib/ignition/user.ign"
Sep 12 18:07:01.621020 ignition[734]: failed to fetch config: resource requires networking
Sep 12 18:07:01.621535 ignition[734]: Ignition finished successfully
Sep 12 18:07:01.625896 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 18:07:01.627820 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 18:07:01.660840 ignition[829]: Ignition 2.21.0
Sep 12 18:07:01.660855 ignition[829]: Stage: fetch
Sep 12 18:07:01.661087 ignition[829]: no configs at "/usr/lib/ignition/base.d"
Sep 12 18:07:01.661099 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 18:07:01.661398 ignition[829]: parsed url from cmdline: ""
Sep 12 18:07:01.661405 ignition[829]: no config URL provided
Sep 12 18:07:01.661411 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 18:07:01.661429 ignition[829]: no config at "/usr/lib/ignition/user.ign"
Sep 12 18:07:01.661495 ignition[829]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 12 18:07:01.688509 ignition[829]: GET result: OK
Sep 12 18:07:01.688764 ignition[829]: parsing config with SHA512: 1ac6e6ae1fe18390bceb20773aac9e6fee95045139a17b11aa313a48433c85280acb39844ba4df3ed05b28b8800e69f762d9db93adcd28d17ab3ae7309480893
Sep 12 18:07:01.693907 unknown[829]: fetched base config from "system"
Sep 12 18:07:01.693921 unknown[829]: fetched base config from "system"
Sep 12 18:07:01.693929 unknown[829]: fetched user config from "digitalocean"
Sep 12 18:07:01.694437 ignition[829]: fetch: fetch complete
Sep 12 18:07:01.694443 ignition[829]: fetch: fetch passed
Sep 12 18:07:01.694497 ignition[829]: Ignition finished successfully
Sep 12 18:07:01.697271 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 18:07:01.699184 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 18:07:01.732622 ignition[835]: Ignition 2.21.0
Sep 12 18:07:01.732636 ignition[835]: Stage: kargs
Sep 12 18:07:01.732890 ignition[835]: no configs at "/usr/lib/ignition/base.d"
Sep 12 18:07:01.732904 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 18:07:01.734674 ignition[835]: kargs: kargs passed
Sep 12 18:07:01.735222 ignition[835]: Ignition finished successfully
Sep 12 18:07:01.737914 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 18:07:01.739875 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 18:07:01.776075 ignition[842]: Ignition 2.21.0
Sep 12 18:07:01.776089 ignition[842]: Stage: disks
Sep 12 18:07:01.776235 ignition[842]: no configs at "/usr/lib/ignition/base.d"
Sep 12 18:07:01.776244 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 18:07:01.777976 ignition[842]: disks: disks passed
Sep 12 18:07:01.778087 ignition[842]: Ignition finished successfully
Sep 12 18:07:01.780420 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 18:07:01.781511 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 18:07:01.781916 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 18:07:01.782634 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 18:07:01.783374 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 18:07:01.783934 systemd[1]: Reached target basic.target - Basic System.
Sep 12 18:07:01.785622 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 18:07:01.815241 systemd-fsck[851]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 18:07:01.818780 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 18:07:01.820992 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 18:07:01.945060 kernel: EXT4-fs (vda9): mounted filesystem 26739aba-b0be-4ce3-bfbd-ca4dbcbe2426 r/w with ordered data mode. Quota mode: none.
Sep 12 18:07:01.945863 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 18:07:01.946742 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 18:07:01.948562 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 18:07:01.950431 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 18:07:01.954187 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Sep 12 18:07:01.960272 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 18:07:01.961909 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 18:07:01.962792 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 18:07:01.968089 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (859)
Sep 12 18:07:01.968140 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 18:07:01.969163 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 18:07:01.978610 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 18:07:01.978696 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 18:07:01.984953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 18:07:01.985629 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 18:07:01.991213 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 18:07:02.066531 coreos-metadata[861]: Sep 12 18:07:02.066 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 18:07:02.071480 initrd-setup-root[889]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 18:07:02.077400 coreos-metadata[862]: Sep 12 18:07:02.077 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 18:07:02.078912 initrd-setup-root[896]: cut: /sysroot/etc/group: No such file or directory
Sep 12 18:07:02.083054 coreos-metadata[861]: Sep 12 18:07:02.082 INFO Fetch successful
Sep 12 18:07:02.087750 initrd-setup-root[903]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 18:07:02.089637 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Sep 12 18:07:02.090000 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Sep 12 18:07:02.091862 coreos-metadata[862]: Sep 12 18:07:02.090 INFO Fetch successful
Sep 12 18:07:02.094994 initrd-setup-root[911]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 18:07:02.098190 coreos-metadata[862]: Sep 12 18:07:02.098 INFO wrote hostname ci-4426.1.0-6-3761596165 to /sysroot/etc/hostname
Sep 12 18:07:02.102115 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 18:07:02.209215 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 18:07:02.211694 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 18:07:02.213120 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 18:07:02.231422 kernel: BTRFS info (device vda6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 18:07:02.250628 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 18:07:02.266811 ignition[981]: INFO : Ignition 2.21.0
Sep 12 18:07:02.266811 ignition[981]: INFO : Stage: mount
Sep 12 18:07:02.266811 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 18:07:02.266811 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 18:07:02.270245 ignition[981]: INFO : mount: mount passed
Sep 12 18:07:02.270245 ignition[981]: INFO : Ignition finished successfully
Sep 12 18:07:02.272586 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 18:07:02.275001 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 18:07:02.344013 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 18:07:02.346380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 18:07:02.370529 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (991)
Sep 12 18:07:02.370606 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 18:07:02.372074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 18:07:02.377484 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 18:07:02.377585 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 18:07:02.379695 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 18:07:02.411055 ignition[1007]: INFO : Ignition 2.21.0
Sep 12 18:07:02.411055 ignition[1007]: INFO : Stage: files
Sep 12 18:07:02.412019 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 18:07:02.412019 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 18:07:02.415286 ignition[1007]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 18:07:02.417316 ignition[1007]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 18:07:02.417316 ignition[1007]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 18:07:02.420506 ignition[1007]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 18:07:02.421301 ignition[1007]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 18:07:02.422341 ignition[1007]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 18:07:02.421687 unknown[1007]: wrote ssh authorized keys file for user: core
Sep 12 18:07:02.423533 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 18:07:02.423533 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 12 18:07:02.460626 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 18:07:02.758229 systemd-networkd[820]: eth0: Gained IPv6LL
Sep 12 18:07:02.997756 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 18:07:02.997756 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 18:07:02.997756 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 18:07:02.997756 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 18:07:02.997756 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 18:07:02.997756 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 18:07:02.997756 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 18:07:02.997756 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 18:07:02.997756 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 18:07:03.007511 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 18:07:03.007511 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 18:07:03.007511 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 18:07:03.007511 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 18:07:03.007511 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 18:07:03.007511 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 12 18:07:03.398251 systemd-networkd[820]: eth1: Gained IPv6LL
Sep 12 18:07:03.442717 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 18:07:04.259783 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 18:07:04.259783 ignition[1007]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 18:07:04.261472 ignition[1007]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 18:07:04.262660 ignition[1007]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 18:07:04.262660 ignition[1007]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 18:07:04.265147 ignition[1007]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 18:07:04.265147 ignition[1007]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 18:07:04.265147 ignition[1007]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 18:07:04.265147 ignition[1007]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 18:07:04.265147 ignition[1007]: INFO : files: files passed
Sep 12 18:07:04.265147 ignition[1007]: INFO : Ignition finished successfully
Sep 12 18:07:04.266342 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 18:07:04.270293 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 18:07:04.272205 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 18:07:04.281985 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 18:07:04.282547 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 18:07:04.290058 initrd-setup-root-after-ignition[1038]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 18:07:04.292004 initrd-setup-root-after-ignition[1038]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 18:07:04.292846 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 18:07:04.294538 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 18:07:04.295565 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 18:07:04.297265 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 18:07:04.353593 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 18:07:04.353705 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 18:07:04.354613 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 18:07:04.355068 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 18:07:04.355864 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 18:07:04.356895 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 18:07:04.386085 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 18:07:04.388450 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 18:07:04.414468 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 18:07:04.415713 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 18:07:04.416723 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 18:07:04.417583 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 18:07:04.418172 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 18:07:04.418832 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 18:07:04.420044 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 18:07:04.420765 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 18:07:04.421222 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 18:07:04.422208 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 18:07:04.423107 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 18:07:04.423737 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 18:07:04.424517 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 18:07:04.425312 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 18:07:04.425983 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 18:07:04.426660 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 18:07:04.427172 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 18:07:04.427304 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 18:07:04.428163 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 18:07:04.428938 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 18:07:04.429553 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 18:07:04.429741 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 18:07:04.430322 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 18:07:04.430479 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 18:07:04.431427 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 18:07:04.431585 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 18:07:04.432402 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 18:07:04.432545 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 18:07:04.433137 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 18:07:04.433285 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 18:07:04.434648 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 18:07:04.436557 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 18:07:04.436795 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 18:07:04.439193 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 18:07:04.439558 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 18:07:04.440236 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 18:07:04.441411 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 18:07:04.442731 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 18:07:04.451838 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 18:07:04.453797 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 18:07:04.471055 ignition[1062]: INFO : Ignition 2.21.0
Sep 12 18:07:04.471055 ignition[1062]: INFO : Stage: umount
Sep 12 18:07:04.471055 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 18:07:04.471055 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 18:07:04.473873 ignition[1062]: INFO : umount: umount passed
Sep 12 18:07:04.473873 ignition[1062]: INFO : Ignition finished successfully
Sep 12 18:07:04.472357 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 18:07:04.476882 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 18:07:04.477013 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 18:07:04.486644 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 18:07:04.486702 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 18:07:04.487119 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 18:07:04.487159 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 18:07:04.500230 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 18:07:04.500300 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 18:07:04.500687 systemd[1]: Stopped target network.target - Network.
Sep 12 18:07:04.502607 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 18:07:04.502702 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 18:07:04.503194 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 18:07:04.503681 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 18:07:04.503839 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 18:07:04.504587 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 18:07:04.506404 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 18:07:04.507265 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 18:07:04.507330 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 18:07:04.508061 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 18:07:04.508111 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 18:07:04.508724 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 18:07:04.508809 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 18:07:04.509580 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 18:07:04.509642 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 18:07:04.510590 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 18:07:04.511372 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 18:07:04.512753 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 18:07:04.512886 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 18:07:04.514425 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 18:07:04.514528 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 18:07:04.518464 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 18:07:04.518683 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 18:07:04.524779 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 18:07:04.525270 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 18:07:04.525437 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 18:07:04.527875 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 18:07:04.528917 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 18:07:04.529556 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 18:07:04.529615 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 18:07:04.531563 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 18:07:04.532071 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 18:07:04.532143 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 18:07:04.532767 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 18:07:04.532829 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 18:07:04.536228 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 18:07:04.536294 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 18:07:04.537609 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 18:07:04.538099 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 18:07:04.539217 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 18:07:04.542427 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 18:07:04.543019 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 18:07:04.560757 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 18:07:04.560962 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 18:07:04.563537 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 18:07:04.564301 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 18:07:04.567004 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 18:07:04.567711 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 18:07:04.568870 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 18:07:04.568927 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 18:07:04.570886 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 18:07:04.570974 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 18:07:04.571672 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 18:07:04.571734 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 18:07:04.573326 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 18:07:04.573384 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 18:07:04.576153 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 18:07:04.576524 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 18:07:04.576578 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 18:07:04.578144 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 18:07:04.578192 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 18:07:04.580335 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 18:07:04.580381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 18:07:04.585374 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 18:07:04.586195 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 18:07:04.586260 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 18:07:04.593466 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 18:07:04.593612 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 18:07:04.594754 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 18:07:04.601215 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 18:07:04.632930 systemd[1]: Switching root.
Sep 12 18:07:04.685397 systemd-journald[210]: Journal stopped
Sep 12 18:07:05.791371 systemd-journald[210]: Received SIGTERM from PID 1 (systemd).
Sep 12 18:07:05.791459 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 18:07:05.791482 kernel: SELinux: policy capability open_perms=1
Sep 12 18:07:05.791496 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 18:07:05.791515 kernel: SELinux: policy capability always_check_network=0
Sep 12 18:07:05.791528 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 18:07:05.791541 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 18:07:05.791560 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 18:07:05.791574 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 18:07:05.791587 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 18:07:05.791600 kernel: audit: type=1403 audit(1757700424.823:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 18:07:05.791615 systemd[1]: Successfully loaded SELinux policy in 66.748ms.
Sep 12 18:07:05.791645 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.877ms.
Sep 12 18:07:05.791660 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 18:07:05.791675 systemd[1]: Detected virtualization kvm.
Sep 12 18:07:05.791692 systemd[1]: Detected architecture x86-64.
Sep 12 18:07:05.791705 systemd[1]: Detected first boot.
Sep 12 18:07:05.791720 systemd[1]: Hostname set to .
Sep 12 18:07:05.791735 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 18:07:05.791754 zram_generator::config[1108]: No configuration found.
Sep 12 18:07:05.791769 kernel: Guest personality initialized and is inactive
Sep 12 18:07:05.791782 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 18:07:05.791795 kernel: Initialized host personality
Sep 12 18:07:05.791808 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 18:07:05.791825 systemd[1]: Populated /etc with preset unit settings.
Sep 12 18:07:05.791840 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 18:07:05.791854 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 18:07:05.791869 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 18:07:05.791883 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 18:07:05.791897 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 18:07:05.791912 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 18:07:05.791933 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 18:07:05.791959 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 18:07:05.791985 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 18:07:05.792006 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 18:07:05.792036 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 18:07:05.792055 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 18:07:05.792075 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 18:07:05.792095 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 18:07:05.792116 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 18:07:05.792143 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 18:07:05.792166 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 18:07:05.792189 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 18:07:05.792206 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 18:07:05.792220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 18:07:05.792235 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 18:07:05.792249 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 18:07:05.792262 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 18:07:05.792280 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 18:07:05.792295 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 18:07:05.792310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 18:07:05.792324 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 18:07:05.792339 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 18:07:05.792353 systemd[1]: Reached target swap.target - Swaps.
Sep 12 18:07:05.792367 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 18:07:05.792381 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 18:07:05.792408 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 18:07:05.792426 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 18:07:05.792441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 18:07:05.792455 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 18:07:05.792469 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 18:07:05.792483 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 18:07:05.792498 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 18:07:05.792512 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 18:07:05.792527 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 18:07:05.792542 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 18:07:05.792559 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 18:07:05.792574 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 18:07:05.792610 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 18:07:05.792624 systemd[1]: Reached target machines.target - Containers.
Sep 12 18:07:05.792639 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 18:07:05.792653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 18:07:05.792668 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 18:07:05.792682 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 18:07:05.792700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 18:07:05.792714 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 18:07:05.792728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 18:07:05.792743 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 18:07:05.792757 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 18:07:05.792771 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 18:07:05.792785 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 18:07:05.792799 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 18:07:05.792816 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 18:07:05.792831 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 18:07:05.792846 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 18:07:05.792860 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 18:07:05.792878 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 18:07:05.792892 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 18:07:05.792909 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 18:07:05.792923 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 18:07:05.792937 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 18:07:05.792951 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 18:07:05.792966 systemd[1]: Stopped verity-setup.service.
Sep 12 18:07:05.792984 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 18:07:05.792998 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 18:07:05.793012 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 18:07:05.797196 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 18:07:05.797235 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 18:07:05.797251 kernel: fuse: init (API version 7.41)
Sep 12 18:07:05.797265 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 18:07:05.797280 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 18:07:05.797303 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 18:07:05.797317 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 18:07:05.797332 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 18:07:05.797346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 18:07:05.797360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 18:07:05.797374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 18:07:05.797388 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 18:07:05.797402 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 18:07:05.797416 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 18:07:05.797433 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 18:07:05.797449 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 18:07:05.797462 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 18:07:05.797477 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 18:07:05.797490 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 18:07:05.797552 systemd-journald[1178]: Collecting audit messages is disabled.
Sep 12 18:07:05.797581 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 18:07:05.797596 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 18:07:05.797615 systemd-journald[1178]: Journal started
Sep 12 18:07:05.797645 systemd-journald[1178]: Runtime Journal (/run/log/journal/f6c8e331bfa84f80bab040a6030c03b1) is 4.9M, max 39.5M, 34.6M free.
Sep 12 18:07:05.491947 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 18:07:05.516780 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 18:07:05.517306 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 18:07:05.800047 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 18:07:05.807293 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 18:07:05.807369 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 18:07:05.812125 kernel: loop: module loaded
Sep 12 18:07:05.816059 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 18:07:05.816137 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 18:07:05.824069 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 18:07:05.834822 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 18:07:05.834903 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 18:07:05.837395 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 18:07:05.837588 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 18:07:05.839526 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 18:07:05.840149 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 18:07:05.841458 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 18:07:05.841919 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 18:07:05.875344 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 18:07:05.876054 kernel: ACPI: bus type drm_connector registered
Sep 12 18:07:05.876224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 18:07:05.881233 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 18:07:05.892378 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 18:07:05.893110 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 18:07:05.921849 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 18:07:05.922508 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 18:07:05.929248 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 18:07:05.938060 kernel: loop0: detected capacity change from 0 to 128016
Sep 12 18:07:05.969513 systemd-journald[1178]: Time spent on flushing to /var/log/journal/f6c8e331bfa84f80bab040a6030c03b1 is 70.756ms for 1013 entries.
Sep 12 18:07:05.969513 systemd-journald[1178]: System Journal (/var/log/journal/f6c8e331bfa84f80bab040a6030c03b1) is 8M, max 195.6M, 187.6M free.
Sep 12 18:07:06.056146 systemd-journald[1178]: Received client request to flush runtime journal.
Sep 12 18:07:06.056196 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 18:07:06.056215 kernel: loop1: detected capacity change from 0 to 229808
Sep 12 18:07:05.973198 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 18:07:05.976948 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 18:07:05.995647 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 18:07:06.004495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 18:07:06.063420 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 18:07:06.070062 kernel: loop2: detected capacity change from 0 to 8
Sep 12 18:07:06.076139 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 18:07:06.090389 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 18:07:06.094190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 18:07:06.109935 kernel: loop3: detected capacity change from 0 to 111000
Sep 12 18:07:06.132446 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Sep 12 18:07:06.132467 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Sep 12 18:07:06.143547 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 18:07:06.154056 kernel: loop4: detected capacity change from 0 to 128016
Sep 12 18:07:06.165084 kernel: loop5: detected capacity change from 0 to 229808
Sep 12 18:07:06.194051 kernel: loop6: detected capacity change from 0 to 8
Sep 12 18:07:06.198060 kernel: loop7: detected capacity change from 0 to 111000
Sep 12 18:07:06.246809 (sd-merge)[1257]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 12 18:07:06.247480 (sd-merge)[1257]: Merged extensions into '/usr'.
Sep 12 18:07:06.254226 systemd[1]: Reload requested from client PID 1206 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 18:07:06.255089 systemd[1]: Reloading...
Sep 12 18:07:06.480915 zram_generator::config[1284]: No configuration found.
Sep 12 18:07:06.632108 ldconfig[1200]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 18:07:06.846425 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 18:07:06.846778 systemd[1]: Reloading finished in 591 ms.
Sep 12 18:07:06.864249 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 18:07:06.865416 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 18:07:06.878208 systemd[1]: Starting ensure-sysext.service...
Sep 12 18:07:06.880345 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 18:07:06.910265 systemd[1]: Reload requested from client PID 1326 ('systemctl') (unit ensure-sysext.service)...
Sep 12 18:07:06.911086 systemd[1]: Reloading...
Sep 12 18:07:06.952609 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 18:07:06.952675 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 18:07:06.953168 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 18:07:06.953621 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 18:07:06.957257 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 18:07:06.957783 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Sep 12 18:07:06.957882 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Sep 12 18:07:06.967524 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 18:07:06.967543 systemd-tmpfiles[1327]: Skipping /boot
Sep 12 18:07:07.013326 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 18:07:07.013346 systemd-tmpfiles[1327]: Skipping /boot
Sep 12 18:07:07.037063 zram_generator::config[1354]: No configuration found.
Sep 12 18:07:07.359541 systemd[1]: Reloading finished in 448 ms.
Sep 12 18:07:07.384228 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 18:07:07.391889 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 18:07:07.400235 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 18:07:07.402759 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 18:07:07.406385 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 18:07:07.410654 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 18:07:07.412735 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 18:07:07.418615 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 18:07:07.424889 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 18:07:07.425135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 18:07:07.426512 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 18:07:07.431402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 18:07:07.434341 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 18:07:07.434841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 18:07:07.435001 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 18:07:07.435142 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 18:07:07.437770 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 18:07:07.437963 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 18:07:07.438143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 18:07:07.438225 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 18:07:07.438308 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 18:07:07.441694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 18:07:07.441947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 18:07:07.444377 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 18:07:07.444987 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 18:07:07.445148 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 18:07:07.445288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 18:07:07.452739 systemd[1]: Finished ensure-sysext.service.
Sep 12 18:07:07.456266 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 18:07:07.461347 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 18:07:07.492071 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 18:07:07.505483 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 18:07:07.517817 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 18:07:07.520105 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 18:07:07.521579 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 18:07:07.530589 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 18:07:07.532397 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 18:07:07.532604 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 18:07:07.533415 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 18:07:07.543964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 18:07:07.545112 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 18:07:07.545492 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 18:07:07.555475 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 18:07:07.555602 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 18:07:07.555639 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 18:07:07.560220 systemd-udevd[1403]: Using default interface naming scheme 'v255'.
Sep 12 18:07:07.572260 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 18:07:07.580733 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 18:07:07.608781 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 18:07:07.610694 augenrules[1446]: No rules
Sep 12 18:07:07.613267 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 18:07:07.613921 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 18:07:07.614693 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 18:07:07.706243 systemd-resolved[1402]: Positive Trust Anchors:
Sep 12 18:07:07.706260 systemd-resolved[1402]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 18:07:07.706313 systemd-resolved[1402]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 18:07:07.723972 systemd-resolved[1402]: Using system hostname 'ci-4426.1.0-6-3761596165'.
Sep 12 18:07:07.737788 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 18:07:07.739498 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 18:07:07.797411 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 18:07:07.797919 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 18:07:07.798340 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 18:07:07.798754 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 18:07:07.800433 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 12 18:07:07.801019 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 18:07:07.803074 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 18:07:07.803107 systemd[1]: Reached target paths.target - Path Units.
Sep 12 18:07:07.803861 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 18:07:07.804361 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 18:07:07.805106 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 18:07:07.805742 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 18:07:07.807248 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 18:07:07.810810 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 18:07:07.814954 systemd-networkd[1452]: lo: Link UP
Sep 12 18:07:07.815132 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 18:07:07.815323 systemd-networkd[1452]: lo: Gained carrier
Sep 12 18:07:07.816208 systemd-networkd[1452]: Enumeration completed
Sep 12 18:07:07.816285 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 18:07:07.817280 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 18:07:07.820252 systemd-timesyncd[1410]: No network connectivity, watching for changes.
Sep 12 18:07:07.824955 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 18:07:07.826574 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 18:07:07.830049 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 18:07:07.830738 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 18:07:07.831727 systemd[1]: Reached target network.target - Network.
Sep 12 18:07:07.833104 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 18:07:07.833416 systemd[1]: Reached target basic.target - Basic System.
Sep 12 18:07:07.833792 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 18:07:07.833831 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 18:07:07.836485 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 18:07:07.840244 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 18:07:07.842515 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 18:07:07.847571 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 18:07:07.854249 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 18:07:07.860244 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 18:07:07.861117 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 18:07:07.867258 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 12 18:07:07.873215 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 18:07:07.873974 jq[1486]: false
Sep 12 18:07:07.881254 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 18:07:07.888102 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 18:07:07.892147 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 18:07:07.898288 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 18:07:07.904791 extend-filesystems[1487]: Found /dev/vda6
Sep 12 18:07:07.911135 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 18:07:07.924910 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 18:07:07.926951 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 18:07:07.928678 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 18:07:07.934324 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 18:07:07.938549 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Refreshing passwd entry cache
Sep 12 18:07:07.938554 oslogin_cache_refresh[1490]: Refreshing passwd entry cache
Sep 12 18:07:07.944498 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 18:07:07.954482 coreos-metadata[1483]: Sep 12 18:07:07.954 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 18:07:07.955124 coreos-metadata[1483]: Sep 12 18:07:07.955 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
Sep 12 18:07:07.956159 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 18:07:07.957445 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 18:07:07.959145 extend-filesystems[1487]: Found /dev/vda9
Sep 12 18:07:07.959102 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 18:07:07.960156 extend-filesystems[1487]: Checking size of /dev/vda9
Sep 12 18:07:07.981983 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 18:07:07.982919 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Failure getting users, quitting
Sep 12 18:07:07.982910 oslogin_cache_refresh[1490]: Failure getting users, quitting
Sep 12 18:07:07.983041 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 18:07:07.983041 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Refreshing group entry cache
Sep 12 18:07:07.982933 oslogin_cache_refresh[1490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 18:07:07.983006 oslogin_cache_refresh[1490]: Refreshing group entry cache
Sep 12 18:07:07.984838 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Failure getting groups, quitting
Sep 12 18:07:07.984838 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 18:07:07.984822 oslogin_cache_refresh[1490]: Failure getting groups, quitting
Sep 12 18:07:07.984835 oslogin_cache_refresh[1490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 18:07:07.991361 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 12 18:07:07.992599 extend-filesystems[1487]: Resized partition /dev/vda9
Sep 12 18:07:07.997571 extend-filesystems[1523]: resize2fs 1.47.2 (1-Jan-2025)
Sep 12 18:07:07.999774 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 12 18:07:08.001673 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 18:07:08.003292 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 18:07:08.004045 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 12 18:07:08.039513 (ntainerd)[1525]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 18:07:08.055085 jq[1503]: true
Sep 12 18:07:08.068960 update_engine[1502]: I20250912 18:07:08.060768 1502 main.cc:92] Flatcar Update Engine starting
Sep 12 18:07:08.075220 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 18:07:08.075481 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 18:07:08.089880 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 18:07:08.116640 dbus-daemon[1484]: [system] SELinux support is enabled
Sep 12 18:07:08.116867 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 18:07:08.130279 tar[1508]: linux-amd64/LICENSE
Sep 12 18:07:08.130279 tar[1508]: linux-amd64/helm
Sep 12 18:07:08.119791 systemd-networkd[1452]: eth1: Configuring with /run/systemd/network/10-a2:26:91:f7:f8:a3.network.
Sep 12 18:07:08.120652 systemd-networkd[1452]: eth1: Link UP
Sep 12 18:07:08.120926 systemd-networkd[1452]: eth1: Gained carrier
Sep 12 18:07:08.128407 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Sep 12 18:07:08.130056 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
Sep 12 18:07:08.143086 jq[1535]: true
Sep 12 18:07:08.143256 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Sep 12 18:07:08.143673 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 18:07:08.143781 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 18:07:08.143806 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 18:07:08.145306 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 18:07:08.191175 update_engine[1502]: I20250912 18:07:08.184926 1502 update_check_scheduler.cc:74] Next update check in 11m56s
Sep 12 18:07:08.188807 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 18:07:08.200644 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 12 18:07:08.207674 extend-filesystems[1523]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 18:07:08.207674 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 12 18:07:08.207674 extend-filesystems[1523]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 12 18:07:08.214955 extend-filesystems[1487]: Resized filesystem in /dev/vda9
Sep 12 18:07:08.226101 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 12 18:07:08.245589 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 18:07:08.249929 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 18:07:08.251184 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 18:07:08.251794 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Sep 12 18:07:08.265889 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 18:07:08.270131 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 18:07:08.271185 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Sep 12 18:07:08.271221 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 18:07:08.294821 bash[1563]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 18:07:08.301727 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 18:07:08.308484 systemd[1]: Starting sshkeys.service...
Sep 12 18:07:08.360875 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 18:07:08.370761 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 12 18:07:08.376369 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 12 18:07:08.381401 systemd-logind[1498]: New seat seat0.
Sep 12 18:07:08.388598 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 18:07:08.456259 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 18:07:08.475480 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 12 18:07:08.495257 kernel: ACPI: button: Power Button [PWRF]
Sep 12 18:07:08.532586 coreos-metadata[1573]: Sep 12 18:07:08.532 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 18:07:08.535660 coreos-metadata[1573]: Sep 12 18:07:08.535 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
Sep 12 18:07:08.540131 locksmithd[1546]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 18:07:08.547274 systemd-networkd[1452]: eth0: Configuring with /run/systemd/network/10-8e:ee:0b:13:71:ea.network.
Sep 12 18:07:08.553192 systemd-networkd[1452]: eth0: Link UP
Sep 12 18:07:08.553751 systemd-networkd[1452]: eth0: Gained carrier
Sep 12 18:07:08.688065 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 12 18:07:08.712767 containerd[1525]: time="2025-09-12T18:07:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 18:07:08.716761 containerd[1525]: time="2025-09-12T18:07:08.714664145Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 18:07:08.722592 sshd_keygen[1533]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 18:07:08.757590 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 18:07:08.762240 containerd[1525]: time="2025-09-12T18:07:08.762192248Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.113µs"
Sep 12 18:07:08.762370 containerd[1525]: time="2025-09-12T18:07:08.762355984Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 18:07:08.762424 containerd[1525]: time="2025-09-12T18:07:08.762413324Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 18:07:08.762711 containerd[1525]: time="2025-09-12T18:07:08.762681168Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 18:07:08.764106 containerd[1525]: time="2025-09-12T18:07:08.763352651Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 18:07:08.764106 containerd[1525]: time="2025-09-12T18:07:08.763394403Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 18:07:08.764106 containerd[1525]: time="2025-09-12T18:07:08.763472222Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 18:07:08.764106 containerd[1525]: time="2025-09-12T18:07:08.763483903Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 18:07:08.764106 containerd[1525]: time="2025-09-12T18:07:08.763731694Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 18:07:08.764106 containerd[1525]: time="2025-09-12T18:07:08.763744600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 18:07:08.764106 containerd[1525]: time="2025-09-12T18:07:08.763754770Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 18:07:08.764106 containerd[1525]: time="2025-09-12T18:07:08.763763529Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 18:07:08.764106 containerd[1525]: time="2025-09-12T18:07:08.763862173Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 18:07:08.764106 containerd[1525]: time="2025-09-12T18:07:08.764079945Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 18:07:08.765818 containerd[1525]: time="2025-09-12T18:07:08.765324146Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 18:07:08.765818 containerd[1525]: time="2025-09-12T18:07:08.765350237Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 18:07:08.765818 containerd[1525]: time="2025-09-12T18:07:08.765402792Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 18:07:08.765818 containerd[1525]: time="2025-09-12T18:07:08.765613021Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 18:07:08.765818 containerd[1525]: time="2025-09-12T18:07:08.765682285Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 18:07:08.768834 containerd[1525]: time="2025-09-12T18:07:08.768711723Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 18:07:08.768834 containerd[1525]: time="2025-09-12T18:07:08.768770651Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 18:07:08.768834 containerd[1525]: time="2025-09-12T18:07:08.768784672Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 18:07:08.768834 containerd[1525]: time="2025-09-12T18:07:08.768796838Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.768808641Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.768981350Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.768996278Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.769034601Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.769050006Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.769060237Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.769068975Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.769092134Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.769259045Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.769290601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.769307369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.769319080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.769339960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 18:07:08.769531 containerd[1525]: time="2025-09-12T18:07:08.769353585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 18:07:08.769826 containerd[1525]: time="2025-09-12T18:07:08.769364583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 18:07:08.769826 containerd[1525]: time="2025-09-12T18:07:08.769373891Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 18:07:08.769826 containerd[1525]: time="2025-09-12T18:07:08.769395917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 18:07:08.769826 containerd[1525]: time="2025-09-12T18:07:08.769409068Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 18:07:08.769826 containerd[1525]: time="2025-09-12T18:07:08.769418633Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 18:07:08.769826 containerd[1525]: time="2025-09-12T18:07:08.769480415Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 18:07:08.769826 containerd[1525]: time="2025-09-12T18:07:08.769492300Z" level=info msg="Start snapshots syncer"
Sep 12 18:07:08.770394 containerd[1525]: time="2025-09-12T18:07:08.769995131Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 18:07:08.770394 containerd[1525]: time="2025-09-12T18:07:08.770300386Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 12 18:07:08.770570 containerd[1525]: time="2025-09-12T18:07:08.770347994Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 12 18:07:08.770677 containerd[1525]: time="2025-09-12T18:07:08.770654211Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 12 18:07:08.771396 containerd[1525]: time="2025-09-12T18:07:08.770855345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 12 18:07:08.771396 containerd[1525]: time="2025-09-12T18:07:08.770882544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 12 18:07:08.771396 containerd[1525]: time="2025-09-12T18:07:08.770893075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 12 18:07:08.771396 containerd[1525]: time="2025-09-12T18:07:08.770904196Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 12 18:07:08.771396 containerd[1525]: time="2025-09-12T18:07:08.770916824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 12 18:07:08.771396 containerd[1525]: time="2025-09-12T18:07:08.770927970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 12 18:07:08.771396 containerd[1525]: time="2025-09-12T18:07:08.770951789Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 12 18:07:08.771396 containerd[1525]: time="2025-09-12T18:07:08.770980598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 12 18:07:08.771396 containerd[1525]: time="2025-09-12T18:07:08.770990732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 12 18:07:08.771396 containerd[1525]: time="2025-09-12T18:07:08.771001608Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 12 18:07:08.771681 containerd[1525]: time="2025-09-12T18:07:08.771664143Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 18:07:08.771782 containerd[1525]: time="2025-09-12T18:07:08.771767379Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 18:07:08.771837 containerd[1525]: time="2025-09-12T18:07:08.771825830Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 18:07:08.771881 containerd[1525]: time="2025-09-12T18:07:08.771871104Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 18:07:08.771920 containerd[1525]: time="2025-09-12T18:07:08.771911495Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 12 18:07:08.771960 containerd[1525]: time="2025-09-12T18:07:08.771952020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 12 18:07:08.772002 containerd[1525]: time="2025-09-12T18:07:08.771993288Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 12 18:07:08.774918 containerd[1525]: time="2025-09-12T18:07:08.773427183Z" level=info msg="runtime interface created"
Sep 12 18:07:08.774918 containerd[1525]: time="2025-09-12T18:07:08.773450367Z" level=info msg="created NRI interface"
Sep 12 18:07:08.774918 containerd[1525]: time="2025-09-12T18:07:08.773486692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 12 18:07:08.774918 containerd[1525]: time="2025-09-12T18:07:08.773514789Z" level=info msg="Connect containerd service"
Sep 12 18:07:08.774918 containerd[1525]: time="2025-09-12T18:07:08.773563326Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 18:07:08.774918
containerd[1525]: time="2025-09-12T18:07:08.774619282Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 18:07:08.837112 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 18:07:08.848418 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 18:07:08.943054 kernel: EDAC MC: Ver: 3.0.0 Sep 12 18:07:08.950269 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 18:07:08.950504 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 18:07:08.956521 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 18:07:08.958685 coreos-metadata[1483]: Sep 12 18:07:08.958 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Sep 12 18:07:08.974049 coreos-metadata[1483]: Sep 12 18:07:08.972 INFO Fetch successful Sep 12 18:07:09.084732 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 18:07:09.088798 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 18:07:09.096017 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 18:07:09.097996 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 18:07:09.104047 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 18:07:09.105759 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.108759130Z" level=info msg="Start subscribing containerd event"
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.108821086Z" level=info msg="Start recovering state"
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.108932936Z" level=info msg="Start event monitor"
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.108946956Z" level=info msg="Start cni network conf syncer for default"
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.108957483Z" level=info msg="Start streaming server"
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.108970164Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.108977660Z" level=info msg="runtime interface starting up..."
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.108984186Z" level=info msg="starting plugins..."
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.108990185Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.109076910Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.108997755Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 12 18:07:09.111116 containerd[1525]: time="2025-09-12T18:07:09.109311865Z" level=info msg="containerd successfully booted in 0.399542s"
Sep 12 18:07:09.110260 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 18:07:09.200892 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 12 18:07:09.200964 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 12 18:07:09.204834 kernel: Console: switching to colour dummy device 80x25
Sep 12 18:07:09.204900 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 12 18:07:09.204915 kernel: [drm] features: -context_init
Sep 12 18:07:09.216505 kernel: [drm] number of scanouts: 1
Sep 12 18:07:09.216597 kernel: [drm] number of cap sets: 0
Sep 12 18:07:09.222643 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Sep 12 18:07:09.220406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 18:07:09.225914 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 12 18:07:09.226009 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 18:07:09.232066 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 12 18:07:09.285593 systemd-logind[1498]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 12 18:07:09.291397 systemd-logind[1498]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 18:07:09.322496 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 18:07:09.322969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 18:07:09.328080 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 18:07:09.342784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 18:07:09.414177 systemd-networkd[1452]: eth1: Gained IPv6LL
Sep 12 18:07:09.420513 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 18:07:09.424131 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 18:07:09.430257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 18:07:09.437223 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 18:07:09.447944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 18:07:09.454916 tar[1508]: linux-amd64/README.md
Sep 12 18:07:09.480448 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 18:07:09.490349 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 18:07:09.536582 coreos-metadata[1573]: Sep 12 18:07:09.536 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
Sep 12 18:07:09.548330 coreos-metadata[1573]: Sep 12 18:07:09.548 INFO Fetch successful
Sep 12 18:07:09.556486 unknown[1573]: wrote ssh authorized keys file for user: core
Sep 12 18:07:09.578976 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 18:07:09.580475 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 12 18:07:09.584832 systemd[1]: Finished sshkeys.service.
Sep 12 18:07:09.798248 systemd-networkd[1452]: eth0: Gained IPv6LL
Sep 12 18:07:10.539782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 18:07:10.541696 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 18:07:10.543174 systemd[1]: Startup finished in 3.452s (kernel) + 6.163s (initrd) + 5.786s (userspace) = 15.402s.
Sep 12 18:07:10.555787 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 18:07:11.214234 kubelet[1673]: E0912 18:07:11.214163 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 18:07:11.217591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 18:07:11.217974 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 18:07:11.218493 systemd[1]: kubelet.service: Consumed 1.263s CPU time, 266.5M memory peak.
Sep 12 18:07:11.942187 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 18:07:11.944357 systemd[1]: Started sshd@0-137.184.114.151:22-139.178.89.65:42948.service - OpenSSH per-connection server daemon (139.178.89.65:42948).
Sep 12 18:07:12.034237 sshd[1685]: Accepted publickey for core from 139.178.89.65 port 42948 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:07:12.036454 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:07:12.044765 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 18:07:12.045809 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 18:07:12.055195 systemd-logind[1498]: New session 1 of user core.
Sep 12 18:07:12.071230 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 18:07:12.074641 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 18:07:12.093282 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 18:07:12.096894 systemd-logind[1498]: New session c1 of user core.
Sep 12 18:07:12.243824 systemd[1690]: Queued start job for default target default.target.
Sep 12 18:07:12.266280 systemd[1690]: Created slice app.slice - User Application Slice.
Sep 12 18:07:12.266317 systemd[1690]: Reached target paths.target - Paths.
Sep 12 18:07:12.266371 systemd[1690]: Reached target timers.target - Timers.
Sep 12 18:07:12.267781 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 18:07:12.286390 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 18:07:12.286564 systemd[1690]: Reached target sockets.target - Sockets.
Sep 12 18:07:12.286629 systemd[1690]: Reached target basic.target - Basic System.
Sep 12 18:07:12.286681 systemd[1690]: Reached target default.target - Main User Target.
Sep 12 18:07:12.286724 systemd[1690]: Startup finished in 180ms.
Sep 12 18:07:12.286944 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 18:07:12.308152 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 18:07:12.380480 systemd[1]: Started sshd@1-137.184.114.151:22-139.178.89.65:42958.service - OpenSSH per-connection server daemon (139.178.89.65:42958).
Sep 12 18:07:12.450427 sshd[1701]: Accepted publickey for core from 139.178.89.65 port 42958 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:07:12.452641 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:07:12.459853 systemd-logind[1498]: New session 2 of user core.
Sep 12 18:07:12.465311 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 18:07:12.528970 sshd[1704]: Connection closed by 139.178.89.65 port 42958
Sep 12 18:07:12.529709 sshd-session[1701]: pam_unix(sshd:session): session closed for user core
Sep 12 18:07:12.545226 systemd[1]: sshd@1-137.184.114.151:22-139.178.89.65:42958.service: Deactivated successfully.
Sep 12 18:07:12.547959 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 18:07:12.549174 systemd-logind[1498]: Session 2 logged out. Waiting for processes to exit.
Sep 12 18:07:12.553943 systemd[1]: Started sshd@2-137.184.114.151:22-139.178.89.65:42966.service - OpenSSH per-connection server daemon (139.178.89.65:42966).
Sep 12 18:07:12.554720 systemd-logind[1498]: Removed session 2.
Sep 12 18:07:12.625154 sshd[1710]: Accepted publickey for core from 139.178.89.65 port 42966 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:07:12.626695 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:07:12.632343 systemd-logind[1498]: New session 3 of user core.
Sep 12 18:07:12.645314 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 18:07:12.702503 sshd[1713]: Connection closed by 139.178.89.65 port 42966
Sep 12 18:07:12.703222 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
Sep 12 18:07:12.712838 systemd[1]: sshd@2-137.184.114.151:22-139.178.89.65:42966.service: Deactivated successfully.
Sep 12 18:07:12.715589 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 18:07:12.717470 systemd-logind[1498]: Session 3 logged out. Waiting for processes to exit.
Sep 12 18:07:12.720076 systemd[1]: Started sshd@3-137.184.114.151:22-139.178.89.65:42972.service - OpenSSH per-connection server daemon (139.178.89.65:42972).
Sep 12 18:07:12.722678 systemd-logind[1498]: Removed session 3.
Sep 12 18:07:12.786134 sshd[1719]: Accepted publickey for core from 139.178.89.65 port 42972 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:07:12.788096 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:07:12.794222 systemd-logind[1498]: New session 4 of user core.
Sep 12 18:07:12.803371 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 18:07:12.864876 sshd[1722]: Connection closed by 139.178.89.65 port 42972
Sep 12 18:07:12.865678 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
Sep 12 18:07:12.878910 systemd[1]: sshd@3-137.184.114.151:22-139.178.89.65:42972.service: Deactivated successfully.
Sep 12 18:07:12.881471 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 18:07:12.882260 systemd-logind[1498]: Session 4 logged out. Waiting for processes to exit.
Sep 12 18:07:12.885870 systemd[1]: Started sshd@4-137.184.114.151:22-139.178.89.65:42986.service - OpenSSH per-connection server daemon (139.178.89.65:42986).
Sep 12 18:07:12.886560 systemd-logind[1498]: Removed session 4.
Sep 12 18:07:12.947389 sshd[1728]: Accepted publickey for core from 139.178.89.65 port 42986 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:07:12.948866 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:07:12.955143 systemd-logind[1498]: New session 5 of user core.
Sep 12 18:07:12.964350 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 18:07:13.034763 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 18:07:13.035109 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 18:07:13.050014 sudo[1732]: pam_unix(sudo:session): session closed for user root
Sep 12 18:07:13.055165 sshd[1731]: Connection closed by 139.178.89.65 port 42986
Sep 12 18:07:13.053969 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Sep 12 18:07:13.065549 systemd[1]: sshd@4-137.184.114.151:22-139.178.89.65:42986.service: Deactivated successfully.
Sep 12 18:07:13.068285 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 18:07:13.069720 systemd-logind[1498]: Session 5 logged out. Waiting for processes to exit.
Sep 12 18:07:13.073711 systemd[1]: Started sshd@5-137.184.114.151:22-139.178.89.65:42990.service - OpenSSH per-connection server daemon (139.178.89.65:42990).
Sep 12 18:07:13.075112 systemd-logind[1498]: Removed session 5.
Sep 12 18:07:13.131865 sshd[1738]: Accepted publickey for core from 139.178.89.65 port 42990 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:07:13.133528 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:07:13.138875 systemd-logind[1498]: New session 6 of user core.
Sep 12 18:07:13.148367 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 18:07:13.207650 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 18:07:13.208083 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 18:07:13.213115 sudo[1743]: pam_unix(sudo:session): session closed for user root
Sep 12 18:07:13.219624 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 18:07:13.220350 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 18:07:13.232342 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 18:07:13.286691 augenrules[1765]: No rules
Sep 12 18:07:13.288694 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 18:07:13.289348 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 18:07:13.291227 sudo[1742]: pam_unix(sudo:session): session closed for user root
Sep 12 18:07:13.294346 sshd[1741]: Connection closed by 139.178.89.65 port 42990
Sep 12 18:07:13.295000 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Sep 12 18:07:13.307826 systemd[1]: sshd@5-137.184.114.151:22-139.178.89.65:42990.service: Deactivated successfully.
Sep 12 18:07:13.311043 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 18:07:13.312353 systemd-logind[1498]: Session 6 logged out. Waiting for processes to exit.
Sep 12 18:07:13.317647 systemd[1]: Started sshd@6-137.184.114.151:22-139.178.89.65:43004.service - OpenSSH per-connection server daemon (139.178.89.65:43004).
Sep 12 18:07:13.319210 systemd-logind[1498]: Removed session 6.
Sep 12 18:07:13.384752 sshd[1774]: Accepted publickey for core from 139.178.89.65 port 43004 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:07:13.386231 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:07:13.393140 systemd-logind[1498]: New session 7 of user core.
Sep 12 18:07:13.402416 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 18:07:13.464334 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 18:07:13.464848 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 18:07:13.913717 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 18:07:13.937585 (dockerd)[1795]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 18:07:14.265727 dockerd[1795]: time="2025-09-12T18:07:14.264979857Z" level=info msg="Starting up"
Sep 12 18:07:14.267950 dockerd[1795]: time="2025-09-12T18:07:14.267917153Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 18:07:14.287171 dockerd[1795]: time="2025-09-12T18:07:14.287020800Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 12 18:07:14.306178 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1110998446-merged.mount: Deactivated successfully.
Sep 12 18:07:14.376569 dockerd[1795]: time="2025-09-12T18:07:14.376319427Z" level=info msg="Loading containers: start."
Sep 12 18:07:14.390064 kernel: Initializing XFRM netlink socket
Sep 12 18:07:14.672935 systemd-networkd[1452]: docker0: Link UP
Sep 12 18:07:15.392739 systemd-resolved[1402]: Clock change detected. Flushing caches.
Sep 12 18:07:15.392833 systemd-timesyncd[1410]: Contacted time server 45.77.126.122:123 (0.flatcar.pool.ntp.org).
Sep 12 18:07:15.393834 systemd-timesyncd[1410]: Initial clock synchronization to Fri 2025-09-12 18:07:15.392612 UTC.
Sep 12 18:07:15.395345 dockerd[1795]: time="2025-09-12T18:07:15.395261902Z" level=info msg="Loading containers: done."
Sep 12 18:07:15.413209 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1307224032-merged.mount: Deactivated successfully.
Sep 12 18:07:15.415350 dockerd[1795]: time="2025-09-12T18:07:15.415280751Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 18:07:15.415488 dockerd[1795]: time="2025-09-12T18:07:15.415377512Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 12 18:07:15.415488 dockerd[1795]: time="2025-09-12T18:07:15.415473322Z" level=info msg="Initializing buildkit"
Sep 12 18:07:15.437849 dockerd[1795]: time="2025-09-12T18:07:15.437796933Z" level=info msg="Completed buildkit initialization"
Sep 12 18:07:15.443687 dockerd[1795]: time="2025-09-12T18:07:15.443629614Z" level=info msg="Daemon has completed initialization"
Sep 12 18:07:15.444061 dockerd[1795]: time="2025-09-12T18:07:15.443818018Z" level=info msg="API listen on /run/docker.sock"
Sep 12 18:07:15.444302 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 18:07:16.340290 containerd[1525]: time="2025-09-12T18:07:16.340233486Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 18:07:16.900903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339885717.mount: Deactivated successfully.
Sep 12 18:07:18.099052 containerd[1525]: time="2025-09-12T18:07:18.098761341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:18.099443 containerd[1525]: time="2025-09-12T18:07:18.099372905Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Sep 12 18:07:18.100226 containerd[1525]: time="2025-09-12T18:07:18.100191508Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:18.102926 containerd[1525]: time="2025-09-12T18:07:18.102869366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:18.104034 containerd[1525]: time="2025-09-12T18:07:18.103711449Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.763439018s"
Sep 12 18:07:18.104034 containerd[1525]: time="2025-09-12T18:07:18.103748540Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 12 18:07:18.104331 containerd[1525]: time="2025-09-12T18:07:18.104307025Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 12 18:07:19.539169 containerd[1525]: time="2025-09-12T18:07:19.539104658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:19.540564 containerd[1525]: time="2025-09-12T18:07:19.540518451Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Sep 12 18:07:19.540925 containerd[1525]: time="2025-09-12T18:07:19.540896191Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:19.544290 containerd[1525]: time="2025-09-12T18:07:19.544223827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:19.545154 containerd[1525]: time="2025-09-12T18:07:19.545008745Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.440655405s"
Sep 12 18:07:19.545154 containerd[1525]: time="2025-09-12T18:07:19.545068166Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 12 18:07:19.546149 containerd[1525]: time="2025-09-12T18:07:19.546119355Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 12 18:07:20.715701 containerd[1525]: time="2025-09-12T18:07:20.715606460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:20.716825 containerd[1525]: time="2025-09-12T18:07:20.716777262Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Sep 12 18:07:20.717802 containerd[1525]: time="2025-09-12T18:07:20.717760468Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:20.720554 containerd[1525]: time="2025-09-12T18:07:20.720505667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:20.722795 containerd[1525]: time="2025-09-12T18:07:20.722725424Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.176571843s"
Sep 12 18:07:20.722795 containerd[1525]: time="2025-09-12T18:07:20.722772910Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 12 18:07:20.723501 containerd[1525]: time="2025-09-12T18:07:20.723427709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 12 18:07:21.159084 systemd-resolved[1402]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Sep 12 18:07:21.901666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851599375.mount: Deactivated successfully.
Sep 12 18:07:22.186487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 18:07:22.191685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 18:07:22.365223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 18:07:22.376673 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 18:07:22.455481 kubelet[2097]: E0912 18:07:22.454811 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 18:07:22.460772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 18:07:22.460952 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 18:07:22.461859 systemd[1]: kubelet.service: Consumed 201ms CPU time, 109.2M memory peak.
Sep 12 18:07:22.625184 containerd[1525]: time="2025-09-12T18:07:22.625116543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:22.626052 containerd[1525]: time="2025-09-12T18:07:22.625858065Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Sep 12 18:07:22.626687 containerd[1525]: time="2025-09-12T18:07:22.626646560Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:22.628350 containerd[1525]: time="2025-09-12T18:07:22.628319518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:22.629139 containerd[1525]: time="2025-09-12T18:07:22.629101142Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.905631172s"
Sep 12 18:07:22.629139 containerd[1525]: time="2025-09-12T18:07:22.629134615Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 12 18:07:22.629984 containerd[1525]: time="2025-09-12T18:07:22.629955505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 12 18:07:23.150239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3179769457.mount: Deactivated successfully.
Sep 12 18:07:24.101687 containerd[1525]: time="2025-09-12T18:07:24.101621572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:24.103081 containerd[1525]: time="2025-09-12T18:07:24.102866232Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 12 18:07:24.103695 containerd[1525]: time="2025-09-12T18:07:24.103653598Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:24.107068 containerd[1525]: time="2025-09-12T18:07:24.106721083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:07:24.108203 containerd[1525]: time="2025-09-12T18:07:24.108078824Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.478092997s"
Sep 12 18:07:24.108203 containerd[1525]: time="2025-09-12T18:07:24.108113671Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 12 18:07:24.109647 containerd[1525]: time="2025-09-12T18:07:24.109602372Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 18:07:24.212321 systemd-resolved[1402]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Sep 12 18:07:24.516738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1709252771.mount: Deactivated successfully.
Sep 12 18:07:24.521077 containerd[1525]: time="2025-09-12T18:07:24.520998285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 18:07:24.522065 containerd[1525]: time="2025-09-12T18:07:24.522032478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 18:07:24.522815 containerd[1525]: time="2025-09-12T18:07:24.522777008Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 18:07:24.525381 containerd[1525]: time="2025-09-12T18:07:24.525348996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 18:07:24.526664 containerd[1525]: time="2025-09-12T18:07:24.526611626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 416.962586ms" Sep 12 18:07:24.526664 containerd[1525]: time="2025-09-12T18:07:24.526648334Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 18:07:24.527453 containerd[1525]: time="2025-09-12T18:07:24.527099297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 18:07:25.068575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3253890636.mount: Deactivated successfully. Sep 12 18:07:26.976777 containerd[1525]: time="2025-09-12T18:07:26.976716667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:07:26.980450 containerd[1525]: time="2025-09-12T18:07:26.980385326Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 12 18:07:26.982549 containerd[1525]: time="2025-09-12T18:07:26.982499915Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:07:26.987363 containerd[1525]: time="2025-09-12T18:07:26.986113948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:07:26.987363 containerd[1525]: time="2025-09-12T18:07:26.987167684Z" level=info 
msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.460027777s" Sep 12 18:07:26.987363 containerd[1525]: time="2025-09-12T18:07:26.987215308Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 12 18:07:30.717507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:07:30.718267 systemd[1]: kubelet.service: Consumed 201ms CPU time, 109.2M memory peak. Sep 12 18:07:30.720927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:07:30.754521 systemd[1]: Reload requested from client PID 2243 ('systemctl') (unit session-7.scope)... Sep 12 18:07:30.754736 systemd[1]: Reloading... Sep 12 18:07:30.928613 zram_generator::config[2295]: No configuration found. Sep 12 18:07:31.179178 systemd[1]: Reloading finished in 423 ms. Sep 12 18:07:31.242913 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 18:07:31.243007 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 18:07:31.243348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:07:31.243402 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.1M memory peak. Sep 12 18:07:31.245155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:07:31.432935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 18:07:31.446667 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 18:07:31.495724 kubelet[2340]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 18:07:31.495724 kubelet[2340]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 18:07:31.495724 kubelet[2340]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 18:07:31.496925 kubelet[2340]: I0912 18:07:31.496854 2340 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 18:07:32.928919 kubelet[2340]: I0912 18:07:32.928857 2340 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 18:07:32.928919 kubelet[2340]: I0912 18:07:32.928898 2340 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 18:07:32.929368 kubelet[2340]: I0912 18:07:32.929185 2340 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 18:07:32.954468 kubelet[2340]: I0912 18:07:32.953401 2340 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 18:07:32.956082 kubelet[2340]: E0912 18:07:32.956044 2340 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://137.184.114.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
137.184.114.151:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 18:07:32.968331 kubelet[2340]: I0912 18:07:32.968235 2340 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 18:07:32.979766 kubelet[2340]: I0912 18:07:32.979718 2340 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 18:07:32.980879 kubelet[2340]: I0912 18:07:32.980821 2340 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 18:07:32.982717 kubelet[2340]: I0912 18:07:32.980875 2340 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.1.0-6-3761596165","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManag
erScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 18:07:32.982717 kubelet[2340]: I0912 18:07:32.982715 2340 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 18:07:32.983067 kubelet[2340]: I0912 18:07:32.982732 2340 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 18:07:32.983691 kubelet[2340]: I0912 18:07:32.983660 2340 state_mem.go:36] "Initialized new in-memory state store" Sep 12 18:07:32.985906 kubelet[2340]: I0912 18:07:32.985885 2340 kubelet.go:480] "Attempting to sync node with API server" Sep 12 18:07:32.987235 kubelet[2340]: I0912 18:07:32.986878 2340 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 18:07:32.987235 kubelet[2340]: I0912 18:07:32.986922 2340 kubelet.go:386] "Adding apiserver pod source" Sep 12 18:07:32.987235 kubelet[2340]: I0912 18:07:32.986939 2340 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 18:07:32.994627 kubelet[2340]: E0912 18:07:32.994379 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://137.184.114.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-6-3761596165&limit=500&resourceVersion=0\": dial tcp 137.184.114.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 18:07:32.998596 kubelet[2340]: E0912 18:07:32.998542 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://137.184.114.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.114.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Sep 12 18:07:32.998932 kubelet[2340]: I0912 18:07:32.998892 2340 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 18:07:32.999700 kubelet[2340]: I0912 18:07:32.999665 2340 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 18:07:33.000358 kubelet[2340]: W0912 18:07:33.000339 2340 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 18:07:33.004956 kubelet[2340]: I0912 18:07:33.004923 2340 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 18:07:33.005242 kubelet[2340]: I0912 18:07:33.005229 2340 server.go:1289] "Started kubelet" Sep 12 18:07:33.008634 kubelet[2340]: I0912 18:07:33.008585 2340 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 18:07:33.013932 kubelet[2340]: E0912 18:07:33.009861 2340 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.114.151:6443/api/v1/namespaces/default/events\": dial tcp 137.184.114.151:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.1.0-6-3761596165.18649b3fd2ea75b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.1.0-6-3761596165,UID:ci-4426.1.0-6-3761596165,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.1.0-6-3761596165,},FirstTimestamp:2025-09-12 18:07:33.005161904 +0000 UTC m=+1.552609666,LastTimestamp:2025-09-12 18:07:33.005161904 +0000 UTC m=+1.552609666,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.1.0-6-3761596165,}" Sep 12 18:07:33.013932 kubelet[2340]: I0912 18:07:33.013147 2340 
server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 18:07:33.018944 kubelet[2340]: I0912 18:07:33.018908 2340 server.go:317] "Adding debug handlers to kubelet server" Sep 12 18:07:33.022091 kubelet[2340]: I0912 18:07:33.022064 2340 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 18:07:33.022757 kubelet[2340]: E0912 18:07:33.022411 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-6-3761596165\" not found" Sep 12 18:07:33.024914 kubelet[2340]: I0912 18:07:33.024886 2340 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 18:07:33.025011 kubelet[2340]: I0912 18:07:33.024959 2340 reconciler.go:26] "Reconciler: start to sync state" Sep 12 18:07:33.025391 kubelet[2340]: I0912 18:07:33.025326 2340 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 18:07:33.025644 kubelet[2340]: I0912 18:07:33.025627 2340 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 18:07:33.025884 kubelet[2340]: I0912 18:07:33.025870 2340 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 18:07:33.027842 kubelet[2340]: E0912 18:07:33.027808 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.114.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-6-3761596165?timeout=10s\": dial tcp 137.184.114.151:6443: connect: connection refused" interval="200ms" Sep 12 18:07:33.028523 kubelet[2340]: E0912 18:07:33.028011 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://137.184.114.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.114.151:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 18:07:33.029044 kubelet[2340]: I0912 18:07:33.028849 2340 factory.go:223] Registration of the systemd container factory successfully Sep 12 18:07:33.029044 kubelet[2340]: I0912 18:07:33.028989 2340 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 18:07:33.030206 kubelet[2340]: E0912 18:07:33.030087 2340 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 18:07:33.031011 kubelet[2340]: I0912 18:07:33.030996 2340 factory.go:223] Registration of the containerd container factory successfully Sep 12 18:07:33.054435 kubelet[2340]: I0912 18:07:33.053953 2340 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 18:07:33.054435 kubelet[2340]: I0912 18:07:33.053978 2340 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 18:07:33.054435 kubelet[2340]: I0912 18:07:33.054079 2340 state_mem.go:36] "Initialized new in-memory state store" Sep 12 18:07:33.059820 kubelet[2340]: I0912 18:07:33.059626 2340 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 18:07:33.061577 kubelet[2340]: I0912 18:07:33.061253 2340 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 18:07:33.061577 kubelet[2340]: I0912 18:07:33.061288 2340 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 18:07:33.061577 kubelet[2340]: I0912 18:07:33.061334 2340 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 18:07:33.061577 kubelet[2340]: I0912 18:07:33.061342 2340 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 18:07:33.061577 kubelet[2340]: E0912 18:07:33.061384 2340 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 18:07:33.061864 kubelet[2340]: I0912 18:07:33.061852 2340 policy_none.go:49] "None policy: Start" Sep 12 18:07:33.062568 kubelet[2340]: I0912 18:07:33.062303 2340 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 18:07:33.062568 kubelet[2340]: I0912 18:07:33.062324 2340 state_mem.go:35] "Initializing new in-memory state store" Sep 12 18:07:33.066814 kubelet[2340]: E0912 18:07:33.066780 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://137.184.114.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.114.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 18:07:33.071330 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 18:07:33.083578 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 18:07:33.087911 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 18:07:33.096507 kubelet[2340]: E0912 18:07:33.096467 2340 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 18:07:33.096828 kubelet[2340]: I0912 18:07:33.096742 2340 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 18:07:33.096828 kubelet[2340]: I0912 18:07:33.096766 2340 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 18:07:33.097830 kubelet[2340]: I0912 18:07:33.097127 2340 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 18:07:33.098368 kubelet[2340]: E0912 18:07:33.098342 2340 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 18:07:33.098712 kubelet[2340]: E0912 18:07:33.098699 2340 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.1.0-6-3761596165\" not found" Sep 12 18:07:33.173874 systemd[1]: Created slice kubepods-burstable-pod486b7d06fe5b6899f4a1f0d181ff196c.slice - libcontainer container kubepods-burstable-pod486b7d06fe5b6899f4a1f0d181ff196c.slice. Sep 12 18:07:33.192205 kubelet[2340]: E0912 18:07:33.191159 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-6-3761596165\" not found" node="ci-4426.1.0-6-3761596165" Sep 12 18:07:33.195719 systemd[1]: Created slice kubepods-burstable-pod148800094a17d8942a6585cc2c794f95.slice - libcontainer container kubepods-burstable-pod148800094a17d8942a6585cc2c794f95.slice. 
Sep 12 18:07:33.198327 kubelet[2340]: I0912 18:07:33.198296 2340 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-6-3761596165" Sep 12 18:07:33.199047 kubelet[2340]: E0912 18:07:33.198790 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.114.151:6443/api/v1/nodes\": dial tcp 137.184.114.151:6443: connect: connection refused" node="ci-4426.1.0-6-3761596165" Sep 12 18:07:33.200108 kubelet[2340]: E0912 18:07:33.200043 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-6-3761596165\" not found" node="ci-4426.1.0-6-3761596165" Sep 12 18:07:33.203043 systemd[1]: Created slice kubepods-burstable-podd7b2ac560e826285b92094f58846c19e.slice - libcontainer container kubepods-burstable-podd7b2ac560e826285b92094f58846c19e.slice. Sep 12 18:07:33.205029 kubelet[2340]: E0912 18:07:33.204991 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-6-3761596165\" not found" node="ci-4426.1.0-6-3761596165" Sep 12 18:07:33.226385 kubelet[2340]: I0912 18:07:33.226280 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7b2ac560e826285b92094f58846c19e-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-6-3761596165\" (UID: \"d7b2ac560e826285b92094f58846c19e\") " pod="kube-system/kube-scheduler-ci-4426.1.0-6-3761596165" Sep 12 18:07:33.226715 kubelet[2340]: I0912 18:07:33.226341 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/486b7d06fe5b6899f4a1f0d181ff196c-k8s-certs\") pod \"kube-apiserver-ci-4426.1.0-6-3761596165\" (UID: \"486b7d06fe5b6899f4a1f0d181ff196c\") " pod="kube-system/kube-apiserver-ci-4426.1.0-6-3761596165" Sep 12 18:07:33.226715 kubelet[2340]: I0912 
18:07:33.226596 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/486b7d06fe5b6899f4a1f0d181ff196c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-6-3761596165\" (UID: \"486b7d06fe5b6899f4a1f0d181ff196c\") " pod="kube-system/kube-apiserver-ci-4426.1.0-6-3761596165" Sep 12 18:07:33.226715 kubelet[2340]: I0912 18:07:33.226645 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/148800094a17d8942a6585cc2c794f95-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-6-3761596165\" (UID: \"148800094a17d8942a6585cc2c794f95\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165" Sep 12 18:07:33.226715 kubelet[2340]: I0912 18:07:33.226674 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/486b7d06fe5b6899f4a1f0d181ff196c-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-6-3761596165\" (UID: \"486b7d06fe5b6899f4a1f0d181ff196c\") " pod="kube-system/kube-apiserver-ci-4426.1.0-6-3761596165" Sep 12 18:07:33.226715 kubelet[2340]: I0912 18:07:33.226695 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/148800094a17d8942a6585cc2c794f95-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-6-3761596165\" (UID: \"148800094a17d8942a6585cc2c794f95\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165" Sep 12 18:07:33.227037 kubelet[2340]: I0912 18:07:33.226940 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/148800094a17d8942a6585cc2c794f95-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-6-3761596165\" (UID: 
\"148800094a17d8942a6585cc2c794f95\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165" Sep 12 18:07:33.227037 kubelet[2340]: I0912 18:07:33.226966 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/148800094a17d8942a6585cc2c794f95-kubeconfig\") pod \"kube-controller-manager-ci-4426.1.0-6-3761596165\" (UID: \"148800094a17d8942a6585cc2c794f95\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165" Sep 12 18:07:33.227037 kubelet[2340]: I0912 18:07:33.226995 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/148800094a17d8942a6585cc2c794f95-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.1.0-6-3761596165\" (UID: \"148800094a17d8942a6585cc2c794f95\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165" Sep 12 18:07:33.228818 kubelet[2340]: E0912 18:07:33.228782 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.114.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-6-3761596165?timeout=10s\": dial tcp 137.184.114.151:6443: connect: connection refused" interval="400ms" Sep 12 18:07:33.399906 kubelet[2340]: I0912 18:07:33.399844 2340 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-6-3761596165" Sep 12 18:07:33.400328 kubelet[2340]: E0912 18:07:33.400288 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.114.151:6443/api/v1/nodes\": dial tcp 137.184.114.151:6443: connect: connection refused" node="ci-4426.1.0-6-3761596165" Sep 12 18:07:33.492789 kubelet[2340]: E0912 18:07:33.492648 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:33.493481 containerd[1525]: time="2025-09-12T18:07:33.493352360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-6-3761596165,Uid:486b7d06fe5b6899f4a1f0d181ff196c,Namespace:kube-system,Attempt:0,}" Sep 12 18:07:33.500887 kubelet[2340]: E0912 18:07:33.500832 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:33.501911 containerd[1525]: time="2025-09-12T18:07:33.501684146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-6-3761596165,Uid:148800094a17d8942a6585cc2c794f95,Namespace:kube-system,Attempt:0,}" Sep 12 18:07:33.506536 kubelet[2340]: E0912 18:07:33.506483 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:33.507058 containerd[1525]: time="2025-09-12T18:07:33.507002385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-6-3761596165,Uid:d7b2ac560e826285b92094f58846c19e,Namespace:kube-system,Attempt:0,}" Sep 12 18:07:33.625425 containerd[1525]: time="2025-09-12T18:07:33.625321196Z" level=info msg="connecting to shim 9d5b6deba1971913a84051bb0582b8f577d55fd5800fb8cc53c34ef3e4cb5221" address="unix:///run/containerd/s/05dfcc561d4313baf5310f6014e8be2320ea8a04429c4038e57bdbd2b187fb2a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:07:33.630412 kubelet[2340]: E0912 18:07:33.630238 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.114.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-6-3761596165?timeout=10s\": dial tcp 137.184.114.151:6443: connect: connection refused" interval="800ms" Sep 12 18:07:33.632665 
containerd[1525]: time="2025-09-12T18:07:33.632401141Z" level=info msg="connecting to shim 56d5cc7d6443a6fd564325bc69816da48a492b921604b8942ea48e0d0dc48fc1" address="unix:///run/containerd/s/8ea773832d4431367dcfc10988068dec13f323a30c6a20dbd27fb4e77b78544d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:07:33.637457 containerd[1525]: time="2025-09-12T18:07:33.637401090Z" level=info msg="connecting to shim 370d1766a8f035ffdf879e4bd7bf9fd96da096aeffe7474c733b41d2304dc7bf" address="unix:///run/containerd/s/deb80e51abc6b233bbec2cb2852a80c86fded562f67fbd4340af53557aa104f0" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:07:33.747740 systemd[1]: Started cri-containerd-370d1766a8f035ffdf879e4bd7bf9fd96da096aeffe7474c733b41d2304dc7bf.scope - libcontainer container 370d1766a8f035ffdf879e4bd7bf9fd96da096aeffe7474c733b41d2304dc7bf. Sep 12 18:07:33.751399 systemd[1]: Started cri-containerd-56d5cc7d6443a6fd564325bc69816da48a492b921604b8942ea48e0d0dc48fc1.scope - libcontainer container 56d5cc7d6443a6fd564325bc69816da48a492b921604b8942ea48e0d0dc48fc1. Sep 12 18:07:33.754716 systemd[1]: Started cri-containerd-9d5b6deba1971913a84051bb0582b8f577d55fd5800fb8cc53c34ef3e4cb5221.scope - libcontainer container 9d5b6deba1971913a84051bb0582b8f577d55fd5800fb8cc53c34ef3e4cb5221. 
Sep 12 18:07:33.802760 kubelet[2340]: I0912 18:07:33.802619 2340 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:33.804722 kubelet[2340]: E0912 18:07:33.804661 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.114.151:6443/api/v1/nodes\": dial tcp 137.184.114.151:6443: connect: connection refused" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:33.857201 containerd[1525]: time="2025-09-12T18:07:33.857114393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-6-3761596165,Uid:486b7d06fe5b6899f4a1f0d181ff196c,Namespace:kube-system,Attempt:0,} returns sandbox id \"370d1766a8f035ffdf879e4bd7bf9fd96da096aeffe7474c733b41d2304dc7bf\""
Sep 12 18:07:33.859495 kubelet[2340]: E0912 18:07:33.859427 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:33.865408 containerd[1525]: time="2025-09-12T18:07:33.865311891Z" level=info msg="CreateContainer within sandbox \"370d1766a8f035ffdf879e4bd7bf9fd96da096aeffe7474c733b41d2304dc7bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 18:07:33.875920 containerd[1525]: time="2025-09-12T18:07:33.875827399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-6-3761596165,Uid:148800094a17d8942a6585cc2c794f95,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d5b6deba1971913a84051bb0582b8f577d55fd5800fb8cc53c34ef3e4cb5221\""
Sep 12 18:07:33.883160 kubelet[2340]: E0912 18:07:33.883058 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:33.890059 containerd[1525]: time="2025-09-12T18:07:33.889422811Z" level=info msg="CreateContainer within sandbox \"9d5b6deba1971913a84051bb0582b8f577d55fd5800fb8cc53c34ef3e4cb5221\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 18:07:33.892393 kubelet[2340]: E0912 18:07:33.892347 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://137.184.114.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.114.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 12 18:07:33.898789 containerd[1525]: time="2025-09-12T18:07:33.898728592Z" level=info msg="Container 978f1d8bdad55513e87cd9fa12f3ac5f3f398b61f62f0532d6df8eb507c5f06b: CDI devices from CRI Config.CDIDevices: []"
Sep 12 18:07:33.906798 containerd[1525]: time="2025-09-12T18:07:33.906400371Z" level=info msg="Container 6c410261b21c2aee827d8ad438d2100ecc1249bfbad3ba163b2b76d1299d2c43: CDI devices from CRI Config.CDIDevices: []"
Sep 12 18:07:33.912135 containerd[1525]: time="2025-09-12T18:07:33.912080449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-6-3761596165,Uid:d7b2ac560e826285b92094f58846c19e,Namespace:kube-system,Attempt:0,} returns sandbox id \"56d5cc7d6443a6fd564325bc69816da48a492b921604b8942ea48e0d0dc48fc1\""
Sep 12 18:07:33.913041 kubelet[2340]: E0912 18:07:33.912995 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:33.916129 containerd[1525]: time="2025-09-12T18:07:33.916093150Z" level=info msg="CreateContainer within sandbox \"56d5cc7d6443a6fd564325bc69816da48a492b921604b8942ea48e0d0dc48fc1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 18:07:33.918780 containerd[1525]: time="2025-09-12T18:07:33.918726919Z" level=info msg="CreateContainer within sandbox \"370d1766a8f035ffdf879e4bd7bf9fd96da096aeffe7474c733b41d2304dc7bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"978f1d8bdad55513e87cd9fa12f3ac5f3f398b61f62f0532d6df8eb507c5f06b\""
Sep 12 18:07:33.920617 containerd[1525]: time="2025-09-12T18:07:33.920328286Z" level=info msg="StartContainer for \"978f1d8bdad55513e87cd9fa12f3ac5f3f398b61f62f0532d6df8eb507c5f06b\""
Sep 12 18:07:33.922668 containerd[1525]: time="2025-09-12T18:07:33.922619396Z" level=info msg="CreateContainer within sandbox \"9d5b6deba1971913a84051bb0582b8f577d55fd5800fb8cc53c34ef3e4cb5221\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6c410261b21c2aee827d8ad438d2100ecc1249bfbad3ba163b2b76d1299d2c43\""
Sep 12 18:07:33.923543 containerd[1525]: time="2025-09-12T18:07:33.923496522Z" level=info msg="connecting to shim 978f1d8bdad55513e87cd9fa12f3ac5f3f398b61f62f0532d6df8eb507c5f06b" address="unix:///run/containerd/s/deb80e51abc6b233bbec2cb2852a80c86fded562f67fbd4340af53557aa104f0" protocol=ttrpc version=3
Sep 12 18:07:33.924812 containerd[1525]: time="2025-09-12T18:07:33.923794337Z" level=info msg="StartContainer for \"6c410261b21c2aee827d8ad438d2100ecc1249bfbad3ba163b2b76d1299d2c43\""
Sep 12 18:07:33.927040 containerd[1525]: time="2025-09-12T18:07:33.926525104Z" level=info msg="Container 883ddf72e7fd6032ec409249d0b8b02b3e5a52cab47a43a79018aa7690674ceb: CDI devices from CRI Config.CDIDevices: []"
Sep 12 18:07:33.927416 containerd[1525]: time="2025-09-12T18:07:33.927388077Z" level=info msg="connecting to shim 6c410261b21c2aee827d8ad438d2100ecc1249bfbad3ba163b2b76d1299d2c43" address="unix:///run/containerd/s/05dfcc561d4313baf5310f6014e8be2320ea8a04429c4038e57bdbd2b187fb2a" protocol=ttrpc version=3
Sep 12 18:07:33.939008 containerd[1525]: time="2025-09-12T18:07:33.938958756Z" level=info msg="CreateContainer within sandbox \"56d5cc7d6443a6fd564325bc69816da48a492b921604b8942ea48e0d0dc48fc1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"883ddf72e7fd6032ec409249d0b8b02b3e5a52cab47a43a79018aa7690674ceb\""
Sep 12 18:07:33.940776 containerd[1525]: time="2025-09-12T18:07:33.940740486Z" level=info msg="StartContainer for \"883ddf72e7fd6032ec409249d0b8b02b3e5a52cab47a43a79018aa7690674ceb\""
Sep 12 18:07:33.944084 containerd[1525]: time="2025-09-12T18:07:33.944043077Z" level=info msg="connecting to shim 883ddf72e7fd6032ec409249d0b8b02b3e5a52cab47a43a79018aa7690674ceb" address="unix:///run/containerd/s/8ea773832d4431367dcfc10988068dec13f323a30c6a20dbd27fb4e77b78544d" protocol=ttrpc version=3
Sep 12 18:07:33.958280 systemd[1]: Started cri-containerd-978f1d8bdad55513e87cd9fa12f3ac5f3f398b61f62f0532d6df8eb507c5f06b.scope - libcontainer container 978f1d8bdad55513e87cd9fa12f3ac5f3f398b61f62f0532d6df8eb507c5f06b.
Sep 12 18:07:33.972284 systemd[1]: Started cri-containerd-6c410261b21c2aee827d8ad438d2100ecc1249bfbad3ba163b2b76d1299d2c43.scope - libcontainer container 6c410261b21c2aee827d8ad438d2100ecc1249bfbad3ba163b2b76d1299d2c43.
Sep 12 18:07:33.993321 systemd[1]: Started cri-containerd-883ddf72e7fd6032ec409249d0b8b02b3e5a52cab47a43a79018aa7690674ceb.scope - libcontainer container 883ddf72e7fd6032ec409249d0b8b02b3e5a52cab47a43a79018aa7690674ceb.
Sep 12 18:07:34.074863 containerd[1525]: time="2025-09-12T18:07:34.073803517Z" level=info msg="StartContainer for \"978f1d8bdad55513e87cd9fa12f3ac5f3f398b61f62f0532d6df8eb507c5f06b\" returns successfully"
Sep 12 18:07:34.095721 containerd[1525]: time="2025-09-12T18:07:34.095589397Z" level=info msg="StartContainer for \"6c410261b21c2aee827d8ad438d2100ecc1249bfbad3ba163b2b76d1299d2c43\" returns successfully"
Sep 12 18:07:34.129377 containerd[1525]: time="2025-09-12T18:07:34.129338479Z" level=info msg="StartContainer for \"883ddf72e7fd6032ec409249d0b8b02b3e5a52cab47a43a79018aa7690674ceb\" returns successfully"
Sep 12 18:07:34.166909 kubelet[2340]: E0912 18:07:34.166848 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://137.184.114.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-6-3761596165&limit=500&resourceVersion=0\": dial tcp 137.184.114.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 18:07:34.606713 kubelet[2340]: I0912 18:07:34.606624 2340 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:35.090823 kubelet[2340]: E0912 18:07:35.090518 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-6-3761596165\" not found" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:35.090823 kubelet[2340]: E0912 18:07:35.090657 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:35.094699 kubelet[2340]: E0912 18:07:35.094500 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-6-3761596165\" not found" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:35.094699 kubelet[2340]: E0912 18:07:35.094628 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:35.097163 kubelet[2340]: E0912 18:07:35.097135 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-6-3761596165\" not found" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:35.097448 kubelet[2340]: E0912 18:07:35.097433 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:36.098048 kubelet[2340]: I0912 18:07:36.096029 2340 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:36.098048 kubelet[2340]: E0912 18:07:36.096072 2340 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4426.1.0-6-3761596165\": node \"ci-4426.1.0-6-3761596165\" not found"
Sep 12 18:07:36.100129 kubelet[2340]: E0912 18:07:36.100101 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-6-3761596165\" not found" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:36.101257 kubelet[2340]: E0912 18:07:36.101232 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:36.101478 kubelet[2340]: E0912 18:07:36.101455 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-6-3761596165\" not found" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:36.101617 kubelet[2340]: E0912 18:07:36.101602 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:36.101727 kubelet[2340]: E0912 18:07:36.101714 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-6-3761596165\" not found" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:36.101905 kubelet[2340]: E0912 18:07:36.101888 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:36.121944 kubelet[2340]: E0912 18:07:36.121904 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-6-3761596165\" not found"
Sep 12 18:07:36.222136 kubelet[2340]: E0912 18:07:36.222082 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-6-3761596165\" not found"
Sep 12 18:07:36.322918 kubelet[2340]: E0912 18:07:36.322840 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-6-3761596165\" not found"
Sep 12 18:07:36.423652 kubelet[2340]: E0912 18:07:36.423574 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-6-3761596165\" not found"
Sep 12 18:07:36.523836 kubelet[2340]: E0912 18:07:36.523740 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-6-3761596165\" not found"
Sep 12 18:07:36.624762 kubelet[2340]: E0912 18:07:36.624702 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-6-3761596165\" not found"
Sep 12 18:07:36.725609 kubelet[2340]: E0912 18:07:36.725426 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-6-3761596165\" not found"
Sep 12 18:07:36.826171 kubelet[2340]: E0912 18:07:36.826053 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-6-3761596165\" not found"
Sep 12 18:07:36.927364 kubelet[2340]: E0912 18:07:36.927284 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-6-3761596165\" not found"
Sep 12 18:07:37.028042 kubelet[2340]: E0912 18:07:37.027885 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-6-3761596165\" not found"
Sep 12 18:07:37.101530 kubelet[2340]: I0912 18:07:37.101493 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.1.0-6-3761596165"
Sep 12 18:07:37.111452 kubelet[2340]: I0912 18:07:37.111404 2340 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 12 18:07:37.112431 kubelet[2340]: E0912 18:07:37.112394 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:37.125406 kubelet[2340]: I0912 18:07:37.125360 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-6-3761596165"
Sep 12 18:07:37.136390 kubelet[2340]: I0912 18:07:37.136341 2340 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 12 18:07:37.136611 kubelet[2340]: I0912 18:07:37.136491 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165"
Sep 12 18:07:37.143353 kubelet[2340]: I0912 18:07:37.143315 2340 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 12 18:07:37.143505 kubelet[2340]: I0912 18:07:37.143464 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.1.0-6-3761596165"
Sep 12 18:07:37.149050 kubelet[2340]: I0912 18:07:37.148975 2340 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 12 18:07:37.149216 kubelet[2340]: E0912 18:07:37.149072 2340 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.1.0-6-3761596165\" already exists" pod="kube-system/kube-scheduler-ci-4426.1.0-6-3761596165"
Sep 12 18:07:37.997923 kubelet[2340]: I0912 18:07:37.997611 2340 apiserver.go:52] "Watching apiserver"
Sep 12 18:07:38.000702 kubelet[2340]: E0912 18:07:38.000578 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:38.001046 kubelet[2340]: E0912 18:07:38.000795 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:38.026035 kubelet[2340]: I0912 18:07:38.025973 2340 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 18:07:38.102945 kubelet[2340]: E0912 18:07:38.102912 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:38.407547 systemd[1]: Reload requested from client PID 2623 ('systemctl') (unit session-7.scope)...
Sep 12 18:07:38.407565 systemd[1]: Reloading...
Sep 12 18:07:38.520052 zram_generator::config[2668]: No configuration found.
Sep 12 18:07:38.783050 systemd[1]: Reloading finished in 374 ms.
Sep 12 18:07:38.820092 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 18:07:38.832225 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 18:07:38.832507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 18:07:38.832576 systemd[1]: kubelet.service: Consumed 1.977s CPU time, 127.2M memory peak.
Sep 12 18:07:38.834870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 18:07:39.009849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 18:07:39.021511 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 18:07:39.104070 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 18:07:39.104549 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 18:07:39.104549 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 18:07:39.104549 kubelet[2717]: I0912 18:07:39.104629 2717 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 18:07:39.113741 kubelet[2717]: I0912 18:07:39.113687 2717 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 18:07:39.113929 kubelet[2717]: I0912 18:07:39.113915 2717 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 18:07:39.114341 kubelet[2717]: I0912 18:07:39.114319 2717 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 18:07:39.115843 kubelet[2717]: I0912 18:07:39.115820 2717 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 12 18:07:39.121069 kubelet[2717]: I0912 18:07:39.121034 2717 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 18:07:39.128076 kubelet[2717]: I0912 18:07:39.127987 2717 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 18:07:39.131932 kubelet[2717]: I0912 18:07:39.131902 2717 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 18:07:39.132192 kubelet[2717]: I0912 18:07:39.132160 2717 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 18:07:39.132348 kubelet[2717]: I0912 18:07:39.132187 2717 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.1.0-6-3761596165","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 18:07:39.132459 kubelet[2717]: I0912 18:07:39.132348 2717 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 18:07:39.132459 kubelet[2717]: I0912 18:07:39.132358 2717 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 18:07:39.132459 kubelet[2717]: I0912 18:07:39.132406 2717 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 18:07:39.132590 kubelet[2717]: I0912 18:07:39.132575 2717 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 18:07:39.132639 kubelet[2717]: I0912 18:07:39.132594 2717 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 18:07:39.132639 kubelet[2717]: I0912 18:07:39.132621 2717 kubelet.go:386] "Adding apiserver pod source"
Sep 12 18:07:39.132639 kubelet[2717]: I0912 18:07:39.132635 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 18:07:39.135708 kubelet[2717]: I0912 18:07:39.135675 2717 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 18:07:39.136180 kubelet[2717]: I0912 18:07:39.136145 2717 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 18:07:39.141467 kubelet[2717]: I0912 18:07:39.141439 2717 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 18:07:39.141592 kubelet[2717]: I0912 18:07:39.141535 2717 server.go:1289] "Started kubelet"
Sep 12 18:07:39.149616 kubelet[2717]: I0912 18:07:39.149565 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 18:07:39.160522 kubelet[2717]: I0912 18:07:39.160466 2717 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 18:07:39.162317 kubelet[2717]: I0912 18:07:39.162137 2717 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 18:07:39.170114 kubelet[2717]: I0912 18:07:39.170010 2717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 18:07:39.177135 kubelet[2717]: I0912 18:07:39.171597 2717 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 18:07:39.180259 kubelet[2717]: I0912 18:07:39.175539 2717 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 18:07:39.180259 kubelet[2717]: I0912 18:07:39.175553 2717 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 18:07:39.180259 kubelet[2717]: I0912 18:07:39.178872 2717 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 18:07:39.180259 kubelet[2717]: I0912 18:07:39.178968 2717 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 18:07:39.180823 kubelet[2717]: I0912 18:07:39.180798 2717 factory.go:223] Registration of the systemd container factory successfully
Sep 12 18:07:39.181102 kubelet[2717]: I0912 18:07:39.180989 2717 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 18:07:39.184234 kubelet[2717]: I0912 18:07:39.183982 2717 factory.go:223] Registration of the containerd container factory successfully
Sep 12 18:07:39.194785 kubelet[2717]: E0912 18:07:39.184363 2717 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 18:07:39.218528 kubelet[2717]: I0912 18:07:39.218455 2717 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 18:07:39.220785 kubelet[2717]: I0912 18:07:39.220756 2717 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 18:07:39.220928 kubelet[2717]: I0912 18:07:39.220920 2717 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 18:07:39.220990 kubelet[2717]: I0912 18:07:39.220981 2717 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 18:07:39.221056 kubelet[2717]: I0912 18:07:39.221049 2717 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 18:07:39.221325 kubelet[2717]: E0912 18:07:39.221304 2717 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 18:07:39.255388 kubelet[2717]: I0912 18:07:39.255348 2717 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 18:07:39.255388 kubelet[2717]: I0912 18:07:39.255373 2717 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 18:07:39.255551 kubelet[2717]: I0912 18:07:39.255400 2717 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 18:07:39.255620 kubelet[2717]: I0912 18:07:39.255605 2717 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 18:07:39.255651 kubelet[2717]: I0912 18:07:39.255622 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 18:07:39.255651 kubelet[2717]: I0912 18:07:39.255643 2717 policy_none.go:49] "None policy: Start"
Sep 12 18:07:39.255707 kubelet[2717]: I0912 18:07:39.255655 2717 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 18:07:39.255707 kubelet[2717]: I0912 18:07:39.255667 2717 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 18:07:39.255816 kubelet[2717]: I0912 18:07:39.255801 2717 state_mem.go:75] "Updated machine memory state"
Sep 12 18:07:39.263922 kubelet[2717]: E0912 18:07:39.263083 2717 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 18:07:39.263922 kubelet[2717]: I0912 18:07:39.263514 2717 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 18:07:39.263922 kubelet[2717]: I0912 18:07:39.263529 2717 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 18:07:39.265416 kubelet[2717]: I0912 18:07:39.265397 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 18:07:39.266835 kubelet[2717]: E0912 18:07:39.266812 2717 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 18:07:39.322322 kubelet[2717]: I0912 18:07:39.322286 2717 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.322868 kubelet[2717]: I0912 18:07:39.322833 2717 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.323000 kubelet[2717]: I0912 18:07:39.322657 2717 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.329526 kubelet[2717]: I0912 18:07:39.329483 2717 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 12 18:07:39.329698 kubelet[2717]: E0912 18:07:39.329595 2717 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.1.0-6-3761596165\" already exists" pod="kube-system/kube-scheduler-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.329698 kubelet[2717]: I0912 18:07:39.329493 2717 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 12 18:07:39.329698 kubelet[2717]: E0912 18:07:39.329675 2717 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4426.1.0-6-3761596165\" already exists" pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.330339 kubelet[2717]: I0912 18:07:39.330298 2717 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 12 18:07:39.330515 kubelet[2717]: E0912 18:07:39.330475 2717 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.1.0-6-3761596165\" already exists" pod="kube-system/kube-apiserver-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.373193 kubelet[2717]: I0912 18:07:39.372882 2717 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.385654 kubelet[2717]: I0912 18:07:39.385619 2717 kubelet_node_status.go:124] "Node was previously registered" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.385783 kubelet[2717]: I0912 18:07:39.385716 2717 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.480040 kubelet[2717]: I0912 18:07:39.479641 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/148800094a17d8942a6585cc2c794f95-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-6-3761596165\" (UID: \"148800094a17d8942a6585cc2c794f95\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.480040 kubelet[2717]: I0912 18:07:39.479702 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/148800094a17d8942a6585cc2c794f95-kubeconfig\") pod \"kube-controller-manager-ci-4426.1.0-6-3761596165\" (UID: \"148800094a17d8942a6585cc2c794f95\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.480040 kubelet[2717]: I0912 18:07:39.479730 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/148800094a17d8942a6585cc2c794f95-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.1.0-6-3761596165\" (UID: \"148800094a17d8942a6585cc2c794f95\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.480040 kubelet[2717]: I0912 18:07:39.479768 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7b2ac560e826285b92094f58846c19e-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-6-3761596165\" (UID: \"d7b2ac560e826285b92094f58846c19e\") " pod="kube-system/kube-scheduler-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.480040 kubelet[2717]: I0912 18:07:39.479790 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/486b7d06fe5b6899f4a1f0d181ff196c-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-6-3761596165\" (UID: \"486b7d06fe5b6899f4a1f0d181ff196c\") " pod="kube-system/kube-apiserver-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.480517 kubelet[2717]: I0912 18:07:39.479816 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/486b7d06fe5b6899f4a1f0d181ff196c-k8s-certs\") pod \"kube-apiserver-ci-4426.1.0-6-3761596165\" (UID: \"486b7d06fe5b6899f4a1f0d181ff196c\") " pod="kube-system/kube-apiserver-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.480517 kubelet[2717]: I0912 18:07:39.479838 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/486b7d06fe5b6899f4a1f0d181ff196c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-6-3761596165\" (UID: \"486b7d06fe5b6899f4a1f0d181ff196c\") " pod="kube-system/kube-apiserver-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.480517 kubelet[2717]: I0912 18:07:39.479859 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/148800094a17d8942a6585cc2c794f95-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-6-3761596165\" (UID: \"148800094a17d8942a6585cc2c794f95\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.481290 kubelet[2717]: I0912 18:07:39.479888 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/148800094a17d8942a6585cc2c794f95-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-6-3761596165\" (UID: \"148800094a17d8942a6585cc2c794f95\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165"
Sep 12 18:07:39.630943 kubelet[2717]: E0912 18:07:39.630436 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:39.630943 kubelet[2717]: E0912 18:07:39.630514 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:39.630943 kubelet[2717]: E0912 18:07:39.630779 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:40.134442 kubelet[2717]: I0912 18:07:40.133833 2717 apiserver.go:52] "Watching apiserver"
Sep 12 18:07:40.179257 kubelet[2717]: I0912 18:07:40.179222 2717 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 18:07:40.206338 kubelet[2717]: I0912 18:07:40.206271 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4426.1.0-6-3761596165" podStartSLOduration=3.206254332 podStartE2EDuration="3.206254332s" podCreationTimestamp="2025-09-12 18:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:07:40.204632187 +0000 UTC m=+1.173134275" watchObservedRunningTime="2025-09-12 18:07:40.206254332 +0000 UTC m=+1.174756414"
Sep 12 18:07:40.231634 kubelet[2717]: I0912 18:07:40.231555 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426.1.0-6-3761596165" podStartSLOduration=3.231538092 podStartE2EDuration="3.231538092s" podCreationTimestamp="2025-09-12 18:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:07:40.217610014 +0000 UTC m=+1.186112102" watchObservedRunningTime="2025-09-12 18:07:40.231538092 +0000 UTC m=+1.200040184"
Sep 12 18:07:40.246060 kubelet[2717]: E0912 18:07:40.244464 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:40.246060 kubelet[2717]: E0912 18:07:40.245448 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:40.246060 kubelet[2717]: I0912 18:07:40.245650 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426.1.0-6-3761596165" podStartSLOduration=3.2456392530000002 podStartE2EDuration="3.245639253s" podCreationTimestamp="2025-09-12 18:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:07:40.231904749 +0000 UTC m=+1.200406837" watchObservedRunningTime="2025-09-12 18:07:40.245639253 +0000 UTC m=+1.214141322"
Sep 12 18:07:40.246876 kubelet[2717]: E0912 18:07:40.246844 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:41.246587 kubelet[2717]: E0912 18:07:41.246512 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:41.247509 kubelet[2717]: E0912 18:07:41.247406 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:43.092716 kubelet[2717]: E0912 18:07:43.092644 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:07:43.712781 kubelet[2717]: I0912 18:07:43.712735 2717 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 18:07:43.713737 containerd[1525]: time="2025-09-12T18:07:43.713699670Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 18:07:43.714414 kubelet[2717]: I0912 18:07:43.714360 2717 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 18:07:44.203330 systemd[1]: Created slice kubepods-besteffort-pod9025c974_9280_4ec3_bfe0_d58db076a9da.slice - libcontainer container kubepods-besteffort-pod9025c974_9280_4ec3_bfe0_d58db076a9da.slice.
Sep 12 18:07:44.213346 kubelet[2717]: I0912 18:07:44.212629 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9025c974-9280-4ec3-bfe0-d58db076a9da-kube-proxy\") pod \"kube-proxy-c6qhn\" (UID: \"9025c974-9280-4ec3-bfe0-d58db076a9da\") " pod="kube-system/kube-proxy-c6qhn" Sep 12 18:07:44.213346 kubelet[2717]: I0912 18:07:44.212669 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9025c974-9280-4ec3-bfe0-d58db076a9da-xtables-lock\") pod \"kube-proxy-c6qhn\" (UID: \"9025c974-9280-4ec3-bfe0-d58db076a9da\") " pod="kube-system/kube-proxy-c6qhn" Sep 12 18:07:44.213346 kubelet[2717]: I0912 18:07:44.212688 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9025c974-9280-4ec3-bfe0-d58db076a9da-lib-modules\") pod \"kube-proxy-c6qhn\" (UID: \"9025c974-9280-4ec3-bfe0-d58db076a9da\") " pod="kube-system/kube-proxy-c6qhn" Sep 12 18:07:44.213346 kubelet[2717]: I0912 18:07:44.212704 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qkz5\" (UniqueName: \"kubernetes.io/projected/9025c974-9280-4ec3-bfe0-d58db076a9da-kube-api-access-5qkz5\") pod \"kube-proxy-c6qhn\" (UID: \"9025c974-9280-4ec3-bfe0-d58db076a9da\") " pod="kube-system/kube-proxy-c6qhn" Sep 12 18:07:44.293069 kubelet[2717]: E0912 18:07:44.293010 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:44.320377 kubelet[2717]: E0912 18:07:44.320333 2717 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 18:07:44.320377 kubelet[2717]: E0912 
18:07:44.320366 2717 projected.go:194] Error preparing data for projected volume kube-api-access-5qkz5 for pod kube-system/kube-proxy-c6qhn: configmap "kube-root-ca.crt" not found Sep 12 18:07:44.320914 kubelet[2717]: E0912 18:07:44.320441 2717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9025c974-9280-4ec3-bfe0-d58db076a9da-kube-api-access-5qkz5 podName:9025c974-9280-4ec3-bfe0-d58db076a9da nodeName:}" failed. No retries permitted until 2025-09-12 18:07:44.820418253 +0000 UTC m=+5.788920321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5qkz5" (UniqueName: "kubernetes.io/projected/9025c974-9280-4ec3-bfe0-d58db076a9da-kube-api-access-5qkz5") pod "kube-proxy-c6qhn" (UID: "9025c974-9280-4ec3-bfe0-d58db076a9da") : configmap "kube-root-ca.crt" not found Sep 12 18:07:44.868055 systemd[1]: Created slice kubepods-besteffort-podf971b9f8_3f08_41f1_b522_413932ebf78f.slice - libcontainer container kubepods-besteffort-podf971b9f8_3f08_41f1_b522_413932ebf78f.slice. 
Sep 12 18:07:44.918317 kubelet[2717]: I0912 18:07:44.918231 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f971b9f8-3f08-41f1-b522-413932ebf78f-var-lib-calico\") pod \"tigera-operator-755d956888-bdbz8\" (UID: \"f971b9f8-3f08-41f1-b522-413932ebf78f\") " pod="tigera-operator/tigera-operator-755d956888-bdbz8" Sep 12 18:07:44.918783 kubelet[2717]: I0912 18:07:44.918558 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vc7s\" (UniqueName: \"kubernetes.io/projected/f971b9f8-3f08-41f1-b522-413932ebf78f-kube-api-access-5vc7s\") pod \"tigera-operator-755d956888-bdbz8\" (UID: \"f971b9f8-3f08-41f1-b522-413932ebf78f\") " pod="tigera-operator/tigera-operator-755d956888-bdbz8" Sep 12 18:07:45.037363 kubelet[2717]: E0912 18:07:45.037307 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:45.113045 kubelet[2717]: E0912 18:07:45.112965 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:45.114694 containerd[1525]: time="2025-09-12T18:07:45.114650027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c6qhn,Uid:9025c974-9280-4ec3-bfe0-d58db076a9da,Namespace:kube-system,Attempt:0,}" Sep 12 18:07:45.139328 containerd[1525]: time="2025-09-12T18:07:45.139148946Z" level=info msg="connecting to shim 507f04a2fc11dec2e817aad7ca88a5e4188f25a468305c22771d134311ebcc80" address="unix:///run/containerd/s/35da6a4c92a4919f148bbd9c3bf0ebf3c82a3b97bdc18a92455594b32ab19340" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:07:45.172368 containerd[1525]: time="2025-09-12T18:07:45.172306957Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bdbz8,Uid:f971b9f8-3f08-41f1-b522-413932ebf78f,Namespace:tigera-operator,Attempt:0,}" Sep 12 18:07:45.179637 systemd[1]: Started cri-containerd-507f04a2fc11dec2e817aad7ca88a5e4188f25a468305c22771d134311ebcc80.scope - libcontainer container 507f04a2fc11dec2e817aad7ca88a5e4188f25a468305c22771d134311ebcc80. Sep 12 18:07:45.204324 containerd[1525]: time="2025-09-12T18:07:45.203185301Z" level=info msg="connecting to shim d0e6fc4e59380d4b174582fefbe2884ec902a00cefcc7fa38fae7568cc01224c" address="unix:///run/containerd/s/1948a6ef59bae1011fa8e4797d38118a552ad8773830deaa28a37e646c25a318" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:07:45.227288 containerd[1525]: time="2025-09-12T18:07:45.227237617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c6qhn,Uid:9025c974-9280-4ec3-bfe0-d58db076a9da,Namespace:kube-system,Attempt:0,} returns sandbox id \"507f04a2fc11dec2e817aad7ca88a5e4188f25a468305c22771d134311ebcc80\"" Sep 12 18:07:45.228317 kubelet[2717]: E0912 18:07:45.228279 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:45.235300 containerd[1525]: time="2025-09-12T18:07:45.235248354Z" level=info msg="CreateContainer within sandbox \"507f04a2fc11dec2e817aad7ca88a5e4188f25a468305c22771d134311ebcc80\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 18:07:45.246223 containerd[1525]: time="2025-09-12T18:07:45.246170114Z" level=info msg="Container 7989cff9d6625145a6e8f0074a69619e62300c5a6dd623fa8dcd2f1366ed2606: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:07:45.252422 systemd[1]: Started cri-containerd-d0e6fc4e59380d4b174582fefbe2884ec902a00cefcc7fa38fae7568cc01224c.scope - libcontainer container d0e6fc4e59380d4b174582fefbe2884ec902a00cefcc7fa38fae7568cc01224c. 
Sep 12 18:07:45.255115 containerd[1525]: time="2025-09-12T18:07:45.255064464Z" level=info msg="CreateContainer within sandbox \"507f04a2fc11dec2e817aad7ca88a5e4188f25a468305c22771d134311ebcc80\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7989cff9d6625145a6e8f0074a69619e62300c5a6dd623fa8dcd2f1366ed2606\"" Sep 12 18:07:45.257656 containerd[1525]: time="2025-09-12T18:07:45.257618294Z" level=info msg="StartContainer for \"7989cff9d6625145a6e8f0074a69619e62300c5a6dd623fa8dcd2f1366ed2606\"" Sep 12 18:07:45.261611 kubelet[2717]: E0912 18:07:45.261579 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:45.263053 kubelet[2717]: E0912 18:07:45.261998 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:45.265436 containerd[1525]: time="2025-09-12T18:07:45.265391708Z" level=info msg="connecting to shim 7989cff9d6625145a6e8f0074a69619e62300c5a6dd623fa8dcd2f1366ed2606" address="unix:///run/containerd/s/35da6a4c92a4919f148bbd9c3bf0ebf3c82a3b97bdc18a92455594b32ab19340" protocol=ttrpc version=3 Sep 12 18:07:45.306294 systemd[1]: Started cri-containerd-7989cff9d6625145a6e8f0074a69619e62300c5a6dd623fa8dcd2f1366ed2606.scope - libcontainer container 7989cff9d6625145a6e8f0074a69619e62300c5a6dd623fa8dcd2f1366ed2606. 
Sep 12 18:07:45.355959 containerd[1525]: time="2025-09-12T18:07:45.355904736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bdbz8,Uid:f971b9f8-3f08-41f1-b522-413932ebf78f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d0e6fc4e59380d4b174582fefbe2884ec902a00cefcc7fa38fae7568cc01224c\"" Sep 12 18:07:45.360962 containerd[1525]: time="2025-09-12T18:07:45.360921916Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 18:07:45.363671 systemd-resolved[1402]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Sep 12 18:07:45.376193 containerd[1525]: time="2025-09-12T18:07:45.376145324Z" level=info msg="StartContainer for \"7989cff9d6625145a6e8f0074a69619e62300c5a6dd623fa8dcd2f1366ed2606\" returns successfully" Sep 12 18:07:46.265398 kubelet[2717]: E0912 18:07:46.265344 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:46.278383 kubelet[2717]: I0912 18:07:46.278209 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c6qhn" podStartSLOduration=2.278150392 podStartE2EDuration="2.278150392s" podCreationTimestamp="2025-09-12 18:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:07:46.278061903 +0000 UTC m=+7.246563994" watchObservedRunningTime="2025-09-12 18:07:46.278150392 +0000 UTC m=+7.246652493" Sep 12 18:07:47.021790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3175089010.mount: Deactivated successfully. 
Sep 12 18:07:47.272260 kubelet[2717]: E0912 18:07:47.272096 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:47.681113 containerd[1525]: time="2025-09-12T18:07:47.680976235Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:07:47.682624 containerd[1525]: time="2025-09-12T18:07:47.682581084Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 18:07:47.684043 containerd[1525]: time="2025-09-12T18:07:47.683326346Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:07:47.685052 containerd[1525]: time="2025-09-12T18:07:47.685006361Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:07:47.685817 containerd[1525]: time="2025-09-12T18:07:47.685790027Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.324831674s" Sep 12 18:07:47.685932 containerd[1525]: time="2025-09-12T18:07:47.685917194Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 18:07:47.690287 containerd[1525]: time="2025-09-12T18:07:47.690186192Z" level=info msg="CreateContainer within sandbox 
\"d0e6fc4e59380d4b174582fefbe2884ec902a00cefcc7fa38fae7568cc01224c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 18:07:47.699258 containerd[1525]: time="2025-09-12T18:07:47.698552417Z" level=info msg="Container ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:07:47.707120 containerd[1525]: time="2025-09-12T18:07:47.707049061Z" level=info msg="CreateContainer within sandbox \"d0e6fc4e59380d4b174582fefbe2884ec902a00cefcc7fa38fae7568cc01224c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da\"" Sep 12 18:07:47.708229 containerd[1525]: time="2025-09-12T18:07:47.708188006Z" level=info msg="StartContainer for \"ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da\"" Sep 12 18:07:47.710289 containerd[1525]: time="2025-09-12T18:07:47.710254172Z" level=info msg="connecting to shim ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da" address="unix:///run/containerd/s/1948a6ef59bae1011fa8e4797d38118a552ad8773830deaa28a37e646c25a318" protocol=ttrpc version=3 Sep 12 18:07:47.733243 systemd[1]: Started cri-containerd-ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da.scope - libcontainer container ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da. Sep 12 18:07:47.767634 containerd[1525]: time="2025-09-12T18:07:47.767593538Z" level=info msg="StartContainer for \"ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da\" returns successfully" Sep 12 18:07:50.879638 systemd[1]: cri-containerd-ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da.scope: Deactivated successfully. 
Sep 12 18:07:50.892070 containerd[1525]: time="2025-09-12T18:07:50.891955394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da\" id:\"ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da\" pid:3039 exit_status:1 exited_at:{seconds:1757700470 nanos:891189756}" Sep 12 18:07:50.893528 containerd[1525]: time="2025-09-12T18:07:50.892253298Z" level=info msg="received exit event container_id:\"ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da\" id:\"ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da\" pid:3039 exit_status:1 exited_at:{seconds:1757700470 nanos:891189756}" Sep 12 18:07:50.961743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da-rootfs.mount: Deactivated successfully. Sep 12 18:07:51.285851 kubelet[2717]: I0912 18:07:51.285675 2717 scope.go:117] "RemoveContainer" containerID="ae2dda000bed6df8d5c0be2b7e6f8985b3a965aa8d5bff36cb29c73fe12ae9da" Sep 12 18:07:51.289227 containerd[1525]: time="2025-09-12T18:07:51.289175352Z" level=info msg="CreateContainer within sandbox \"d0e6fc4e59380d4b174582fefbe2884ec902a00cefcc7fa38fae7568cc01224c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 12 18:07:51.308042 containerd[1525]: time="2025-09-12T18:07:51.306249048Z" level=info msg="Container ef6bf58d7263cf2b19ddb8b0d292732e99cb8c565de3d6e827dc3e1afffb2a1f: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:07:51.311705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569192868.mount: Deactivated successfully. 
Sep 12 18:07:51.319788 containerd[1525]: time="2025-09-12T18:07:51.319731596Z" level=info msg="CreateContainer within sandbox \"d0e6fc4e59380d4b174582fefbe2884ec902a00cefcc7fa38fae7568cc01224c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ef6bf58d7263cf2b19ddb8b0d292732e99cb8c565de3d6e827dc3e1afffb2a1f\"" Sep 12 18:07:51.321858 containerd[1525]: time="2025-09-12T18:07:51.321800467Z" level=info msg="StartContainer for \"ef6bf58d7263cf2b19ddb8b0d292732e99cb8c565de3d6e827dc3e1afffb2a1f\"" Sep 12 18:07:51.324412 containerd[1525]: time="2025-09-12T18:07:51.324318038Z" level=info msg="connecting to shim ef6bf58d7263cf2b19ddb8b0d292732e99cb8c565de3d6e827dc3e1afffb2a1f" address="unix:///run/containerd/s/1948a6ef59bae1011fa8e4797d38118a552ad8773830deaa28a37e646c25a318" protocol=ttrpc version=3 Sep 12 18:07:51.371329 systemd[1]: Started cri-containerd-ef6bf58d7263cf2b19ddb8b0d292732e99cb8c565de3d6e827dc3e1afffb2a1f.scope - libcontainer container ef6bf58d7263cf2b19ddb8b0d292732e99cb8c565de3d6e827dc3e1afffb2a1f. 
Sep 12 18:07:51.425757 containerd[1525]: time="2025-09-12T18:07:51.425718514Z" level=info msg="StartContainer for \"ef6bf58d7263cf2b19ddb8b0d292732e99cb8c565de3d6e827dc3e1afffb2a1f\" returns successfully" Sep 12 18:07:52.302683 kubelet[2717]: I0912 18:07:52.302616 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-bdbz8" podStartSLOduration=5.97511587 podStartE2EDuration="8.302593377s" podCreationTimestamp="2025-09-12 18:07:44 +0000 UTC" firstStartedPulling="2025-09-12 18:07:45.359595684 +0000 UTC m=+6.328097751" lastFinishedPulling="2025-09-12 18:07:47.687073179 +0000 UTC m=+8.655575258" observedRunningTime="2025-09-12 18:07:48.286886591 +0000 UTC m=+9.255388679" watchObservedRunningTime="2025-09-12 18:07:52.302593377 +0000 UTC m=+13.271095463" Sep 12 18:07:53.102536 kubelet[2717]: E0912 18:07:53.102328 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:53.293060 kubelet[2717]: E0912 18:07:53.292972 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:53.862108 update_engine[1502]: I20250912 18:07:53.861531 1502 update_attempter.cc:509] Updating boot flags... Sep 12 18:07:54.472135 sudo[1778]: pam_unix(sudo:session): session closed for user root Sep 12 18:07:54.475489 sshd[1777]: Connection closed by 139.178.89.65 port 43004 Sep 12 18:07:54.476222 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Sep 12 18:07:54.482650 systemd-logind[1498]: Session 7 logged out. Waiting for processes to exit. Sep 12 18:07:54.483324 systemd[1]: sshd@6-137.184.114.151:22-139.178.89.65:43004.service: Deactivated successfully. 
Sep 12 18:07:54.486246 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 18:07:54.486526 systemd[1]: session-7.scope: Consumed 6.030s CPU time, 166.8M memory peak. Sep 12 18:07:54.490230 systemd-logind[1498]: Removed session 7. Sep 12 18:07:59.090721 systemd[1]: Created slice kubepods-besteffort-pod3fe09c65_bf0e_42b4_b6e3_52b6e080ebbc.slice - libcontainer container kubepods-besteffort-pod3fe09c65_bf0e_42b4_b6e3_52b6e080ebbc.slice. Sep 12 18:07:59.117529 kubelet[2717]: I0912 18:07:59.117470 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3fe09c65-bf0e-42b4-b6e3-52b6e080ebbc-typha-certs\") pod \"calico-typha-797f668d87-jvvkq\" (UID: \"3fe09c65-bf0e-42b4-b6e3-52b6e080ebbc\") " pod="calico-system/calico-typha-797f668d87-jvvkq" Sep 12 18:07:59.117529 kubelet[2717]: I0912 18:07:59.117521 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q57gq\" (UniqueName: \"kubernetes.io/projected/3fe09c65-bf0e-42b4-b6e3-52b6e080ebbc-kube-api-access-q57gq\") pod \"calico-typha-797f668d87-jvvkq\" (UID: \"3fe09c65-bf0e-42b4-b6e3-52b6e080ebbc\") " pod="calico-system/calico-typha-797f668d87-jvvkq" Sep 12 18:07:59.117529 kubelet[2717]: I0912 18:07:59.117544 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fe09c65-bf0e-42b4-b6e3-52b6e080ebbc-tigera-ca-bundle\") pod \"calico-typha-797f668d87-jvvkq\" (UID: \"3fe09c65-bf0e-42b4-b6e3-52b6e080ebbc\") " pod="calico-system/calico-typha-797f668d87-jvvkq" Sep 12 18:07:59.365304 systemd[1]: Created slice kubepods-besteffort-poda71834da_79a8_47cb_abb9_f740fbe95505.slice - libcontainer container kubepods-besteffort-poda71834da_79a8_47cb_abb9_f740fbe95505.slice. 
Sep 12 18:07:59.398874 kubelet[2717]: E0912 18:07:59.398750 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:59.400281 containerd[1525]: time="2025-09-12T18:07:59.400172426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-797f668d87-jvvkq,Uid:3fe09c65-bf0e-42b4-b6e3-52b6e080ebbc,Namespace:calico-system,Attempt:0,}" Sep 12 18:07:59.420502 kubelet[2717]: I0912 18:07:59.420355 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a71834da-79a8-47cb-abb9-f740fbe95505-flexvol-driver-host\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") " pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.420502 kubelet[2717]: I0912 18:07:59.420400 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a71834da-79a8-47cb-abb9-f740fbe95505-policysync\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") " pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.420502 kubelet[2717]: I0912 18:07:59.420422 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a71834da-79a8-47cb-abb9-f740fbe95505-var-lib-calico\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") " pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.420502 kubelet[2717]: I0912 18:07:59.420439 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a71834da-79a8-47cb-abb9-f740fbe95505-cni-net-dir\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") 
" pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.420502 kubelet[2717]: I0912 18:07:59.420493 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a71834da-79a8-47cb-abb9-f740fbe95505-lib-modules\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") " pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.420747 kubelet[2717]: I0912 18:07:59.420536 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a71834da-79a8-47cb-abb9-f740fbe95505-node-certs\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") " pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.420747 kubelet[2717]: I0912 18:07:59.420556 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a71834da-79a8-47cb-abb9-f740fbe95505-var-run-calico\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") " pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.420747 kubelet[2717]: I0912 18:07:59.420576 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a71834da-79a8-47cb-abb9-f740fbe95505-cni-log-dir\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") " pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.420747 kubelet[2717]: I0912 18:07:59.420591 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a71834da-79a8-47cb-abb9-f740fbe95505-tigera-ca-bundle\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") " pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.420747 kubelet[2717]: I0912 
18:07:59.420605 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a71834da-79a8-47cb-abb9-f740fbe95505-xtables-lock\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") " pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.420884 kubelet[2717]: I0912 18:07:59.420624 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a71834da-79a8-47cb-abb9-f740fbe95505-cni-bin-dir\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") " pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.420884 kubelet[2717]: I0912 18:07:59.420640 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zql9w\" (UniqueName: \"kubernetes.io/projected/a71834da-79a8-47cb-abb9-f740fbe95505-kube-api-access-zql9w\") pod \"calico-node-r6z88\" (UID: \"a71834da-79a8-47cb-abb9-f740fbe95505\") " pod="calico-system/calico-node-r6z88" Sep 12 18:07:59.429094 containerd[1525]: time="2025-09-12T18:07:59.429047251Z" level=info msg="connecting to shim 8175e7e93828646eb6a9d1453c2598d90ce5a4f2069f28bf1c2bd06877d1dec7" address="unix:///run/containerd/s/f537be332834a4f1e791364d0f174d99dbc65d14fd6acb6c0d06ae1caa2f858a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:07:59.465369 systemd[1]: Started cri-containerd-8175e7e93828646eb6a9d1453c2598d90ce5a4f2069f28bf1c2bd06877d1dec7.scope - libcontainer container 8175e7e93828646eb6a9d1453c2598d90ce5a4f2069f28bf1c2bd06877d1dec7. 
Sep 12 18:07:59.513962 kubelet[2717]: E0912 18:07:59.513788 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76f77" podUID="4cff656a-7bb3-4e69-b0de-ea6ad3f24730" Sep 12 18:07:59.554938 kubelet[2717]: E0912 18:07:59.554593 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.554938 kubelet[2717]: W0912 18:07:59.554629 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.554938 kubelet[2717]: E0912 18:07:59.554670 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.557050 kubelet[2717]: E0912 18:07:59.555814 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.557050 kubelet[2717]: W0912 18:07:59.555836 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.557050 kubelet[2717]: E0912 18:07:59.555858 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.600424 kubelet[2717]: E0912 18:07:59.600389 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.600610 kubelet[2717]: W0912 18:07:59.600501 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.600610 kubelet[2717]: E0912 18:07:59.600527 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.601343 kubelet[2717]: E0912 18:07:59.601315 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.601343 kubelet[2717]: W0912 18:07:59.601331 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.601520 kubelet[2717]: E0912 18:07:59.601354 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.602491 kubelet[2717]: E0912 18:07:59.601816 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.602491 kubelet[2717]: W0912 18:07:59.601831 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.602491 kubelet[2717]: E0912 18:07:59.601843 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.605804 kubelet[2717]: E0912 18:07:59.604132 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.606624 kubelet[2717]: W0912 18:07:59.606051 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.606624 kubelet[2717]: E0912 18:07:59.606093 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.607562 kubelet[2717]: E0912 18:07:59.607097 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.607562 kubelet[2717]: W0912 18:07:59.607114 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.607562 kubelet[2717]: E0912 18:07:59.607130 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.608312 kubelet[2717]: E0912 18:07:59.608290 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.608435 kubelet[2717]: W0912 18:07:59.608414 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.608797 kubelet[2717]: E0912 18:07:59.608588 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.610044 kubelet[2717]: E0912 18:07:59.609937 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.610149 kubelet[2717]: W0912 18:07:59.610135 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.610219 kubelet[2717]: E0912 18:07:59.610208 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.611285 kubelet[2717]: E0912 18:07:59.611125 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.611285 kubelet[2717]: W0912 18:07:59.611148 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.611285 kubelet[2717]: E0912 18:07:59.611167 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.611937 kubelet[2717]: E0912 18:07:59.611902 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.613173 kubelet[2717]: W0912 18:07:59.613055 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.613173 kubelet[2717]: E0912 18:07:59.613085 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.613439 kubelet[2717]: E0912 18:07:59.613421 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.613555 kubelet[2717]: W0912 18:07:59.613537 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.613752 kubelet[2717]: E0912 18:07:59.613650 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.614048 kubelet[2717]: E0912 18:07:59.613938 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.615077 kubelet[2717]: W0912 18:07:59.614925 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.615077 kubelet[2717]: E0912 18:07:59.614945 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.615485 kubelet[2717]: E0912 18:07:59.615306 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.615485 kubelet[2717]: W0912 18:07:59.615318 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.615485 kubelet[2717]: E0912 18:07:59.615329 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.616757 kubelet[2717]: E0912 18:07:59.616741 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.618231 kubelet[2717]: W0912 18:07:59.618068 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.618231 kubelet[2717]: E0912 18:07:59.618101 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.619856 kubelet[2717]: E0912 18:07:59.619468 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.619856 kubelet[2717]: W0912 18:07:59.619487 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.619856 kubelet[2717]: E0912 18:07:59.619504 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.620133 kubelet[2717]: E0912 18:07:59.620113 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.620274 kubelet[2717]: W0912 18:07:59.620258 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.621072 kubelet[2717]: E0912 18:07:59.620357 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.621239 kubelet[2717]: E0912 18:07:59.621221 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.621327 kubelet[2717]: W0912 18:07:59.621315 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.621403 kubelet[2717]: E0912 18:07:59.621391 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.623371 kubelet[2717]: E0912 18:07:59.623171 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.623371 kubelet[2717]: W0912 18:07:59.623237 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.623371 kubelet[2717]: E0912 18:07:59.623257 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.623867 kubelet[2717]: E0912 18:07:59.623732 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.623867 kubelet[2717]: W0912 18:07:59.623751 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.623867 kubelet[2717]: E0912 18:07:59.623766 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.624120 kubelet[2717]: E0912 18:07:59.624101 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.624362 kubelet[2717]: W0912 18:07:59.624207 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.624362 kubelet[2717]: E0912 18:07:59.624230 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.625185 kubelet[2717]: E0912 18:07:59.625170 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.625268 kubelet[2717]: W0912 18:07:59.625258 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.625401 kubelet[2717]: E0912 18:07:59.625308 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.625813 kubelet[2717]: E0912 18:07:59.625789 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.625813 kubelet[2717]: W0912 18:07:59.625807 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.625907 kubelet[2717]: E0912 18:07:59.625820 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.625907 kubelet[2717]: I0912 18:07:59.625850 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb6mn\" (UniqueName: \"kubernetes.io/projected/4cff656a-7bb3-4e69-b0de-ea6ad3f24730-kube-api-access-xb6mn\") pod \"csi-node-driver-76f77\" (UID: \"4cff656a-7bb3-4e69-b0de-ea6ad3f24730\") " pod="calico-system/csi-node-driver-76f77" Sep 12 18:07:59.627723 kubelet[2717]: E0912 18:07:59.627662 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.627723 kubelet[2717]: W0912 18:07:59.627684 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.627723 kubelet[2717]: E0912 18:07:59.627701 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.628309 kubelet[2717]: I0912 18:07:59.628282 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4cff656a-7bb3-4e69-b0de-ea6ad3f24730-socket-dir\") pod \"csi-node-driver-76f77\" (UID: \"4cff656a-7bb3-4e69-b0de-ea6ad3f24730\") " pod="calico-system/csi-node-driver-76f77" Sep 12 18:07:59.628645 kubelet[2717]: E0912 18:07:59.628581 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.628645 kubelet[2717]: W0912 18:07:59.628594 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.628645 kubelet[2717]: E0912 18:07:59.628606 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.629175 kubelet[2717]: E0912 18:07:59.629157 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.629265 kubelet[2717]: W0912 18:07:59.629170 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.629295 kubelet[2717]: E0912 18:07:59.629270 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.629957 kubelet[2717]: E0912 18:07:59.629935 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.629957 kubelet[2717]: W0912 18:07:59.629948 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.629957 kubelet[2717]: E0912 18:07:59.629962 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.630400 kubelet[2717]: I0912 18:07:59.630373 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4cff656a-7bb3-4e69-b0de-ea6ad3f24730-registration-dir\") pod \"csi-node-driver-76f77\" (UID: \"4cff656a-7bb3-4e69-b0de-ea6ad3f24730\") " pod="calico-system/csi-node-driver-76f77" Sep 12 18:07:59.633966 kubelet[2717]: E0912 18:07:59.633754 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.633966 kubelet[2717]: W0912 18:07:59.633783 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.633966 kubelet[2717]: E0912 18:07:59.633808 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.635527 kubelet[2717]: E0912 18:07:59.635380 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.635527 kubelet[2717]: W0912 18:07:59.635400 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.635872 kubelet[2717]: E0912 18:07:59.635834 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.637390 kubelet[2717]: E0912 18:07:59.637375 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.638649 kubelet[2717]: W0912 18:07:59.637458 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.638649 kubelet[2717]: E0912 18:07:59.637476 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.638649 kubelet[2717]: I0912 18:07:59.637575 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cff656a-7bb3-4e69-b0de-ea6ad3f24730-kubelet-dir\") pod \"csi-node-driver-76f77\" (UID: \"4cff656a-7bb3-4e69-b0de-ea6ad3f24730\") " pod="calico-system/csi-node-driver-76f77" Sep 12 18:07:59.639878 kubelet[2717]: E0912 18:07:59.639749 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.639878 kubelet[2717]: W0912 18:07:59.639771 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.639878 kubelet[2717]: E0912 18:07:59.639792 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.641508 containerd[1525]: time="2025-09-12T18:07:59.640655147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-797f668d87-jvvkq,Uid:3fe09c65-bf0e-42b4-b6e3-52b6e080ebbc,Namespace:calico-system,Attempt:0,} returns sandbox id \"8175e7e93828646eb6a9d1453c2598d90ce5a4f2069f28bf1c2bd06877d1dec7\"" Sep 12 18:07:59.641639 kubelet[2717]: E0912 18:07:59.641387 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.641639 kubelet[2717]: W0912 18:07:59.641403 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.641639 kubelet[2717]: E0912 18:07:59.641421 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.643202 kubelet[2717]: E0912 18:07:59.643178 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.643371 kubelet[2717]: W0912 18:07:59.643346 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.643602 kubelet[2717]: E0912 18:07:59.643550 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.644042 kubelet[2717]: I0912 18:07:59.643864 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4cff656a-7bb3-4e69-b0de-ea6ad3f24730-varrun\") pod \"csi-node-driver-76f77\" (UID: \"4cff656a-7bb3-4e69-b0de-ea6ad3f24730\") " pod="calico-system/csi-node-driver-76f77" Sep 12 18:07:59.644612 kubelet[2717]: E0912 18:07:59.644598 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.645032 kubelet[2717]: W0912 18:07:59.644972 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.645032 kubelet[2717]: E0912 18:07:59.644993 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.646077 kubelet[2717]: E0912 18:07:59.645513 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:07:59.646327 kubelet[2717]: E0912 18:07:59.646231 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.646533 kubelet[2717]: W0912 18:07:59.646465 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.646533 kubelet[2717]: E0912 18:07:59.646489 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.648470 kubelet[2717]: E0912 18:07:59.648439 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.648994 kubelet[2717]: W0912 18:07:59.648921 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.649177 kubelet[2717]: E0912 18:07:59.648941 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.649425 kubelet[2717]: E0912 18:07:59.649413 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.649571 kubelet[2717]: W0912 18:07:59.649477 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.649571 kubelet[2717]: E0912 18:07:59.649491 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.650387 containerd[1525]: time="2025-09-12T18:07:59.650333684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 18:07:59.673100 containerd[1525]: time="2025-09-12T18:07:59.672656502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r6z88,Uid:a71834da-79a8-47cb-abb9-f740fbe95505,Namespace:calico-system,Attempt:0,}" Sep 12 18:07:59.695274 containerd[1525]: time="2025-09-12T18:07:59.695225506Z" level=info msg="connecting to shim 6593f5b25e6b0850d1b3118dc2f53a0a1f551908ba79f703c6eb186742868e4f" address="unix:///run/containerd/s/03f7006c07f078f03e212de9ea00baeeb09a552ea0f19dd136d96865bdff1766" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:07:59.728237 systemd[1]: Started cri-containerd-6593f5b25e6b0850d1b3118dc2f53a0a1f551908ba79f703c6eb186742868e4f.scope - libcontainer container 6593f5b25e6b0850d1b3118dc2f53a0a1f551908ba79f703c6eb186742868e4f. Sep 12 18:07:59.746404 kubelet[2717]: E0912 18:07:59.745430 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.746404 kubelet[2717]: W0912 18:07:59.745558 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.746404 kubelet[2717]: E0912 18:07:59.745606 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.746404 kubelet[2717]: E0912 18:07:59.746185 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.746404 kubelet[2717]: W0912 18:07:59.746197 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.746404 kubelet[2717]: E0912 18:07:59.746212 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.747531 kubelet[2717]: E0912 18:07:59.747284 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.747531 kubelet[2717]: W0912 18:07:59.747298 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.747531 kubelet[2717]: E0912 18:07:59.747312 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:07:59.748452 kubelet[2717]: E0912 18:07:59.748290 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.748452 kubelet[2717]: W0912 18:07:59.748403 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.748452 kubelet[2717]: E0912 18:07:59.748429 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:07:59.749326 kubelet[2717]: E0912 18:07:59.749130 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:07:59.749326 kubelet[2717]: W0912 18:07:59.749144 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:07:59.749326 kubelet[2717]: E0912 18:07:59.749157 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 18:07:59.751212 kubelet[2717]: E0912 18:07:59.750720 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.751212 kubelet[2717]: W0912 18:07:59.750744 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.751212 kubelet[2717]: E0912 18:07:59.750758 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.752251 kubelet[2717]: E0912 18:07:59.751466 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.752251 kubelet[2717]: W0912 18:07:59.751490 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.752251 kubelet[2717]: E0912 18:07:59.751503 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.752251 kubelet[2717]: E0912 18:07:59.751821 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.752251 kubelet[2717]: W0912 18:07:59.751836 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.752251 kubelet[2717]: E0912 18:07:59.751846 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.752773 kubelet[2717]: E0912 18:07:59.752615 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.752773 kubelet[2717]: W0912 18:07:59.752636 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.752773 kubelet[2717]: E0912 18:07:59.752649 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.753371 kubelet[2717]: E0912 18:07:59.753359 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.753655 kubelet[2717]: W0912 18:07:59.753463 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.753655 kubelet[2717]: E0912 18:07:59.753482 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.754089 kubelet[2717]: E0912 18:07:59.754076 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.754272 kubelet[2717]: W0912 18:07:59.754241 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.754272 kubelet[2717]: E0912 18:07:59.754260 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.755583 kubelet[2717]: E0912 18:07:59.755081 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.755583 kubelet[2717]: W0912 18:07:59.755100 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.755583 kubelet[2717]: E0912 18:07:59.755117 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.759284 kubelet[2717]: E0912 18:07:59.757707 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.759284 kubelet[2717]: W0912 18:07:59.757748 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.759284 kubelet[2717]: E0912 18:07:59.757771 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.759284 kubelet[2717]: E0912 18:07:59.758277 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.759284 kubelet[2717]: W0912 18:07:59.758292 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.759284 kubelet[2717]: E0912 18:07:59.758307 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.759284 kubelet[2717]: E0912 18:07:59.758741 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.759284 kubelet[2717]: W0912 18:07:59.758755 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.759284 kubelet[2717]: E0912 18:07:59.758770 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.760485 kubelet[2717]: E0912 18:07:59.760004 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.760485 kubelet[2717]: W0912 18:07:59.760045 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.760485 kubelet[2717]: E0912 18:07:59.760061 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.760868 kubelet[2717]: E0912 18:07:59.760722 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.760868 kubelet[2717]: W0912 18:07:59.760773 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.760868 kubelet[2717]: E0912 18:07:59.760788 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.762417 kubelet[2717]: E0912 18:07:59.762310 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.762417 kubelet[2717]: W0912 18:07:59.762323 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.762417 kubelet[2717]: E0912 18:07:59.762336 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.762832 kubelet[2717]: E0912 18:07:59.762726 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.762832 kubelet[2717]: W0912 18:07:59.762748 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.762832 kubelet[2717]: E0912 18:07:59.762761 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.765118 kubelet[2717]: E0912 18:07:59.765082 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.765385 kubelet[2717]: W0912 18:07:59.765365 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.765575 kubelet[2717]: E0912 18:07:59.765556 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.766793 kubelet[2717]: E0912 18:07:59.766717 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.766793 kubelet[2717]: W0912 18:07:59.766747 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.766793 kubelet[2717]: E0912 18:07:59.766771 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.768149 kubelet[2717]: E0912 18:07:59.768088 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.768149 kubelet[2717]: W0912 18:07:59.768105 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.768149 kubelet[2717]: E0912 18:07:59.768120 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.768825 kubelet[2717]: E0912 18:07:59.768708 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.768825 kubelet[2717]: W0912 18:07:59.768723 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.768825 kubelet[2717]: E0912 18:07:59.768740 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.770733 kubelet[2717]: E0912 18:07:59.770182 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.770733 kubelet[2717]: W0912 18:07:59.770199 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.770733 kubelet[2717]: E0912 18:07:59.770239 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.771998 kubelet[2717]: E0912 18:07:59.771913 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.771998 kubelet[2717]: W0912 18:07:59.771933 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.771998 kubelet[2717]: E0912 18:07:59.771952 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 18:07:59.791487 kubelet[2717]: E0912 18:07:59.791452 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:07:59.791487 kubelet[2717]: W0912 18:07:59.791477 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:07:59.792146 kubelet[2717]: E0912 18:07:59.791502 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:07:59.834962 containerd[1525]: time="2025-09-12T18:07:59.834905105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r6z88,Uid:a71834da-79a8-47cb-abb9-f740fbe95505,Namespace:calico-system,Attempt:0,} returns sandbox id \"6593f5b25e6b0850d1b3118dc2f53a0a1f551908ba79f703c6eb186742868e4f\""
Sep 12 18:08:01.135249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326605386.mount: Deactivated successfully.
Sep 12 18:08:01.226127 kubelet[2717]: E0912 18:08:01.224280 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76f77" podUID="4cff656a-7bb3-4e69-b0de-ea6ad3f24730"
Sep 12 18:08:02.154335 containerd[1525]: time="2025-09-12T18:08:02.154274824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:02.155071 containerd[1525]: time="2025-09-12T18:08:02.155038406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 12 18:08:02.156057 containerd[1525]: time="2025-09-12T18:08:02.155773149Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:02.158111 containerd[1525]: time="2025-09-12T18:08:02.157763864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:02.158552 containerd[1525]: time="2025-09-12T18:08:02.158524000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.508148723s"
Sep 12 18:08:02.160052 containerd[1525]: time="2025-09-12T18:08:02.158978349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 12 18:08:02.162566 containerd[1525]: time="2025-09-12T18:08:02.162524480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 18:08:02.192980 containerd[1525]: time="2025-09-12T18:08:02.192933792Z" level=info msg="CreateContainer within sandbox \"8175e7e93828646eb6a9d1453c2598d90ce5a4f2069f28bf1c2bd06877d1dec7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 18:08:02.252394 containerd[1525]: time="2025-09-12T18:08:02.251677646Z" level=info msg="Container ed39f6ca279990cad8f14667ead77055fbca01db87a3492e4572b7111e068512: CDI devices from CRI Config.CDIDevices: []"
Sep 12 18:08:02.296030 containerd[1525]: time="2025-09-12T18:08:02.295929802Z" level=info msg="CreateContainer within sandbox \"8175e7e93828646eb6a9d1453c2598d90ce5a4f2069f28bf1c2bd06877d1dec7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ed39f6ca279990cad8f14667ead77055fbca01db87a3492e4572b7111e068512\""
Sep 12 18:08:02.299202 containerd[1525]: time="2025-09-12T18:08:02.297230673Z" level=info msg="StartContainer for \"ed39f6ca279990cad8f14667ead77055fbca01db87a3492e4572b7111e068512\""
Sep 12 18:08:02.299568 containerd[1525]: time="2025-09-12T18:08:02.299527225Z" level=info msg="connecting to shim ed39f6ca279990cad8f14667ead77055fbca01db87a3492e4572b7111e068512" address="unix:///run/containerd/s/f537be332834a4f1e791364d0f174d99dbc65d14fd6acb6c0d06ae1caa2f858a" protocol=ttrpc version=3
Sep 12 18:08:02.331252 systemd[1]: Started cri-containerd-ed39f6ca279990cad8f14667ead77055fbca01db87a3492e4572b7111e068512.scope - libcontainer container ed39f6ca279990cad8f14667ead77055fbca01db87a3492e4572b7111e068512.
Sep 12 18:08:02.401733 containerd[1525]: time="2025-09-12T18:08:02.401695427Z" level=info msg="StartContainer for \"ed39f6ca279990cad8f14667ead77055fbca01db87a3492e4572b7111e068512\" returns successfully"
Sep 12 18:08:03.223460 kubelet[2717]: E0912 18:08:03.223378 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76f77" podUID="4cff656a-7bb3-4e69-b0de-ea6ad3f24730"
Sep 12 18:08:03.326739 kubelet[2717]: E0912 18:08:03.326626 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:08:03.353685 kubelet[2717]: E0912 18:08:03.353642 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.354082 kubelet[2717]: W0912 18:08:03.353889 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.354082 kubelet[2717]: E0912 18:08:03.353929 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 18:08:03.354437 kubelet[2717]: E0912 18:08:03.354227 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.354437 kubelet[2717]: W0912 18:08:03.354238 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.354437 kubelet[2717]: E0912 18:08:03.354249 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.354613 kubelet[2717]: E0912 18:08:03.354600 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.354679 kubelet[2717]: W0912 18:08:03.354667 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.354738 kubelet[2717]: E0912 18:08:03.354726 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.354980 kubelet[2717]: E0912 18:08:03.354963 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.355093 kubelet[2717]: W0912 18:08:03.355081 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.355279 kubelet[2717]: E0912 18:08:03.355155 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.355414 kubelet[2717]: E0912 18:08:03.355402 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.355473 kubelet[2717]: W0912 18:08:03.355461 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.355555 kubelet[2717]: E0912 18:08:03.355540 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.355890 kubelet[2717]: E0912 18:08:03.355777 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.355890 kubelet[2717]: W0912 18:08:03.355789 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.355890 kubelet[2717]: E0912 18:08:03.355801 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.356082 kubelet[2717]: E0912 18:08:03.356070 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.356145 kubelet[2717]: W0912 18:08:03.356136 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.356328 kubelet[2717]: E0912 18:08:03.356192 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.356495 kubelet[2717]: E0912 18:08:03.356426 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.356566 kubelet[2717]: W0912 18:08:03.356554 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.356615 kubelet[2717]: E0912 18:08:03.356608 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.356939 kubelet[2717]: E0912 18:08:03.356823 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.356939 kubelet[2717]: W0912 18:08:03.356833 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.356939 kubelet[2717]: E0912 18:08:03.356841 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.357128 kubelet[2717]: E0912 18:08:03.357117 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.357186 kubelet[2717]: W0912 18:08:03.357177 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.357241 kubelet[2717]: E0912 18:08:03.357229 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.357621 kubelet[2717]: E0912 18:08:03.357446 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.357621 kubelet[2717]: W0912 18:08:03.357457 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.357621 kubelet[2717]: E0912 18:08:03.357466 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.357822 kubelet[2717]: E0912 18:08:03.357809 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.357909 kubelet[2717]: W0912 18:08:03.357896 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.357980 kubelet[2717]: E0912 18:08:03.357969 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.358439 kubelet[2717]: E0912 18:08:03.358319 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.358439 kubelet[2717]: W0912 18:08:03.358333 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.358439 kubelet[2717]: E0912 18:08:03.358343 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.358625 kubelet[2717]: E0912 18:08:03.358613 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.358783 kubelet[2717]: W0912 18:08:03.358679 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.358783 kubelet[2717]: E0912 18:08:03.358693 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.358918 kubelet[2717]: E0912 18:08:03.358907 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.358986 kubelet[2717]: W0912 18:08:03.358976 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.359081 kubelet[2717]: E0912 18:08:03.359067 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.378566 kubelet[2717]: E0912 18:08:03.378528 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.378566 kubelet[2717]: W0912 18:08:03.378551 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.378566 kubelet[2717]: E0912 18:08:03.378573 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.378949 kubelet[2717]: E0912 18:08:03.378818 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.378949 kubelet[2717]: W0912 18:08:03.378829 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.378949 kubelet[2717]: E0912 18:08:03.378841 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.379325 kubelet[2717]: E0912 18:08:03.379101 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.379325 kubelet[2717]: W0912 18:08:03.379111 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.379325 kubelet[2717]: E0912 18:08:03.379121 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.379571 kubelet[2717]: E0912 18:08:03.379552 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.379652 kubelet[2717]: W0912 18:08:03.379637 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.379732 kubelet[2717]: E0912 18:08:03.379718 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.380135 kubelet[2717]: E0912 18:08:03.380079 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.380135 kubelet[2717]: W0912 18:08:03.380096 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.380135 kubelet[2717]: E0912 18:08:03.380111 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.380632 kubelet[2717]: E0912 18:08:03.380546 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.380632 kubelet[2717]: W0912 18:08:03.380560 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.380632 kubelet[2717]: E0912 18:08:03.380571 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.380992 kubelet[2717]: E0912 18:08:03.380918 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.380992 kubelet[2717]: W0912 18:08:03.380930 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.380992 kubelet[2717]: E0912 18:08:03.380941 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.381346 kubelet[2717]: E0912 18:08:03.381333 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.381346 kubelet[2717]: W0912 18:08:03.381425 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.381346 kubelet[2717]: E0912 18:08:03.381439 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.381872 kubelet[2717]: E0912 18:08:03.381858 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.382051 kubelet[2717]: W0912 18:08:03.381939 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.382051 kubelet[2717]: E0912 18:08:03.381955 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.382244 kubelet[2717]: E0912 18:08:03.382224 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.382244 kubelet[2717]: W0912 18:08:03.382239 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.382344 kubelet[2717]: E0912 18:08:03.382250 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.382491 kubelet[2717]: E0912 18:08:03.382413 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.382491 kubelet[2717]: W0912 18:08:03.382423 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.382491 kubelet[2717]: E0912 18:08:03.382432 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 18:08:03.382689 kubelet[2717]: E0912 18:08:03.382607 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 18:08:03.382689 kubelet[2717]: W0912 18:08:03.382614 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 18:08:03.382689 kubelet[2717]: E0912 18:08:03.382622 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:08:03.383055 kubelet[2717]: E0912 18:08:03.382961 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:08:03.383055 kubelet[2717]: W0912 18:08:03.382974 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:08:03.383055 kubelet[2717]: E0912 18:08:03.382986 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:08:03.383342 kubelet[2717]: E0912 18:08:03.383306 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:08:03.383342 kubelet[2717]: W0912 18:08:03.383318 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:08:03.383342 kubelet[2717]: E0912 18:08:03.383329 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:08:03.383696 kubelet[2717]: E0912 18:08:03.383649 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:08:03.383696 kubelet[2717]: W0912 18:08:03.383665 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:08:03.383696 kubelet[2717]: E0912 18:08:03.383679 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:08:03.384232 kubelet[2717]: E0912 18:08:03.384136 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:08:03.384232 kubelet[2717]: W0912 18:08:03.384148 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:08:03.384232 kubelet[2717]: E0912 18:08:03.384161 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:08:03.384689 kubelet[2717]: E0912 18:08:03.384659 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:08:03.384951 kubelet[2717]: W0912 18:08:03.384768 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:08:03.384951 kubelet[2717]: E0912 18:08:03.384794 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 18:08:03.385169 kubelet[2717]: E0912 18:08:03.385153 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 18:08:03.385286 kubelet[2717]: W0912 18:08:03.385236 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 18:08:03.385286 kubelet[2717]: E0912 18:08:03.385252 2717 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 18:08:04.110468 containerd[1525]: time="2025-09-12T18:08:04.110333127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:04.112243 containerd[1525]: time="2025-09-12T18:08:04.111641767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 18:08:04.113125 containerd[1525]: time="2025-09-12T18:08:04.113036466Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:04.115563 containerd[1525]: time="2025-09-12T18:08:04.115490290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:04.116515 containerd[1525]: time="2025-09-12T18:08:04.116354018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.953793645s" Sep 12 18:08:04.116515 containerd[1525]: time="2025-09-12T18:08:04.116400053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 18:08:04.123379 containerd[1525]: time="2025-09-12T18:08:04.123297295Z" level=info msg="CreateContainer within sandbox \"6593f5b25e6b0850d1b3118dc2f53a0a1f551908ba79f703c6eb186742868e4f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 18:08:04.131044 containerd[1525]: time="2025-09-12T18:08:04.130874444Z" level=info msg="Container 6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:08:04.154751 containerd[1525]: time="2025-09-12T18:08:04.154653700Z" level=info msg="CreateContainer within sandbox \"6593f5b25e6b0850d1b3118dc2f53a0a1f551908ba79f703c6eb186742868e4f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5\"" Sep 12 18:08:04.157432 containerd[1525]: time="2025-09-12T18:08:04.157370913Z" level=info msg="StartContainer for \"6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5\"" Sep 12 18:08:04.160093 containerd[1525]: time="2025-09-12T18:08:04.159604856Z" level=info msg="connecting to shim 6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5" address="unix:///run/containerd/s/03f7006c07f078f03e212de9ea00baeeb09a552ea0f19dd136d96865bdff1766" protocol=ttrpc version=3 Sep 12 18:08:04.197382 systemd[1]: Started cri-containerd-6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5.scope - libcontainer container 6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5. Sep 12 18:08:04.271559 containerd[1525]: time="2025-09-12T18:08:04.271449405Z" level=info msg="StartContainer for \"6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5\" returns successfully" Sep 12 18:08:04.288766 systemd[1]: cri-containerd-6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5.scope: Deactivated successfully. 
Sep 12 18:08:04.293905 containerd[1525]: time="2025-09-12T18:08:04.293805701Z" level=info msg="received exit event container_id:\"6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5\" id:\"6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5\" pid:3436 exited_at:{seconds:1757700484 nanos:293368243}" Sep 12 18:08:04.294608 containerd[1525]: time="2025-09-12T18:08:04.294232169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5\" id:\"6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5\" pid:3436 exited_at:{seconds:1757700484 nanos:293368243}" Sep 12 18:08:04.338767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c447e5a5e615b8e5c80919d740c35b91d461535e30cf498178e0a75e084b9e5-rootfs.mount: Deactivated successfully. Sep 12 18:08:04.349717 kubelet[2717]: I0912 18:08:04.348776 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 18:08:04.351370 kubelet[2717]: E0912 18:08:04.351010 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:04.383049 kubelet[2717]: I0912 18:08:04.382786 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-797f668d87-jvvkq" podStartSLOduration=2.870392498 podStartE2EDuration="5.382748979s" podCreationTimestamp="2025-09-12 18:07:59 +0000 UTC" firstStartedPulling="2025-09-12 18:07:59.649788014 +0000 UTC m=+20.618290081" lastFinishedPulling="2025-09-12 18:08:02.162144482 +0000 UTC m=+23.130646562" observedRunningTime="2025-09-12 18:08:03.341166344 +0000 UTC m=+24.309668432" watchObservedRunningTime="2025-09-12 18:08:04.382748979 +0000 UTC m=+25.351251069" Sep 12 18:08:05.221664 kubelet[2717]: E0912 18:08:05.221555 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76f77" podUID="4cff656a-7bb3-4e69-b0de-ea6ad3f24730" Sep 12 18:08:05.337317 containerd[1525]: time="2025-09-12T18:08:05.337124111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 18:08:07.223453 kubelet[2717]: E0912 18:08:07.223392 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76f77" podUID="4cff656a-7bb3-4e69-b0de-ea6ad3f24730" Sep 12 18:08:09.223073 kubelet[2717]: E0912 18:08:09.222962 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76f77" podUID="4cff656a-7bb3-4e69-b0de-ea6ad3f24730" Sep 12 18:08:09.650056 containerd[1525]: time="2025-09-12T18:08:09.649957166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:09.651264 containerd[1525]: time="2025-09-12T18:08:09.651216231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 18:08:09.652128 containerd[1525]: time="2025-09-12T18:08:09.652084600Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:09.654195 containerd[1525]: time="2025-09-12T18:08:09.654157155Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:09.655419 containerd[1525]: time="2025-09-12T18:08:09.655387440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.318120332s" Sep 12 18:08:09.655467 containerd[1525]: time="2025-09-12T18:08:09.655419982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 18:08:09.660477 containerd[1525]: time="2025-09-12T18:08:09.660433794Z" level=info msg="CreateContainer within sandbox \"6593f5b25e6b0850d1b3118dc2f53a0a1f551908ba79f703c6eb186742868e4f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 18:08:09.669518 containerd[1525]: time="2025-09-12T18:08:09.668173284Z" level=info msg="Container 84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:08:09.712451 containerd[1525]: time="2025-09-12T18:08:09.712388065Z" level=info msg="CreateContainer within sandbox \"6593f5b25e6b0850d1b3118dc2f53a0a1f551908ba79f703c6eb186742868e4f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047\"" Sep 12 18:08:09.713748 containerd[1525]: time="2025-09-12T18:08:09.713702004Z" level=info msg="StartContainer for \"84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047\"" Sep 12 18:08:09.715635 containerd[1525]: time="2025-09-12T18:08:09.715589960Z" level=info msg="connecting to shim 
84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047" address="unix:///run/containerd/s/03f7006c07f078f03e212de9ea00baeeb09a552ea0f19dd136d96865bdff1766" protocol=ttrpc version=3 Sep 12 18:08:09.750309 systemd[1]: Started cri-containerd-84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047.scope - libcontainer container 84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047. Sep 12 18:08:09.801095 containerd[1525]: time="2025-09-12T18:08:09.801054518Z" level=info msg="StartContainer for \"84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047\" returns successfully" Sep 12 18:08:10.353700 systemd[1]: cri-containerd-84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047.scope: Deactivated successfully. Sep 12 18:08:10.354029 systemd[1]: cri-containerd-84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047.scope: Consumed 575ms CPU time, 162.8M memory peak, 6M read from disk, 171.3M written to disk. Sep 12 18:08:10.362033 containerd[1525]: time="2025-09-12T18:08:10.361939134Z" level=info msg="received exit event container_id:\"84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047\" id:\"84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047\" pid:3495 exited_at:{seconds:1757700490 nanos:358609826}" Sep 12 18:08:10.363337 containerd[1525]: time="2025-09-12T18:08:10.362647472Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047\" id:\"84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047\" pid:3495 exited_at:{seconds:1757700490 nanos:358609826}" Sep 12 18:08:10.404600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84c9a41dec94a2f0b3e9cc7baff830aa3782538ef9a0e7bb9589e7ae736be047-rootfs.mount: Deactivated successfully. 
Sep 12 18:08:10.427277 kubelet[2717]: I0912 18:08:10.427233 2717 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 18:08:10.481888 systemd[1]: Created slice kubepods-burstable-poda58b589e_1ab9_4796_8577_44c87f89297d.slice - libcontainer container kubepods-burstable-poda58b589e_1ab9_4796_8577_44c87f89297d.slice. Sep 12 18:08:10.493863 systemd[1]: Created slice kubepods-burstable-pod67f73566_3e08_467e_ad6c_9763df385f9b.slice - libcontainer container kubepods-burstable-pod67f73566_3e08_467e_ad6c_9763df385f9b.slice. Sep 12 18:08:10.511404 systemd[1]: Created slice kubepods-besteffort-pod8397ff1e_892f_4ca6_94b4_3339b968f3e8.slice - libcontainer container kubepods-besteffort-pod8397ff1e_892f_4ca6_94b4_3339b968f3e8.slice. Sep 12 18:08:10.526090 systemd[1]: Created slice kubepods-besteffort-pod6b1ff3f0_812f_44c5_9187_9cedfb780f33.slice - libcontainer container kubepods-besteffort-pod6b1ff3f0_812f_44c5_9187_9cedfb780f33.slice. Sep 12 18:08:10.527828 kubelet[2717]: I0912 18:08:10.527554 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6b1ff3f0-812f-44c5-9187-9cedfb780f33-calico-apiserver-certs\") pod \"calico-apiserver-7b67d4c7b9-bf2ch\" (UID: \"6b1ff3f0-812f-44c5-9187-9cedfb780f33\") " pod="calico-apiserver/calico-apiserver-7b67d4c7b9-bf2ch" Sep 12 18:08:10.528783 kubelet[2717]: I0912 18:08:10.527810 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67f73566-3e08-467e-ad6c-9763df385f9b-config-volume\") pod \"coredns-674b8bbfcf-wq2qd\" (UID: \"67f73566-3e08-467e-ad6c-9763df385f9b\") " pod="kube-system/coredns-674b8bbfcf-wq2qd" Sep 12 18:08:10.529617 kubelet[2717]: I0912 18:08:10.529353 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8397ff1e-892f-4ca6-94b4-3339b968f3e8-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-9q8nh\" (UID: \"8397ff1e-892f-4ca6-94b4-3339b968f3e8\") " pod="calico-system/goldmane-54d579b49d-9q8nh" Sep 12 18:08:10.529617 kubelet[2717]: I0912 18:08:10.529438 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8397ff1e-892f-4ca6-94b4-3339b968f3e8-goldmane-key-pair\") pod \"goldmane-54d579b49d-9q8nh\" (UID: \"8397ff1e-892f-4ca6-94b4-3339b968f3e8\") " pod="calico-system/goldmane-54d579b49d-9q8nh" Sep 12 18:08:10.529617 kubelet[2717]: I0912 18:08:10.529461 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8397ff1e-892f-4ca6-94b4-3339b968f3e8-config\") pod \"goldmane-54d579b49d-9q8nh\" (UID: \"8397ff1e-892f-4ca6-94b4-3339b968f3e8\") " pod="calico-system/goldmane-54d579b49d-9q8nh" Sep 12 18:08:10.529617 kubelet[2717]: I0912 18:08:10.529478 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxl2t\" (UniqueName: \"kubernetes.io/projected/8397ff1e-892f-4ca6-94b4-3339b968f3e8-kube-api-access-hxl2t\") pod \"goldmane-54d579b49d-9q8nh\" (UID: \"8397ff1e-892f-4ca6-94b4-3339b968f3e8\") " pod="calico-system/goldmane-54d579b49d-9q8nh" Sep 12 18:08:10.529617 kubelet[2717]: I0912 18:08:10.529508 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b2617e8-97e9-4630-885d-6d0cae65c523-tigera-ca-bundle\") pod \"calico-kube-controllers-75d6b98655-l2l88\" (UID: \"0b2617e8-97e9-4630-885d-6d0cae65c523\") " pod="calico-system/calico-kube-controllers-75d6b98655-l2l88" Sep 12 18:08:10.529828 kubelet[2717]: I0912 18:08:10.529539 2717 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzfg2\" (UniqueName: \"kubernetes.io/projected/6b1ff3f0-812f-44c5-9187-9cedfb780f33-kube-api-access-lzfg2\") pod \"calico-apiserver-7b67d4c7b9-bf2ch\" (UID: \"6b1ff3f0-812f-44c5-9187-9cedfb780f33\") " pod="calico-apiserver/calico-apiserver-7b67d4c7b9-bf2ch" Sep 12 18:08:10.530624 kubelet[2717]: I0912 18:08:10.530598 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a58b589e-1ab9-4796-8577-44c87f89297d-config-volume\") pod \"coredns-674b8bbfcf-w94jp\" (UID: \"a58b589e-1ab9-4796-8577-44c87f89297d\") " pod="kube-system/coredns-674b8bbfcf-w94jp" Sep 12 18:08:10.530741 kubelet[2717]: I0912 18:08:10.530634 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqw72\" (UniqueName: \"kubernetes.io/projected/0b2617e8-97e9-4630-885d-6d0cae65c523-kube-api-access-dqw72\") pod \"calico-kube-controllers-75d6b98655-l2l88\" (UID: \"0b2617e8-97e9-4630-885d-6d0cae65c523\") " pod="calico-system/calico-kube-controllers-75d6b98655-l2l88" Sep 12 18:08:10.530741 kubelet[2717]: I0912 18:08:10.530652 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxjwn\" (UniqueName: \"kubernetes.io/projected/a58b589e-1ab9-4796-8577-44c87f89297d-kube-api-access-mxjwn\") pod \"coredns-674b8bbfcf-w94jp\" (UID: \"a58b589e-1ab9-4796-8577-44c87f89297d\") " pod="kube-system/coredns-674b8bbfcf-w94jp" Sep 12 18:08:10.530741 kubelet[2717]: I0912 18:08:10.530688 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92cmm\" (UniqueName: \"kubernetes.io/projected/67f73566-3e08-467e-ad6c-9763df385f9b-kube-api-access-92cmm\") pod \"coredns-674b8bbfcf-wq2qd\" (UID: \"67f73566-3e08-467e-ad6c-9763df385f9b\") " 
pod="kube-system/coredns-674b8bbfcf-wq2qd" Sep 12 18:08:10.537313 systemd[1]: Created slice kubepods-besteffort-pod0b2617e8_97e9_4630_885d_6d0cae65c523.slice - libcontainer container kubepods-besteffort-pod0b2617e8_97e9_4630_885d_6d0cae65c523.slice. Sep 12 18:08:10.547615 systemd[1]: Created slice kubepods-besteffort-pod48216269_d18a_4bff_8dfe_c7d3ca783ebd.slice - libcontainer container kubepods-besteffort-pod48216269_d18a_4bff_8dfe_c7d3ca783ebd.slice. Sep 12 18:08:10.560807 systemd[1]: Created slice kubepods-besteffort-podc1dcc2d0_2db1_44d7_a487_378c0b3b156e.slice - libcontainer container kubepods-besteffort-podc1dcc2d0_2db1_44d7_a487_378c0b3b156e.slice. Sep 12 18:08:10.631816 kubelet[2717]: I0912 18:08:10.631684 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/48216269-d18a-4bff-8dfe-c7d3ca783ebd-whisker-backend-key-pair\") pod \"whisker-574cc78cc-dm6kr\" (UID: \"48216269-d18a-4bff-8dfe-c7d3ca783ebd\") " pod="calico-system/whisker-574cc78cc-dm6kr" Sep 12 18:08:10.631816 kubelet[2717]: I0912 18:08:10.631729 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtwk\" (UniqueName: \"kubernetes.io/projected/c1dcc2d0-2db1-44d7-a487-378c0b3b156e-kube-api-access-5xtwk\") pod \"calico-apiserver-7b67d4c7b9-5mx2d\" (UID: \"c1dcc2d0-2db1-44d7-a487-378c0b3b156e\") " pod="calico-apiserver/calico-apiserver-7b67d4c7b9-5mx2d" Sep 12 18:08:10.631816 kubelet[2717]: I0912 18:08:10.631766 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48216269-d18a-4bff-8dfe-c7d3ca783ebd-whisker-ca-bundle\") pod \"whisker-574cc78cc-dm6kr\" (UID: \"48216269-d18a-4bff-8dfe-c7d3ca783ebd\") " pod="calico-system/whisker-574cc78cc-dm6kr" Sep 12 18:08:10.631816 kubelet[2717]: I0912 18:08:10.631787 2717 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c1dcc2d0-2db1-44d7-a487-378c0b3b156e-calico-apiserver-certs\") pod \"calico-apiserver-7b67d4c7b9-5mx2d\" (UID: \"c1dcc2d0-2db1-44d7-a487-378c0b3b156e\") " pod="calico-apiserver/calico-apiserver-7b67d4c7b9-5mx2d" Sep 12 18:08:10.633277 kubelet[2717]: I0912 18:08:10.633252 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s94zj\" (UniqueName: \"kubernetes.io/projected/48216269-d18a-4bff-8dfe-c7d3ca783ebd-kube-api-access-s94zj\") pod \"whisker-574cc78cc-dm6kr\" (UID: \"48216269-d18a-4bff-8dfe-c7d3ca783ebd\") " pod="calico-system/whisker-574cc78cc-dm6kr" Sep 12 18:08:10.789169 kubelet[2717]: E0912 18:08:10.789105 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:10.791578 containerd[1525]: time="2025-09-12T18:08:10.791530223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w94jp,Uid:a58b589e-1ab9-4796-8577-44c87f89297d,Namespace:kube-system,Attempt:0,}" Sep 12 18:08:10.807156 kubelet[2717]: E0912 18:08:10.805879 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:10.807647 containerd[1525]: time="2025-09-12T18:08:10.807605107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wq2qd,Uid:67f73566-3e08-467e-ad6c-9763df385f9b,Namespace:kube-system,Attempt:0,}" Sep 12 18:08:10.820453 containerd[1525]: time="2025-09-12T18:08:10.820405956Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-9q8nh,Uid:8397ff1e-892f-4ca6-94b4-3339b968f3e8,Namespace:calico-system,Attempt:0,}" Sep 12 18:08:10.853602 containerd[1525]: time="2025-09-12T18:08:10.852257978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d6b98655-l2l88,Uid:0b2617e8-97e9-4630-885d-6d0cae65c523,Namespace:calico-system,Attempt:0,}" Sep 12 18:08:10.859911 containerd[1525]: time="2025-09-12T18:08:10.859873020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-574cc78cc-dm6kr,Uid:48216269-d18a-4bff-8dfe-c7d3ca783ebd,Namespace:calico-system,Attempt:0,}" Sep 12 18:08:10.865076 containerd[1525]: time="2025-09-12T18:08:10.864914049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b67d4c7b9-bf2ch,Uid:6b1ff3f0-812f-44c5-9187-9cedfb780f33,Namespace:calico-apiserver,Attempt:0,}" Sep 12 18:08:10.872812 containerd[1525]: time="2025-09-12T18:08:10.872770736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b67d4c7b9-5mx2d,Uid:c1dcc2d0-2db1-44d7-a487-378c0b3b156e,Namespace:calico-apiserver,Attempt:0,}" Sep 12 18:08:11.068047 containerd[1525]: time="2025-09-12T18:08:11.067906255Z" level=error msg="Failed to destroy network for sandbox \"a800af62a61958f8e130e12ff5a4d95214864f6055714969944febe7b2d22173\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.072492 containerd[1525]: time="2025-09-12T18:08:11.072416538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d6b98655-l2l88,Uid:0b2617e8-97e9-4630-885d-6d0cae65c523,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a800af62a61958f8e130e12ff5a4d95214864f6055714969944febe7b2d22173\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.074273 kubelet[2717]: E0912 18:08:11.074171 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a800af62a61958f8e130e12ff5a4d95214864f6055714969944febe7b2d22173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.074493 kubelet[2717]: E0912 18:08:11.074470 2717 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a800af62a61958f8e130e12ff5a4d95214864f6055714969944febe7b2d22173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75d6b98655-l2l88" Sep 12 18:08:11.074551 kubelet[2717]: E0912 18:08:11.074500 2717 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a800af62a61958f8e130e12ff5a4d95214864f6055714969944febe7b2d22173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75d6b98655-l2l88" Sep 12 18:08:11.075851 kubelet[2717]: E0912 18:08:11.074572 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75d6b98655-l2l88_calico-system(0b2617e8-97e9-4630-885d-6d0cae65c523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-75d6b98655-l2l88_calico-system(0b2617e8-97e9-4630-885d-6d0cae65c523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a800af62a61958f8e130e12ff5a4d95214864f6055714969944febe7b2d22173\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75d6b98655-l2l88" podUID="0b2617e8-97e9-4630-885d-6d0cae65c523" Sep 12 18:08:11.106409 containerd[1525]: time="2025-09-12T18:08:11.106325981Z" level=error msg="Failed to destroy network for sandbox \"460a50fabe3afe74f0951fcd9b1d79971c7164cc7839ab545ad22644c6be8c25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.113045 containerd[1525]: time="2025-09-12T18:08:11.112974502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9q8nh,Uid:8397ff1e-892f-4ca6-94b4-3339b968f3e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"460a50fabe3afe74f0951fcd9b1d79971c7164cc7839ab545ad22644c6be8c25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.113295 kubelet[2717]: E0912 18:08:11.113253 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460a50fabe3afe74f0951fcd9b1d79971c7164cc7839ab545ad22644c6be8c25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.113369 kubelet[2717]: E0912 
18:08:11.113308 2717 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460a50fabe3afe74f0951fcd9b1d79971c7164cc7839ab545ad22644c6be8c25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-9q8nh" Sep 12 18:08:11.113369 kubelet[2717]: E0912 18:08:11.113338 2717 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460a50fabe3afe74f0951fcd9b1d79971c7164cc7839ab545ad22644c6be8c25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-9q8nh" Sep 12 18:08:11.113439 kubelet[2717]: E0912 18:08:11.113416 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-9q8nh_calico-system(8397ff1e-892f-4ca6-94b4-3339b968f3e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-9q8nh_calico-system(8397ff1e-892f-4ca6-94b4-3339b968f3e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"460a50fabe3afe74f0951fcd9b1d79971c7164cc7839ab545ad22644c6be8c25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-9q8nh" podUID="8397ff1e-892f-4ca6-94b4-3339b968f3e8" Sep 12 18:08:11.122671 containerd[1525]: time="2025-09-12T18:08:11.122619105Z" level=error msg="Failed to destroy network for sandbox \"e7a81c2aa95d7cd76ca29f10d9e0bd26f258fc09513dd38e39409e95ecfa3575\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.125484 containerd[1525]: time="2025-09-12T18:08:11.125417129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b67d4c7b9-5mx2d,Uid:c1dcc2d0-2db1-44d7-a487-378c0b3b156e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7a81c2aa95d7cd76ca29f10d9e0bd26f258fc09513dd38e39409e95ecfa3575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.125943 kubelet[2717]: E0912 18:08:11.125894 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7a81c2aa95d7cd76ca29f10d9e0bd26f258fc09513dd38e39409e95ecfa3575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.126044 kubelet[2717]: E0912 18:08:11.125978 2717 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7a81c2aa95d7cd76ca29f10d9e0bd26f258fc09513dd38e39409e95ecfa3575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b67d4c7b9-5mx2d" Sep 12 18:08:11.126044 kubelet[2717]: E0912 18:08:11.126001 2717 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7a81c2aa95d7cd76ca29f10d9e0bd26f258fc09513dd38e39409e95ecfa3575\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b67d4c7b9-5mx2d" Sep 12 18:08:11.127126 kubelet[2717]: E0912 18:08:11.127063 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b67d4c7b9-5mx2d_calico-apiserver(c1dcc2d0-2db1-44d7-a487-378c0b3b156e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b67d4c7b9-5mx2d_calico-apiserver(c1dcc2d0-2db1-44d7-a487-378c0b3b156e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7a81c2aa95d7cd76ca29f10d9e0bd26f258fc09513dd38e39409e95ecfa3575\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b67d4c7b9-5mx2d" podUID="c1dcc2d0-2db1-44d7-a487-378c0b3b156e" Sep 12 18:08:11.159586 containerd[1525]: time="2025-09-12T18:08:11.142764334Z" level=error msg="Failed to destroy network for sandbox \"2f80746ce56abd218fbd77f41dc928d0ffad68f8c5b75c585b126780472f3473\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.160874 containerd[1525]: time="2025-09-12T18:08:11.160830611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w94jp,Uid:a58b589e-1ab9-4796-8577-44c87f89297d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f80746ce56abd218fbd77f41dc928d0ffad68f8c5b75c585b126780472f3473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 18:08:11.161128 containerd[1525]: time="2025-09-12T18:08:11.145702966Z" level=error msg="Failed to destroy network for sandbox \"32409b5f0fc92bb1614663be9e6775d7775ed5432effa5d908765f7efa8a2593\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.161849 kubelet[2717]: E0912 18:08:11.161801 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f80746ce56abd218fbd77f41dc928d0ffad68f8c5b75c585b126780472f3473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.162066 containerd[1525]: time="2025-09-12T18:08:11.162039870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b67d4c7b9-bf2ch,Uid:6b1ff3f0-812f-44c5-9187-9cedfb780f33,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32409b5f0fc92bb1614663be9e6775d7775ed5432effa5d908765f7efa8a2593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.162543 kubelet[2717]: E0912 18:08:11.162179 2717 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f80746ce56abd218fbd77f41dc928d0ffad68f8c5b75c585b126780472f3473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w94jp" Sep 12 18:08:11.162543 kubelet[2717]: E0912 18:08:11.162212 2717 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f80746ce56abd218fbd77f41dc928d0ffad68f8c5b75c585b126780472f3473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w94jp" Sep 12 18:08:11.162543 kubelet[2717]: E0912 18:08:11.162277 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-w94jp_kube-system(a58b589e-1ab9-4796-8577-44c87f89297d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-w94jp_kube-system(a58b589e-1ab9-4796-8577-44c87f89297d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f80746ce56abd218fbd77f41dc928d0ffad68f8c5b75c585b126780472f3473\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-w94jp" podUID="a58b589e-1ab9-4796-8577-44c87f89297d" Sep 12 18:08:11.162960 kubelet[2717]: E0912 18:08:11.162933 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32409b5f0fc92bb1614663be9e6775d7775ed5432effa5d908765f7efa8a2593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.163066 kubelet[2717]: E0912 18:08:11.163051 2717 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32409b5f0fc92bb1614663be9e6775d7775ed5432effa5d908765f7efa8a2593\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b67d4c7b9-bf2ch" Sep 12 18:08:11.163105 kubelet[2717]: E0912 18:08:11.163078 2717 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32409b5f0fc92bb1614663be9e6775d7775ed5432effa5d908765f7efa8a2593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b67d4c7b9-bf2ch" Sep 12 18:08:11.163250 kubelet[2717]: E0912 18:08:11.163228 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b67d4c7b9-bf2ch_calico-apiserver(6b1ff3f0-812f-44c5-9187-9cedfb780f33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b67d4c7b9-bf2ch_calico-apiserver(6b1ff3f0-812f-44c5-9187-9cedfb780f33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32409b5f0fc92bb1614663be9e6775d7775ed5432effa5d908765f7efa8a2593\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b67d4c7b9-bf2ch" podUID="6b1ff3f0-812f-44c5-9187-9cedfb780f33" Sep 12 18:08:11.169073 containerd[1525]: time="2025-09-12T18:08:11.168469984Z" level=error msg="Failed to destroy network for sandbox \"de920b5285371f3c989b051e800d364fb5f890aa65fa0d9186d366bdb615c663\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.169516 containerd[1525]: 
time="2025-09-12T18:08:11.169436401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wq2qd,Uid:67f73566-3e08-467e-ad6c-9763df385f9b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de920b5285371f3c989b051e800d364fb5f890aa65fa0d9186d366bdb615c663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.170476 kubelet[2717]: E0912 18:08:11.169896 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de920b5285371f3c989b051e800d364fb5f890aa65fa0d9186d366bdb615c663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.170476 kubelet[2717]: E0912 18:08:11.169971 2717 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de920b5285371f3c989b051e800d364fb5f890aa65fa0d9186d366bdb615c663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wq2qd" Sep 12 18:08:11.170476 kubelet[2717]: E0912 18:08:11.169997 2717 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de920b5285371f3c989b051e800d364fb5f890aa65fa0d9186d366bdb615c663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wq2qd" Sep 12 18:08:11.170680 kubelet[2717]: 
E0912 18:08:11.170440 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wq2qd_kube-system(67f73566-3e08-467e-ad6c-9763df385f9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wq2qd_kube-system(67f73566-3e08-467e-ad6c-9763df385f9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de920b5285371f3c989b051e800d364fb5f890aa65fa0d9186d366bdb615c663\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wq2qd" podUID="67f73566-3e08-467e-ad6c-9763df385f9b" Sep 12 18:08:11.182339 containerd[1525]: time="2025-09-12T18:08:11.182274279Z" level=error msg="Failed to destroy network for sandbox \"16e03fe07012eba980519b8d1ca04cbd149a1118f7d5840e705ebca2a2c26311\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.183513 containerd[1525]: time="2025-09-12T18:08:11.183419026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-574cc78cc-dm6kr,Uid:48216269-d18a-4bff-8dfe-c7d3ca783ebd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16e03fe07012eba980519b8d1ca04cbd149a1118f7d5840e705ebca2a2c26311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.183961 kubelet[2717]: E0912 18:08:11.183898 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16e03fe07012eba980519b8d1ca04cbd149a1118f7d5840e705ebca2a2c26311\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.184148 kubelet[2717]: E0912 18:08:11.184093 2717 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16e03fe07012eba980519b8d1ca04cbd149a1118f7d5840e705ebca2a2c26311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-574cc78cc-dm6kr" Sep 12 18:08:11.184148 kubelet[2717]: E0912 18:08:11.184124 2717 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16e03fe07012eba980519b8d1ca04cbd149a1118f7d5840e705ebca2a2c26311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-574cc78cc-dm6kr" Sep 12 18:08:11.184417 kubelet[2717]: E0912 18:08:11.184331 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-574cc78cc-dm6kr_calico-system(48216269-d18a-4bff-8dfe-c7d3ca783ebd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-574cc78cc-dm6kr_calico-system(48216269-d18a-4bff-8dfe-c7d3ca783ebd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16e03fe07012eba980519b8d1ca04cbd149a1118f7d5840e705ebca2a2c26311\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-574cc78cc-dm6kr" podUID="48216269-d18a-4bff-8dfe-c7d3ca783ebd" Sep 12 18:08:11.231221 systemd[1]: Created 
slice kubepods-besteffort-pod4cff656a_7bb3_4e69_b0de_ea6ad3f24730.slice - libcontainer container kubepods-besteffort-pod4cff656a_7bb3_4e69_b0de_ea6ad3f24730.slice. Sep 12 18:08:11.235036 containerd[1525]: time="2025-09-12T18:08:11.234985558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76f77,Uid:4cff656a-7bb3-4e69-b0de-ea6ad3f24730,Namespace:calico-system,Attempt:0,}" Sep 12 18:08:11.292887 containerd[1525]: time="2025-09-12T18:08:11.292818638Z" level=error msg="Failed to destroy network for sandbox \"79e4daaaa82e5ebda0ec818fb95bd97102455746ea584073bd53827ba1594e86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.293842 containerd[1525]: time="2025-09-12T18:08:11.293792634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76f77,Uid:4cff656a-7bb3-4e69-b0de-ea6ad3f24730,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79e4daaaa82e5ebda0ec818fb95bd97102455746ea584073bd53827ba1594e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.294230 kubelet[2717]: E0912 18:08:11.294184 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79e4daaaa82e5ebda0ec818fb95bd97102455746ea584073bd53827ba1594e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 18:08:11.294300 kubelet[2717]: E0912 18:08:11.294262 2717 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"79e4daaaa82e5ebda0ec818fb95bd97102455746ea584073bd53827ba1594e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76f77" Sep 12 18:08:11.294300 kubelet[2717]: E0912 18:08:11.294285 2717 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79e4daaaa82e5ebda0ec818fb95bd97102455746ea584073bd53827ba1594e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76f77" Sep 12 18:08:11.294399 kubelet[2717]: E0912 18:08:11.294354 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76f77_calico-system(4cff656a-7bb3-4e69-b0de-ea6ad3f24730)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-76f77_calico-system(4cff656a-7bb3-4e69-b0de-ea6ad3f24730)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79e4daaaa82e5ebda0ec818fb95bd97102455746ea584073bd53827ba1594e86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76f77" podUID="4cff656a-7bb3-4e69-b0de-ea6ad3f24730" Sep 12 18:08:11.374103 containerd[1525]: time="2025-09-12T18:08:11.374062933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 18:08:16.919422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079643762.mount: Deactivated successfully. 
Sep 12 18:08:16.950443 containerd[1525]: time="2025-09-12T18:08:16.950285389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:16.951507 containerd[1525]: time="2025-09-12T18:08:16.951413564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 18:08:16.952199 containerd[1525]: time="2025-09-12T18:08:16.952150523Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:16.954043 containerd[1525]: time="2025-09-12T18:08:16.953633683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:16.954264 containerd[1525]: time="2025-09-12T18:08:16.954242930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 5.580141112s" Sep 12 18:08:16.954528 containerd[1525]: time="2025-09-12T18:08:16.954369829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 18:08:16.990241 containerd[1525]: time="2025-09-12T18:08:16.990191061Z" level=info msg="CreateContainer within sandbox \"6593f5b25e6b0850d1b3118dc2f53a0a1f551908ba79f703c6eb186742868e4f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 18:08:17.006066 containerd[1525]: time="2025-09-12T18:08:17.005514188Z" level=info msg="Container 
7bef7db5284ee35c284678307a30cc54996944249d806d9092a40f284b9cb95a: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:08:17.012818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966234500.mount: Deactivated successfully. Sep 12 18:08:17.023038 containerd[1525]: time="2025-09-12T18:08:17.022957699Z" level=info msg="CreateContainer within sandbox \"6593f5b25e6b0850d1b3118dc2f53a0a1f551908ba79f703c6eb186742868e4f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7bef7db5284ee35c284678307a30cc54996944249d806d9092a40f284b9cb95a\"" Sep 12 18:08:17.024448 containerd[1525]: time="2025-09-12T18:08:17.024206939Z" level=info msg="StartContainer for \"7bef7db5284ee35c284678307a30cc54996944249d806d9092a40f284b9cb95a\"" Sep 12 18:08:17.026961 containerd[1525]: time="2025-09-12T18:08:17.026914359Z" level=info msg="connecting to shim 7bef7db5284ee35c284678307a30cc54996944249d806d9092a40f284b9cb95a" address="unix:///run/containerd/s/03f7006c07f078f03e212de9ea00baeeb09a552ea0f19dd136d96865bdff1766" protocol=ttrpc version=3 Sep 12 18:08:17.146431 systemd[1]: Started cri-containerd-7bef7db5284ee35c284678307a30cc54996944249d806d9092a40f284b9cb95a.scope - libcontainer container 7bef7db5284ee35c284678307a30cc54996944249d806d9092a40f284b9cb95a. 
Sep 12 18:08:17.317312 containerd[1525]: time="2025-09-12T18:08:17.316675265Z" level=info msg="StartContainer for \"7bef7db5284ee35c284678307a30cc54996944249d806d9092a40f284b9cb95a\" returns successfully" Sep 12 18:08:17.462000 kubelet[2717]: I0912 18:08:17.461661 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r6z88" podStartSLOduration=1.344086901 podStartE2EDuration="18.461641978s" podCreationTimestamp="2025-09-12 18:07:59 +0000 UTC" firstStartedPulling="2025-09-12 18:07:59.837358657 +0000 UTC m=+20.805860738" lastFinishedPulling="2025-09-12 18:08:16.954913747 +0000 UTC m=+37.923415815" observedRunningTime="2025-09-12 18:08:17.457010324 +0000 UTC m=+38.425512412" watchObservedRunningTime="2025-09-12 18:08:17.461641978 +0000 UTC m=+38.430144095" Sep 12 18:08:17.522487 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 18:08:17.523706 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Sep 12 18:08:17.896859 kubelet[2717]: I0912 18:08:17.896452 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/48216269-d18a-4bff-8dfe-c7d3ca783ebd-whisker-backend-key-pair\") pod \"48216269-d18a-4bff-8dfe-c7d3ca783ebd\" (UID: \"48216269-d18a-4bff-8dfe-c7d3ca783ebd\") " Sep 12 18:08:17.896859 kubelet[2717]: I0912 18:08:17.896508 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s94zj\" (UniqueName: \"kubernetes.io/projected/48216269-d18a-4bff-8dfe-c7d3ca783ebd-kube-api-access-s94zj\") pod \"48216269-d18a-4bff-8dfe-c7d3ca783ebd\" (UID: \"48216269-d18a-4bff-8dfe-c7d3ca783ebd\") " Sep 12 18:08:17.896859 kubelet[2717]: I0912 18:08:17.896530 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48216269-d18a-4bff-8dfe-c7d3ca783ebd-whisker-ca-bundle\") pod \"48216269-d18a-4bff-8dfe-c7d3ca783ebd\" (UID: \"48216269-d18a-4bff-8dfe-c7d3ca783ebd\") " Sep 12 18:08:17.904971 kubelet[2717]: I0912 18:08:17.904872 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48216269-d18a-4bff-8dfe-c7d3ca783ebd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "48216269-d18a-4bff-8dfe-c7d3ca783ebd" (UID: "48216269-d18a-4bff-8dfe-c7d3ca783ebd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 18:08:17.908789 kubelet[2717]: I0912 18:08:17.908641 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48216269-d18a-4bff-8dfe-c7d3ca783ebd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "48216269-d18a-4bff-8dfe-c7d3ca783ebd" (UID: "48216269-d18a-4bff-8dfe-c7d3ca783ebd"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 18:08:17.909686 kubelet[2717]: I0912 18:08:17.909627 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48216269-d18a-4bff-8dfe-c7d3ca783ebd-kube-api-access-s94zj" (OuterVolumeSpecName: "kube-api-access-s94zj") pod "48216269-d18a-4bff-8dfe-c7d3ca783ebd" (UID: "48216269-d18a-4bff-8dfe-c7d3ca783ebd"). InnerVolumeSpecName "kube-api-access-s94zj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 18:08:17.924545 systemd[1]: var-lib-kubelet-pods-48216269\x2dd18a\x2d4bff\x2d8dfe\x2dc7d3ca783ebd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds94zj.mount: Deactivated successfully. Sep 12 18:08:17.926217 systemd[1]: var-lib-kubelet-pods-48216269\x2dd18a\x2d4bff\x2d8dfe\x2dc7d3ca783ebd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 18:08:17.962271 containerd[1525]: time="2025-09-12T18:08:17.962214943Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bef7db5284ee35c284678307a30cc54996944249d806d9092a40f284b9cb95a\" id:\"3394ec8bacf62b5bfef3e0289ebd62f99ce73a90d2261c77588cc5659e17c1e4\" pid:3803 exit_status:1 exited_at:{seconds:1757700497 nanos:946234446}" Sep 12 18:08:17.997421 kubelet[2717]: I0912 18:08:17.997355 2717 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/48216269-d18a-4bff-8dfe-c7d3ca783ebd-whisker-backend-key-pair\") on node \"ci-4426.1.0-6-3761596165\" DevicePath \"\"" Sep 12 18:08:17.997421 kubelet[2717]: I0912 18:08:17.997409 2717 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s94zj\" (UniqueName: \"kubernetes.io/projected/48216269-d18a-4bff-8dfe-c7d3ca783ebd-kube-api-access-s94zj\") on node \"ci-4426.1.0-6-3761596165\" DevicePath \"\"" Sep 12 18:08:17.997421 kubelet[2717]: I0912 18:08:17.997426 2717 reconciler_common.go:299] "Volume detached for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48216269-d18a-4bff-8dfe-c7d3ca783ebd-whisker-ca-bundle\") on node \"ci-4426.1.0-6-3761596165\" DevicePath \"\"" Sep 12 18:08:18.436762 systemd[1]: Removed slice kubepods-besteffort-pod48216269_d18a_4bff_8dfe_c7d3ca783ebd.slice - libcontainer container kubepods-besteffort-pod48216269_d18a_4bff_8dfe_c7d3ca783ebd.slice. Sep 12 18:08:18.458803 kubelet[2717]: I0912 18:08:18.458651 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 18:08:18.459739 kubelet[2717]: E0912 18:08:18.459715 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:18.534216 systemd[1]: Created slice kubepods-besteffort-pod52fdacd5_6126_4862_a236_41df9e2bf20b.slice - libcontainer container kubepods-besteffort-pod52fdacd5_6126_4862_a236_41df9e2bf20b.slice. Sep 12 18:08:18.602312 kubelet[2717]: I0912 18:08:18.602245 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/52fdacd5-6126-4862-a236-41df9e2bf20b-whisker-backend-key-pair\") pod \"whisker-57c9f588-mfqrw\" (UID: \"52fdacd5-6126-4862-a236-41df9e2bf20b\") " pod="calico-system/whisker-57c9f588-mfqrw" Sep 12 18:08:18.602999 kubelet[2717]: I0912 18:08:18.602826 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqkm6\" (UniqueName: \"kubernetes.io/projected/52fdacd5-6126-4862-a236-41df9e2bf20b-kube-api-access-bqkm6\") pod \"whisker-57c9f588-mfqrw\" (UID: \"52fdacd5-6126-4862-a236-41df9e2bf20b\") " pod="calico-system/whisker-57c9f588-mfqrw" Sep 12 18:08:18.602999 kubelet[2717]: I0912 18:08:18.602914 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/52fdacd5-6126-4862-a236-41df9e2bf20b-whisker-ca-bundle\") pod \"whisker-57c9f588-mfqrw\" (UID: \"52fdacd5-6126-4862-a236-41df9e2bf20b\") " pod="calico-system/whisker-57c9f588-mfqrw" Sep 12 18:08:18.630728 containerd[1525]: time="2025-09-12T18:08:18.630687960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bef7db5284ee35c284678307a30cc54996944249d806d9092a40f284b9cb95a\" id:\"74461f4796d667ec0ee83c6b664cca22acf35572cd5e88ce6e96a046a3fd43f3\" pid:3848 exit_status:1 exited_at:{seconds:1757700498 nanos:630339136}" Sep 12 18:08:18.840246 containerd[1525]: time="2025-09-12T18:08:18.840124072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57c9f588-mfqrw,Uid:52fdacd5-6126-4862-a236-41df9e2bf20b,Namespace:calico-system,Attempt:0,}" Sep 12 18:08:19.226070 systemd-networkd[1452]: cali3287a965e7f: Link UP Sep 12 18:08:19.226655 systemd-networkd[1452]: cali3287a965e7f: Gained carrier Sep 12 18:08:19.233735 kubelet[2717]: I0912 18:08:19.233535 2717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48216269-d18a-4bff-8dfe-c7d3ca783ebd" path="/var/lib/kubelet/pods/48216269-d18a-4bff-8dfe-c7d3ca783ebd/volumes" Sep 12 18:08:19.265379 containerd[1525]: 2025-09-12 18:08:18.926 [INFO][3863] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 18:08:19.265379 containerd[1525]: 2025-09-12 18:08:18.960 [INFO][3863] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0 whisker-57c9f588- calico-system 52fdacd5-6126-4862-a236-41df9e2bf20b 925 0 2025-09-12 18:08:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57c9f588 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4426.1.0-6-3761596165 whisker-57c9f588-mfqrw eth0 whisker [] [] [kns.calico-system 
ksa.calico-system.whisker] cali3287a965e7f [] [] }} ContainerID="676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" Namespace="calico-system" Pod="whisker-57c9f588-mfqrw" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-" Sep 12 18:08:19.265379 containerd[1525]: 2025-09-12 18:08:18.960 [INFO][3863] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" Namespace="calico-system" Pod="whisker-57c9f588-mfqrw" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0" Sep 12 18:08:19.265379 containerd[1525]: 2025-09-12 18:08:19.123 [INFO][3874] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" HandleID="k8s-pod-network.676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" Workload="ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0" Sep 12 18:08:19.267314 containerd[1525]: 2025-09-12 18:08:19.126 [INFO][3874] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" HandleID="k8s-pod-network.676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" Workload="ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a61b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-6-3761596165", "pod":"whisker-57c9f588-mfqrw", "timestamp":"2025-09-12 18:08:19.123275461 +0000 UTC"}, Hostname:"ci-4426.1.0-6-3761596165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 18:08:19.267314 containerd[1525]: 2025-09-12 18:08:19.126 [INFO][3874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Sep 12 18:08:19.267314 containerd[1525]: 2025-09-12 18:08:19.126 [INFO][3874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 18:08:19.267314 containerd[1525]: 2025-09-12 18:08:19.127 [INFO][3874] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-6-3761596165' Sep 12 18:08:19.267314 containerd[1525]: 2025-09-12 18:08:19.147 [INFO][3874] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:19.267314 containerd[1525]: 2025-09-12 18:08:19.157 [INFO][3874] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-6-3761596165" Sep 12 18:08:19.267314 containerd[1525]: 2025-09-12 18:08:19.165 [INFO][3874] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:19.267314 containerd[1525]: 2025-09-12 18:08:19.168 [INFO][3874] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:19.267314 containerd[1525]: 2025-09-12 18:08:19.171 [INFO][3874] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:19.267591 containerd[1525]: 2025-09-12 18:08:19.171 [INFO][3874] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:19.267591 containerd[1525]: 2025-09-12 18:08:19.174 [INFO][3874] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3 Sep 12 18:08:19.267591 containerd[1525]: 2025-09-12 18:08:19.179 [INFO][3874] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" 
host="ci-4426.1.0-6-3761596165" Sep 12 18:08:19.267591 containerd[1525]: 2025-09-12 18:08:19.195 [INFO][3874] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.1/26] block=192.168.51.0/26 handle="k8s-pod-network.676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:19.267591 containerd[1525]: 2025-09-12 18:08:19.195 [INFO][3874] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.1/26] handle="k8s-pod-network.676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:19.267591 containerd[1525]: 2025-09-12 18:08:19.195 [INFO][3874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 18:08:19.267591 containerd[1525]: 2025-09-12 18:08:19.195 [INFO][3874] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.1/26] IPv6=[] ContainerID="676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" HandleID="k8s-pod-network.676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" Workload="ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0" Sep 12 18:08:19.267761 containerd[1525]: 2025-09-12 18:08:19.202 [INFO][3863] cni-plugin/k8s.go 418: Populated endpoint ContainerID="676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" Namespace="calico-system" Pod="whisker-57c9f588-mfqrw" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0", GenerateName:"whisker-57c9f588-", Namespace:"calico-system", SelfLink:"", UID:"52fdacd5-6126-4862-a236-41df9e2bf20b", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 8, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57c9f588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"", Pod:"whisker-57c9f588-mfqrw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3287a965e7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:19.267761 containerd[1525]: 2025-09-12 18:08:19.202 [INFO][3863] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.1/32] ContainerID="676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" Namespace="calico-system" Pod="whisker-57c9f588-mfqrw" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0" Sep 12 18:08:19.267885 containerd[1525]: 2025-09-12 18:08:19.202 [INFO][3863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3287a965e7f ContainerID="676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" Namespace="calico-system" Pod="whisker-57c9f588-mfqrw" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0" Sep 12 18:08:19.267885 containerd[1525]: 2025-09-12 18:08:19.226 [INFO][3863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" Namespace="calico-system" Pod="whisker-57c9f588-mfqrw" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0" Sep 
12 18:08:19.268701 containerd[1525]: 2025-09-12 18:08:19.228 [INFO][3863] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" Namespace="calico-system" Pod="whisker-57c9f588-mfqrw" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0", GenerateName:"whisker-57c9f588-", Namespace:"calico-system", SelfLink:"", UID:"52fdacd5-6126-4862-a236-41df9e2bf20b", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 8, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57c9f588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3", Pod:"whisker-57c9f588-mfqrw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3287a965e7f", MAC:"8a:7d:ef:56:ce:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:19.268781 containerd[1525]: 2025-09-12 18:08:19.254 [INFO][3863] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" Namespace="calico-system" Pod="whisker-57c9f588-mfqrw" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-whisker--57c9f588--mfqrw-eth0" Sep 12 18:08:19.435758 kubelet[2717]: E0912 18:08:19.435587 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:19.460323 containerd[1525]: time="2025-09-12T18:08:19.460086902Z" level=info msg="connecting to shim 676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3" address="unix:///run/containerd/s/6aab333966e6a9e634db1e4e619e42084a91d3ae6f97511ad303cd5092b56dba" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:08:19.528388 systemd[1]: Started cri-containerd-676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3.scope - libcontainer container 676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3. 
Sep 12 18:08:19.683501 containerd[1525]: time="2025-09-12T18:08:19.683417674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57c9f588-mfqrw,Uid:52fdacd5-6126-4862-a236-41df9e2bf20b,Namespace:calico-system,Attempt:0,} returns sandbox id \"676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3\"" Sep 12 18:08:19.687137 containerd[1525]: time="2025-09-12T18:08:19.686850379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 18:08:20.204220 systemd-networkd[1452]: vxlan.calico: Link UP Sep 12 18:08:20.204229 systemd-networkd[1452]: vxlan.calico: Gained carrier Sep 12 18:08:21.105065 containerd[1525]: time="2025-09-12T18:08:21.104699640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:21.106793 containerd[1525]: time="2025-09-12T18:08:21.106722876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 18:08:21.107108 containerd[1525]: time="2025-09-12T18:08:21.107035986Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:21.110427 containerd[1525]: time="2025-09-12T18:08:21.110376170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:21.111881 containerd[1525]: time="2025-09-12T18:08:21.111836326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size 
\"6153986\" in 1.424514131s" Sep 12 18:08:21.112138 containerd[1525]: time="2025-09-12T18:08:21.111998943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 18:08:21.116640 containerd[1525]: time="2025-09-12T18:08:21.116175555Z" level=info msg="CreateContainer within sandbox \"676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 18:08:21.123966 containerd[1525]: time="2025-09-12T18:08:21.123927874Z" level=info msg="Container 8e6945e4906db2d735a7736803905a094529048fadfa1fa93d75f4b58642cdd8: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:08:21.133576 containerd[1525]: time="2025-09-12T18:08:21.133526065Z" level=info msg="CreateContainer within sandbox \"676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8e6945e4906db2d735a7736803905a094529048fadfa1fa93d75f4b58642cdd8\"" Sep 12 18:08:21.134511 containerd[1525]: time="2025-09-12T18:08:21.134266385Z" level=info msg="StartContainer for \"8e6945e4906db2d735a7736803905a094529048fadfa1fa93d75f4b58642cdd8\"" Sep 12 18:08:21.135909 containerd[1525]: time="2025-09-12T18:08:21.135839698Z" level=info msg="connecting to shim 8e6945e4906db2d735a7736803905a094529048fadfa1fa93d75f4b58642cdd8" address="unix:///run/containerd/s/6aab333966e6a9e634db1e4e619e42084a91d3ae6f97511ad303cd5092b56dba" protocol=ttrpc version=3 Sep 12 18:08:21.162232 systemd[1]: Started cri-containerd-8e6945e4906db2d735a7736803905a094529048fadfa1fa93d75f4b58642cdd8.scope - libcontainer container 8e6945e4906db2d735a7736803905a094529048fadfa1fa93d75f4b58642cdd8. 
Sep 12 18:08:21.172962 systemd-networkd[1452]: cali3287a965e7f: Gained IPv6LL Sep 12 18:08:21.229462 containerd[1525]: time="2025-09-12T18:08:21.229420337Z" level=info msg="StartContainer for \"8e6945e4906db2d735a7736803905a094529048fadfa1fa93d75f4b58642cdd8\" returns successfully" Sep 12 18:08:21.231624 containerd[1525]: time="2025-09-12T18:08:21.231582667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 18:08:21.684369 systemd-networkd[1452]: vxlan.calico: Gained IPv6LL Sep 12 18:08:23.437388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987415916.mount: Deactivated successfully. Sep 12 18:08:23.452142 containerd[1525]: time="2025-09-12T18:08:23.452077950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:23.458057 containerd[1525]: time="2025-09-12T18:08:23.457204668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 18:08:23.458057 containerd[1525]: time="2025-09-12T18:08:23.457371563Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:23.462959 containerd[1525]: time="2025-09-12T18:08:23.462891613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:23.465324 containerd[1525]: time="2025-09-12T18:08:23.465245314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.233613445s" Sep 12 18:08:23.465324 containerd[1525]: time="2025-09-12T18:08:23.465319796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 18:08:23.473276 containerd[1525]: time="2025-09-12T18:08:23.473176706Z" level=info msg="CreateContainer within sandbox \"676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 18:08:23.480237 containerd[1525]: time="2025-09-12T18:08:23.480169474Z" level=info msg="Container e7af2193d21ac9996d41f13643679f780009ed6be95f5634c8a84d0684df6c7c: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:08:23.492998 containerd[1525]: time="2025-09-12T18:08:23.492869553Z" level=info msg="CreateContainer within sandbox \"676a0103e500d13267fc14456c6e02e88866ff7d3fc897c74c8a118ab9f7a0b3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e7af2193d21ac9996d41f13643679f780009ed6be95f5634c8a84d0684df6c7c\"" Sep 12 18:08:23.496148 containerd[1525]: time="2025-09-12T18:08:23.493849838Z" level=info msg="StartContainer for \"e7af2193d21ac9996d41f13643679f780009ed6be95f5634c8a84d0684df6c7c\"" Sep 12 18:08:23.499376 containerd[1525]: time="2025-09-12T18:08:23.496324137Z" level=info msg="connecting to shim e7af2193d21ac9996d41f13643679f780009ed6be95f5634c8a84d0684df6c7c" address="unix:///run/containerd/s/6aab333966e6a9e634db1e4e619e42084a91d3ae6f97511ad303cd5092b56dba" protocol=ttrpc version=3 Sep 12 18:08:23.543277 systemd[1]: Started cri-containerd-e7af2193d21ac9996d41f13643679f780009ed6be95f5634c8a84d0684df6c7c.scope - libcontainer container e7af2193d21ac9996d41f13643679f780009ed6be95f5634c8a84d0684df6c7c. 
Sep 12 18:08:23.623675 containerd[1525]: time="2025-09-12T18:08:23.623637585Z" level=info msg="StartContainer for \"e7af2193d21ac9996d41f13643679f780009ed6be95f5634c8a84d0684df6c7c\" returns successfully" Sep 12 18:08:24.223121 kubelet[2717]: E0912 18:08:24.223058 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:24.224353 kubelet[2717]: E0912 18:08:24.223781 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:24.224571 containerd[1525]: time="2025-09-12T18:08:24.224533330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d6b98655-l2l88,Uid:0b2617e8-97e9-4630-885d-6d0cae65c523,Namespace:calico-system,Attempt:0,}" Sep 12 18:08:24.224847 containerd[1525]: time="2025-09-12T18:08:24.224826931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w94jp,Uid:a58b589e-1ab9-4796-8577-44c87f89297d,Namespace:kube-system,Attempt:0,}" Sep 12 18:08:24.225325 containerd[1525]: time="2025-09-12T18:08:24.225293867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b67d4c7b9-5mx2d,Uid:c1dcc2d0-2db1-44d7-a487-378c0b3b156e,Namespace:calico-apiserver,Attempt:0,}" Sep 12 18:08:24.225397 containerd[1525]: time="2025-09-12T18:08:24.224583039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wq2qd,Uid:67f73566-3e08-467e-ad6c-9763df385f9b,Namespace:kube-system,Attempt:0,}" Sep 12 18:08:24.225479 containerd[1525]: time="2025-09-12T18:08:24.224622627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b67d4c7b9-bf2ch,Uid:6b1ff3f0-812f-44c5-9187-9cedfb780f33,Namespace:calico-apiserver,Attempt:0,}" Sep 12 18:08:24.489399 kubelet[2717]: I0912 
18:08:24.489251 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57c9f588-mfqrw" podStartSLOduration=2.7089591950000003 podStartE2EDuration="6.489214051s" podCreationTimestamp="2025-09-12 18:08:18 +0000 UTC" firstStartedPulling="2025-09-12 18:08:19.686290602 +0000 UTC m=+40.654792669" lastFinishedPulling="2025-09-12 18:08:23.466545441 +0000 UTC m=+44.435047525" observedRunningTime="2025-09-12 18:08:24.487551936 +0000 UTC m=+45.456054019" watchObservedRunningTime="2025-09-12 18:08:24.489214051 +0000 UTC m=+45.457716139" Sep 12 18:08:24.632123 systemd-networkd[1452]: calieb2710229c3: Link UP Sep 12 18:08:24.633770 systemd-networkd[1452]: calieb2710229c3: Gained carrier Sep 12 18:08:24.656718 containerd[1525]: 2025-09-12 18:08:24.385 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0 coredns-674b8bbfcf- kube-system a58b589e-1ab9-4796-8577-44c87f89297d 842 0 2025-09-12 18:07:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4426.1.0-6-3761596165 coredns-674b8bbfcf-w94jp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieb2710229c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-w94jp" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-" Sep 12 18:08:24.656718 containerd[1525]: 2025-09-12 18:08:24.385 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-w94jp" 
WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0" Sep 12 18:08:24.656718 containerd[1525]: 2025-09-12 18:08:24.573 [INFO][4268] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" HandleID="k8s-pod-network.c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" Workload="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0" Sep 12 18:08:24.658558 containerd[1525]: 2025-09-12 18:08:24.575 [INFO][4268] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" HandleID="k8s-pod-network.c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" Workload="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e730), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4426.1.0-6-3761596165", "pod":"coredns-674b8bbfcf-w94jp", "timestamp":"2025-09-12 18:08:24.573657653 +0000 UTC"}, Hostname:"ci-4426.1.0-6-3761596165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 18:08:24.658558 containerd[1525]: 2025-09-12 18:08:24.575 [INFO][4268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 18:08:24.658558 containerd[1525]: 2025-09-12 18:08:24.575 [INFO][4268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 18:08:24.658558 containerd[1525]: 2025-09-12 18:08:24.575 [INFO][4268] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-6-3761596165' Sep 12 18:08:24.658558 containerd[1525]: 2025-09-12 18:08:24.585 [INFO][4268] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.658558 containerd[1525]: 2025-09-12 18:08:24.595 [INFO][4268] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.658558 containerd[1525]: 2025-09-12 18:08:24.601 [INFO][4268] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.658558 containerd[1525]: 2025-09-12 18:08:24.603 [INFO][4268] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.658558 containerd[1525]: 2025-09-12 18:08:24.605 [INFO][4268] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.660152 containerd[1525]: 2025-09-12 18:08:24.605 [INFO][4268] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.660152 containerd[1525]: 2025-09-12 18:08:24.607 [INFO][4268] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1 Sep 12 18:08:24.660152 containerd[1525]: 2025-09-12 18:08:24.611 [INFO][4268] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.660152 containerd[1525]: 2025-09-12 18:08:24.618 [INFO][4268] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.51.2/26] block=192.168.51.0/26 handle="k8s-pod-network.c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.660152 containerd[1525]: 2025-09-12 18:08:24.618 [INFO][4268] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.2/26] handle="k8s-pod-network.c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.660152 containerd[1525]: 2025-09-12 18:08:24.618 [INFO][4268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 18:08:24.660152 containerd[1525]: 2025-09-12 18:08:24.618 [INFO][4268] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.2/26] IPv6=[] ContainerID="c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" HandleID="k8s-pod-network.c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" Workload="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0" Sep 12 18:08:24.660323 containerd[1525]: 2025-09-12 18:08:24.626 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-w94jp" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a58b589e-1ab9-4796-8577-44c87f89297d", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"", Pod:"coredns-674b8bbfcf-w94jp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb2710229c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:24.660323 containerd[1525]: 2025-09-12 18:08:24.627 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.2/32] ContainerID="c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-w94jp" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0" Sep 12 18:08:24.660323 containerd[1525]: 2025-09-12 18:08:24.627 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb2710229c3 ContainerID="c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-w94jp" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0" Sep 12 18:08:24.660323 containerd[1525]: 2025-09-12 18:08:24.635 [INFO][4214] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-w94jp" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0" Sep 12 18:08:24.660323 containerd[1525]: 2025-09-12 18:08:24.635 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-w94jp" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a58b589e-1ab9-4796-8577-44c87f89297d", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1", Pod:"coredns-674b8bbfcf-w94jp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb2710229c3", 
MAC:"72:2a:45:fc:9d:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:24.660323 containerd[1525]: 2025-09-12 18:08:24.650 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-w94jp" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--w94jp-eth0" Sep 12 18:08:24.706046 containerd[1525]: time="2025-09-12T18:08:24.704977091Z" level=info msg="connecting to shim c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1" address="unix:///run/containerd/s/eadc749e489dadc8a8fd1aae2812d09b4f1bbc91bc9878c89755119f48ca65db" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:08:24.748221 systemd[1]: Started cri-containerd-c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1.scope - libcontainer container c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1. 
Sep 12 18:08:24.771430 systemd-networkd[1452]: cali66c8643a099: Link UP Sep 12 18:08:24.772536 systemd-networkd[1452]: cali66c8643a099: Gained carrier Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.380 [INFO][4205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0 calico-kube-controllers-75d6b98655- calico-system 0b2617e8-97e9-4630-885d-6d0cae65c523 852 0 2025-09-12 18:07:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75d6b98655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4426.1.0-6-3761596165 calico-kube-controllers-75d6b98655-l2l88 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali66c8643a099 [] [] }} ContainerID="ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" Namespace="calico-system" Pod="calico-kube-controllers-75d6b98655-l2l88" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.381 [INFO][4205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" Namespace="calico-system" Pod="calico-kube-controllers-75d6b98655-l2l88" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.575 [INFO][4270] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" HandleID="k8s-pod-network.ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" 
Workload="ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.576 [INFO][4270] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" HandleID="k8s-pod-network.ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" Workload="ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003880b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-6-3761596165", "pod":"calico-kube-controllers-75d6b98655-l2l88", "timestamp":"2025-09-12 18:08:24.575268221 +0000 UTC"}, Hostname:"ci-4426.1.0-6-3761596165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.576 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.619 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.619 [INFO][4270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-6-3761596165' Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.691 [INFO][4270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.701 [INFO][4270] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.710 [INFO][4270] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.714 [INFO][4270] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.722 [INFO][4270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.723 [INFO][4270] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.729 [INFO][4270] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.738 [INFO][4270] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.753 [INFO][4270] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.51.3/26] block=192.168.51.0/26 handle="k8s-pod-network.ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.753 [INFO][4270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.3/26] handle="k8s-pod-network.ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.753 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 18:08:24.795902 containerd[1525]: 2025-09-12 18:08:24.753 [INFO][4270] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.3/26] IPv6=[] ContainerID="ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" HandleID="k8s-pod-network.ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" Workload="ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0" Sep 12 18:08:24.798691 containerd[1525]: 2025-09-12 18:08:24.763 [INFO][4205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" Namespace="calico-system" Pod="calico-kube-controllers-75d6b98655-l2l88" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0", GenerateName:"calico-kube-controllers-75d6b98655-", Namespace:"calico-system", SelfLink:"", UID:"0b2617e8-97e9-4630-885d-6d0cae65c523", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d6b98655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"", Pod:"calico-kube-controllers-75d6b98655-l2l88", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali66c8643a099", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:24.798691 containerd[1525]: 2025-09-12 18:08:24.763 [INFO][4205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.3/32] ContainerID="ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" Namespace="calico-system" Pod="calico-kube-controllers-75d6b98655-l2l88" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0" Sep 12 18:08:24.798691 containerd[1525]: 2025-09-12 18:08:24.763 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66c8643a099 ContainerID="ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" Namespace="calico-system" Pod="calico-kube-controllers-75d6b98655-l2l88" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0" Sep 12 18:08:24.798691 containerd[1525]: 2025-09-12 18:08:24.774 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" Namespace="calico-system" Pod="calico-kube-controllers-75d6b98655-l2l88" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0" Sep 12 18:08:24.798691 containerd[1525]: 2025-09-12 18:08:24.774 [INFO][4205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" Namespace="calico-system" Pod="calico-kube-controllers-75d6b98655-l2l88" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0", GenerateName:"calico-kube-controllers-75d6b98655-", Namespace:"calico-system", SelfLink:"", UID:"0b2617e8-97e9-4630-885d-6d0cae65c523", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d6b98655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da", Pod:"calico-kube-controllers-75d6b98655-l2l88", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali66c8643a099", MAC:"1e:53:a3:2d:95:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:24.798691 containerd[1525]: 2025-09-12 18:08:24.791 [INFO][4205] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" Namespace="calico-system" Pod="calico-kube-controllers-75d6b98655-l2l88" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--kube--controllers--75d6b98655--l2l88-eth0" Sep 12 18:08:24.848456 containerd[1525]: time="2025-09-12T18:08:24.847891540Z" level=info msg="connecting to shim ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da" address="unix:///run/containerd/s/2e290ef2dfa660ff82b62bb575a4230195469a3625a278a4bd801e1466c316ce" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:08:24.875871 containerd[1525]: time="2025-09-12T18:08:24.875276606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w94jp,Uid:a58b589e-1ab9-4796-8577-44c87f89297d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1\"" Sep 12 18:08:24.885421 kubelet[2717]: E0912 18:08:24.885322 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:24.892108 systemd-networkd[1452]: calif6377dfc784: Link UP Sep 12 18:08:24.892366 systemd-networkd[1452]: calif6377dfc784: Gained carrier Sep 12 18:08:24.905361 containerd[1525]: time="2025-09-12T18:08:24.905317870Z" level=info msg="CreateContainer within sandbox \"c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 18:08:24.908236 systemd[1]: Started cri-containerd-ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da.scope - libcontainer container ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da. Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.371 [INFO][4217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0 calico-apiserver-7b67d4c7b9- calico-apiserver 6b1ff3f0-812f-44c5-9187-9cedfb780f33 849 0 2025-09-12 18:07:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b67d4c7b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4426.1.0-6-3761596165 calico-apiserver-7b67d4c7b9-bf2ch eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif6377dfc784 [] [] }} ContainerID="6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-bf2ch" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.374 [INFO][4217] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-bf2ch" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.576 [INFO][4266] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" 
HandleID="k8s-pod-network.6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" Workload="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.576 [INFO][4266] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" HandleID="k8s-pod-network.6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" Workload="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003908c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4426.1.0-6-3761596165", "pod":"calico-apiserver-7b67d4c7b9-bf2ch", "timestamp":"2025-09-12 18:08:24.576627308 +0000 UTC"}, Hostname:"ci-4426.1.0-6-3761596165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.576 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.754 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.755 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-6-3761596165' Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.788 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.803 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.811 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.814 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.818 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.818 [INFO][4266] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.820 [INFO][4266] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.825 [INFO][4266] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.843 [INFO][4266] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.51.4/26] block=192.168.51.0/26 handle="k8s-pod-network.6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.843 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.4/26] handle="k8s-pod-network.6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.847 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 18:08:24.938384 containerd[1525]: 2025-09-12 18:08:24.850 [INFO][4266] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.4/26] IPv6=[] ContainerID="6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" HandleID="k8s-pod-network.6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" Workload="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0" Sep 12 18:08:24.940133 containerd[1525]: 2025-09-12 18:08:24.874 [INFO][4217] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-bf2ch" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0", GenerateName:"calico-apiserver-7b67d4c7b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b1ff3f0-812f-44c5-9187-9cedfb780f33", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7b67d4c7b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"", Pod:"calico-apiserver-7b67d4c7b9-bf2ch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6377dfc784", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:24.940133 containerd[1525]: 2025-09-12 18:08:24.878 [INFO][4217] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.4/32] ContainerID="6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-bf2ch" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0" Sep 12 18:08:24.940133 containerd[1525]: 2025-09-12 18:08:24.878 [INFO][4217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6377dfc784 ContainerID="6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-bf2ch" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0" Sep 12 18:08:24.940133 containerd[1525]: 2025-09-12 18:08:24.893 [INFO][4217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-bf2ch" 
WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0" Sep 12 18:08:24.940133 containerd[1525]: 2025-09-12 18:08:24.894 [INFO][4217] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-bf2ch" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0", GenerateName:"calico-apiserver-7b67d4c7b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b1ff3f0-812f-44c5-9187-9cedfb780f33", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b67d4c7b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e", Pod:"calico-apiserver-7b67d4c7b9-bf2ch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6377dfc784", MAC:"0e:ef:fc:06:bd:74", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:24.940133 containerd[1525]: 2025-09-12 18:08:24.918 [INFO][4217] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-bf2ch" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--bf2ch-eth0" Sep 12 18:08:24.975562 containerd[1525]: time="2025-09-12T18:08:24.975434711Z" level=info msg="Container e8958fdedb2a319661b6d393cd6a2f1f5036b99ebfda3dfdbc7f432284377fdd: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:08:24.991800 containerd[1525]: time="2025-09-12T18:08:24.991750278Z" level=info msg="CreateContainer within sandbox \"c5c204607dced3f2abc330cd3924766e2576fca178b84e880786fdfd4918e1c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e8958fdedb2a319661b6d393cd6a2f1f5036b99ebfda3dfdbc7f432284377fdd\"" Sep 12 18:08:24.993642 containerd[1525]: time="2025-09-12T18:08:24.993603748Z" level=info msg="StartContainer for \"e8958fdedb2a319661b6d393cd6a2f1f5036b99ebfda3dfdbc7f432284377fdd\"" Sep 12 18:08:24.996835 containerd[1525]: time="2025-09-12T18:08:24.996724324Z" level=info msg="connecting to shim e8958fdedb2a319661b6d393cd6a2f1f5036b99ebfda3dfdbc7f432284377fdd" address="unix:///run/containerd/s/eadc749e489dadc8a8fd1aae2812d09b4f1bbc91bc9878c89755119f48ca65db" protocol=ttrpc version=3 Sep 12 18:08:25.027206 containerd[1525]: time="2025-09-12T18:08:25.026355158Z" level=info msg="connecting to shim 6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e" address="unix:///run/containerd/s/0e664367f2df5e06de0072f771ee0c9917721ae30cc240de614f20bd4b380f68" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:08:25.030814 systemd-networkd[1452]: cali8d8a53a540e: Link UP Sep 12 18:08:25.034319 systemd-networkd[1452]: cali8d8a53a540e: Gained 
carrier Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.381 [INFO][4235] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0 coredns-674b8bbfcf- kube-system 67f73566-3e08-467e-ad6c-9763df385f9b 851 0 2025-09-12 18:07:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4426.1.0-6-3761596165 coredns-674b8bbfcf-wq2qd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8d8a53a540e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" Namespace="kube-system" Pod="coredns-674b8bbfcf-wq2qd" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-" Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.383 [INFO][4235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" Namespace="kube-system" Pod="coredns-674b8bbfcf-wq2qd" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0" Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.575 [INFO][4272] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" HandleID="k8s-pod-network.a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" Workload="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0" Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.576 [INFO][4272] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" HandleID="k8s-pod-network.a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" 
Workload="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5540), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4426.1.0-6-3761596165", "pod":"coredns-674b8bbfcf-wq2qd", "timestamp":"2025-09-12 18:08:24.575460423 +0000 UTC"}, Hostname:"ci-4426.1.0-6-3761596165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.576 [INFO][4272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.851 [INFO][4272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.852 [INFO][4272] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-6-3761596165' Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.914 [INFO][4272] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.945 [INFO][4272] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.956 [INFO][4272] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.961 [INFO][4272] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.967 [INFO][4272] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.086745 
containerd[1525]: 2025-09-12 18:08:24.967 [INFO][4272] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.975 [INFO][4272] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.985 [INFO][4272] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:24.999 [INFO][4272] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.5/26] block=192.168.51.0/26 handle="k8s-pod-network.a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:25.000 [INFO][4272] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.5/26] handle="k8s-pod-network.a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:25.000 [INFO][4272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 18:08:25.086745 containerd[1525]: 2025-09-12 18:08:25.001 [INFO][4272] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.5/26] IPv6=[] ContainerID="a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" HandleID="k8s-pod-network.a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" Workload="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0" Sep 12 18:08:25.090448 containerd[1525]: 2025-09-12 18:08:25.010 [INFO][4235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" Namespace="kube-system" Pod="coredns-674b8bbfcf-wq2qd" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67f73566-3e08-467e-ad6c-9763df385f9b", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"", Pod:"coredns-674b8bbfcf-wq2qd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali8d8a53a540e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:25.090448 containerd[1525]: 2025-09-12 18:08:25.010 [INFO][4235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.5/32] ContainerID="a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" Namespace="kube-system" Pod="coredns-674b8bbfcf-wq2qd" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0" Sep 12 18:08:25.090448 containerd[1525]: 2025-09-12 18:08:25.010 [INFO][4235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d8a53a540e ContainerID="a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" Namespace="kube-system" Pod="coredns-674b8bbfcf-wq2qd" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0" Sep 12 18:08:25.090448 containerd[1525]: 2025-09-12 18:08:25.032 [INFO][4235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" Namespace="kube-system" Pod="coredns-674b8bbfcf-wq2qd" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0" Sep 12 18:08:25.090448 containerd[1525]: 2025-09-12 18:08:25.039 [INFO][4235] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" Namespace="kube-system" Pod="coredns-674b8bbfcf-wq2qd" 
WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67f73566-3e08-467e-ad6c-9763df385f9b", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba", Pod:"coredns-674b8bbfcf-wq2qd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d8a53a540e", MAC:"c2:c1:c0:da:8c:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:25.090448 
containerd[1525]: 2025-09-12 18:08:25.066 [INFO][4235] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" Namespace="kube-system" Pod="coredns-674b8bbfcf-wq2qd" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-coredns--674b8bbfcf--wq2qd-eth0" Sep 12 18:08:25.104412 systemd[1]: Started cri-containerd-e8958fdedb2a319661b6d393cd6a2f1f5036b99ebfda3dfdbc7f432284377fdd.scope - libcontainer container e8958fdedb2a319661b6d393cd6a2f1f5036b99ebfda3dfdbc7f432284377fdd. Sep 12 18:08:25.136158 containerd[1525]: time="2025-09-12T18:08:25.136101597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d6b98655-l2l88,Uid:0b2617e8-97e9-4630-885d-6d0cae65c523,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da\"" Sep 12 18:08:25.146400 containerd[1525]: time="2025-09-12T18:08:25.146335507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 18:08:25.154866 systemd[1]: Started cri-containerd-6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e.scope - libcontainer container 6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e. 
Sep 12 18:08:25.169825 systemd-networkd[1452]: cali14526604626: Link UP Sep 12 18:08:25.177426 systemd-networkd[1452]: cali14526604626: Gained carrier Sep 12 18:08:25.193134 containerd[1525]: time="2025-09-12T18:08:25.192986526Z" level=info msg="connecting to shim a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba" address="unix:///run/containerd/s/4cf4265d54ed11f4ad5171a05bdd8413206f84e50734397960dabc4374edcd96" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:24.407 [INFO][4241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0 calico-apiserver-7b67d4c7b9- calico-apiserver c1dcc2d0-2db1-44d7-a487-378c0b3b156e 853 0 2025-09-12 18:07:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b67d4c7b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4426.1.0-6-3761596165 calico-apiserver-7b67d4c7b9-5mx2d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali14526604626 [] [] }} ContainerID="9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-5mx2d" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:24.408 [INFO][4241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-5mx2d" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:24.581 [INFO][4287] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" HandleID="k8s-pod-network.9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" Workload="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:24.582 [INFO][4287] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" HandleID="k8s-pod-network.9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" Workload="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001037b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4426.1.0-6-3761596165", "pod":"calico-apiserver-7b67d4c7b9-5mx2d", "timestamp":"2025-09-12 18:08:24.581311572 +0000 UTC"}, Hostname:"ci-4426.1.0-6-3761596165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:24.582 [INFO][4287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.001 [INFO][4287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.002 [INFO][4287] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-6-3761596165' Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.047 [INFO][4287] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.055 [INFO][4287] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.069 [INFO][4287] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.077 [INFO][4287] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.084 [INFO][4287] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.084 [INFO][4287] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.089 [INFO][4287] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13 Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.108 [INFO][4287] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.138 [INFO][4287] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.51.6/26] block=192.168.51.0/26 handle="k8s-pod-network.9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.139 [INFO][4287] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.6/26] handle="k8s-pod-network.9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.140 [INFO][4287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 18:08:25.218492 containerd[1525]: 2025-09-12 18:08:25.140 [INFO][4287] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.6/26] IPv6=[] ContainerID="9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" HandleID="k8s-pod-network.9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" Workload="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0" Sep 12 18:08:25.219680 containerd[1525]: 2025-09-12 18:08:25.159 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-5mx2d" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0", GenerateName:"calico-apiserver-7b67d4c7b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1dcc2d0-2db1-44d7-a487-378c0b3b156e", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7b67d4c7b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"", Pod:"calico-apiserver-7b67d4c7b9-5mx2d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14526604626", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:25.219680 containerd[1525]: 2025-09-12 18:08:25.160 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.6/32] ContainerID="9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-5mx2d" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0" Sep 12 18:08:25.219680 containerd[1525]: 2025-09-12 18:08:25.160 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14526604626 ContainerID="9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-5mx2d" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0" Sep 12 18:08:25.219680 containerd[1525]: 2025-09-12 18:08:25.186 [INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-5mx2d" 
WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0" Sep 12 18:08:25.219680 containerd[1525]: 2025-09-12 18:08:25.189 [INFO][4241] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-5mx2d" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0", GenerateName:"calico-apiserver-7b67d4c7b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1dcc2d0-2db1-44d7-a487-378c0b3b156e", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b67d4c7b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13", Pod:"calico-apiserver-7b67d4c7b9-5mx2d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14526604626", MAC:"4e:19:c1:d3:07:6d", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:25.219680 containerd[1525]: 2025-09-12 18:08:25.209 [INFO][4241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" Namespace="calico-apiserver" Pod="calico-apiserver-7b67d4c7b9-5mx2d" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-calico--apiserver--7b67d4c7b9--5mx2d-eth0" Sep 12 18:08:25.226099 containerd[1525]: time="2025-09-12T18:08:25.226038053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9q8nh,Uid:8397ff1e-892f-4ca6-94b4-3339b968f3e8,Namespace:calico-system,Attempt:0,}" Sep 12 18:08:25.227013 containerd[1525]: time="2025-09-12T18:08:25.226682086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76f77,Uid:4cff656a-7bb3-4e69-b0de-ea6ad3f24730,Namespace:calico-system,Attempt:0,}" Sep 12 18:08:25.291428 systemd[1]: Started cri-containerd-a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba.scope - libcontainer container a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba. Sep 12 18:08:25.300985 containerd[1525]: time="2025-09-12T18:08:25.300859030Z" level=info msg="StartContainer for \"e8958fdedb2a319661b6d393cd6a2f1f5036b99ebfda3dfdbc7f432284377fdd\" returns successfully" Sep 12 18:08:25.363921 containerd[1525]: time="2025-09-12T18:08:25.363396692Z" level=info msg="connecting to shim 9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13" address="unix:///run/containerd/s/0df0e0c671ab708a864dc31d4cf90e0df3de35114f66753511eb808c9bfb21a5" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:08:25.468419 systemd[1]: Started cri-containerd-9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13.scope - libcontainer container 9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13. 
Sep 12 18:08:25.480733 containerd[1525]: time="2025-09-12T18:08:25.480669772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wq2qd,Uid:67f73566-3e08-467e-ad6c-9763df385f9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba\"" Sep 12 18:08:25.482437 kubelet[2717]: E0912 18:08:25.482412 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:25.484085 kubelet[2717]: E0912 18:08:25.483725 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:25.493452 containerd[1525]: time="2025-09-12T18:08:25.493159043Z" level=info msg="CreateContainer within sandbox \"a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 18:08:25.535471 kubelet[2717]: I0912 18:08:25.535414 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w94jp" podStartSLOduration=41.535399032 podStartE2EDuration="41.535399032s" podCreationTimestamp="2025-09-12 18:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:08:25.535285313 +0000 UTC m=+46.503787402" watchObservedRunningTime="2025-09-12 18:08:25.535399032 +0000 UTC m=+46.503901120" Sep 12 18:08:25.609315 containerd[1525]: time="2025-09-12T18:08:25.609014939Z" level=info msg="Container 5f590a21787558c6a9d35446809d0911c97c6b00a076722575ad06fff98ba3fe: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:08:25.630534 containerd[1525]: time="2025-09-12T18:08:25.630489008Z" level=info msg="CreateContainer within sandbox 
\"a21290155287873fc80e08004f4c5da42a5c30d1c689b4ec6ddc80bf389ccdba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f590a21787558c6a9d35446809d0911c97c6b00a076722575ad06fff98ba3fe\"" Sep 12 18:08:25.633495 containerd[1525]: time="2025-09-12T18:08:25.633443873Z" level=info msg="StartContainer for \"5f590a21787558c6a9d35446809d0911c97c6b00a076722575ad06fff98ba3fe\"" Sep 12 18:08:25.640105 containerd[1525]: time="2025-09-12T18:08:25.639940379Z" level=info msg="connecting to shim 5f590a21787558c6a9d35446809d0911c97c6b00a076722575ad06fff98ba3fe" address="unix:///run/containerd/s/4cf4265d54ed11f4ad5171a05bdd8413206f84e50734397960dabc4374edcd96" protocol=ttrpc version=3 Sep 12 18:08:25.680284 systemd[1]: Started cri-containerd-5f590a21787558c6a9d35446809d0911c97c6b00a076722575ad06fff98ba3fe.scope - libcontainer container 5f590a21787558c6a9d35446809d0911c97c6b00a076722575ad06fff98ba3fe. Sep 12 18:08:25.746802 systemd-networkd[1452]: cali5c5c46d3643: Link UP Sep 12 18:08:25.757589 systemd-networkd[1452]: cali5c5c46d3643: Gained carrier Sep 12 18:08:25.790124 containerd[1525]: time="2025-09-12T18:08:25.788004607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b67d4c7b9-bf2ch,Uid:6b1ff3f0-812f-44c5-9187-9cedfb780f33,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e\"" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.422 [INFO][4542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0 csi-node-driver- calico-system 4cff656a-7bb3-4e69-b0de-ea6ad3f24730 727 0 2025-09-12 18:07:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4426.1.0-6-3761596165 csi-node-driver-76f77 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5c5c46d3643 [] [] }} ContainerID="d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" Namespace="calico-system" Pod="csi-node-driver-76f77" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.422 [INFO][4542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" Namespace="calico-system" Pod="csi-node-driver-76f77" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.553 [INFO][4600] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" HandleID="k8s-pod-network.d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" Workload="ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.553 [INFO][4600] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" HandleID="k8s-pod-network.d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" Workload="ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00017eb00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-6-3761596165", "pod":"csi-node-driver-76f77", "timestamp":"2025-09-12 18:08:25.553212518 +0000 UTC"}, Hostname:"ci-4426.1.0-6-3761596165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.553 [INFO][4600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.553 [INFO][4600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.553 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-6-3761596165' Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.593 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.608 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.647 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.650 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.660 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.660 [INFO][4600] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.666 [INFO][4600] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46 Sep 12 18:08:25.791573 
containerd[1525]: 2025-09-12 18:08:25.678 [INFO][4600] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.705 [INFO][4600] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.7/26] block=192.168.51.0/26 handle="k8s-pod-network.d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.705 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.7/26] handle="k8s-pod-network.d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.706 [INFO][4600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 18:08:25.791573 containerd[1525]: 2025-09-12 18:08:25.706 [INFO][4600] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.7/26] IPv6=[] ContainerID="d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" HandleID="k8s-pod-network.d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" Workload="ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0" Sep 12 18:08:25.792567 containerd[1525]: 2025-09-12 18:08:25.720 [INFO][4542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" Namespace="calico-system" Pod="csi-node-driver-76f77" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"4cff656a-7bb3-4e69-b0de-ea6ad3f24730", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"", Pod:"csi-node-driver-76f77", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c5c46d3643", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:25.792567 containerd[1525]: 2025-09-12 18:08:25.720 [INFO][4542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.7/32] ContainerID="d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" Namespace="calico-system" Pod="csi-node-driver-76f77" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0" Sep 12 18:08:25.792567 containerd[1525]: 2025-09-12 18:08:25.720 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c5c46d3643 ContainerID="d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" Namespace="calico-system" Pod="csi-node-driver-76f77" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0" Sep 12 18:08:25.792567 
containerd[1525]: 2025-09-12 18:08:25.760 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" Namespace="calico-system" Pod="csi-node-driver-76f77" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0" Sep 12 18:08:25.792567 containerd[1525]: 2025-09-12 18:08:25.761 [INFO][4542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" Namespace="calico-system" Pod="csi-node-driver-76f77" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4cff656a-7bb3-4e69-b0de-ea6ad3f24730", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46", Pod:"csi-node-driver-76f77", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.7/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c5c46d3643", MAC:"d6:92:e6:f9:83:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:25.792567 containerd[1525]: 2025-09-12 18:08:25.776 [INFO][4542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" Namespace="calico-system" Pod="csi-node-driver-76f77" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-csi--node--driver--76f77-eth0" Sep 12 18:08:25.825289 containerd[1525]: time="2025-09-12T18:08:25.825075330Z" level=info msg="StartContainer for \"5f590a21787558c6a9d35446809d0911c97c6b00a076722575ad06fff98ba3fe\" returns successfully" Sep 12 18:08:25.838705 systemd-networkd[1452]: cali5b372c55148: Link UP Sep 12 18:08:25.844538 systemd-networkd[1452]: cali5b372c55148: Gained carrier Sep 12 18:08:25.868863 containerd[1525]: time="2025-09-12T18:08:25.868377910Z" level=info msg="connecting to shim d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46" address="unix:///run/containerd/s/f584a2fe08f8bd388896cf59b04684ba0558c2aaa9eda86af5a986fd8e3812aa" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.476 [INFO][4535] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0 goldmane-54d579b49d- calico-system 8397ff1e-892f-4ca6-94b4-3339b968f3e8 850 0 2025-09-12 18:07:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4426.1.0-6-3761596165 goldmane-54d579b49d-9q8nh eth0 goldmane [] [] 
[kns.calico-system ksa.calico-system.goldmane] cali5b372c55148 [] [] }} ContainerID="1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" Namespace="calico-system" Pod="goldmane-54d579b49d-9q8nh" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.476 [INFO][4535] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" Namespace="calico-system" Pod="goldmane-54d579b49d-9q8nh" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.709 [INFO][4618] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" HandleID="k8s-pod-network.1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" Workload="ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.711 [INFO][4618] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" HandleID="k8s-pod-network.1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" Workload="ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fee0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-6-3761596165", "pod":"goldmane-54d579b49d-9q8nh", "timestamp":"2025-09-12 18:08:25.709247139 +0000 UTC"}, Hostname:"ci-4426.1.0-6-3761596165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.712 [INFO][4618] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.712 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.712 [INFO][4618] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-6-3761596165' Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.723 [INFO][4618] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.736 [INFO][4618] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.766 [INFO][4618] ipam/ipam.go 511: Trying affinity for 192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.776 [INFO][4618] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.782 [INFO][4618] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.782 [INFO][4618] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.790 [INFO][4618] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5 Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.801 [INFO][4618] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.0/26 
handle="k8s-pod-network.1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.822 [INFO][4618] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.8/26] block=192.168.51.0/26 handle="k8s-pod-network.1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.823 [INFO][4618] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.8/26] handle="k8s-pod-network.1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" host="ci-4426.1.0-6-3761596165" Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.823 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 18:08:25.890744 containerd[1525]: 2025-09-12 18:08:25.823 [INFO][4618] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.8/26] IPv6=[] ContainerID="1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" HandleID="k8s-pod-network.1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" Workload="ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0" Sep 12 18:08:25.893179 containerd[1525]: 2025-09-12 18:08:25.831 [INFO][4535] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" Namespace="calico-system" Pod="goldmane-54d579b49d-9q8nh" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"8397ff1e-892f-4ca6-94b4-3339b968f3e8", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 58, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"", Pod:"goldmane-54d579b49d-9q8nh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b372c55148", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:25.893179 containerd[1525]: 2025-09-12 18:08:25.832 [INFO][4535] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.8/32] ContainerID="1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" Namespace="calico-system" Pod="goldmane-54d579b49d-9q8nh" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0" Sep 12 18:08:25.893179 containerd[1525]: 2025-09-12 18:08:25.832 [INFO][4535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b372c55148 ContainerID="1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" Namespace="calico-system" Pod="goldmane-54d579b49d-9q8nh" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0" Sep 12 18:08:25.893179 containerd[1525]: 2025-09-12 18:08:25.846 [INFO][4535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" Namespace="calico-system" 
Pod="goldmane-54d579b49d-9q8nh" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0" Sep 12 18:08:25.893179 containerd[1525]: 2025-09-12 18:08:25.847 [INFO][4535] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" Namespace="calico-system" Pod="goldmane-54d579b49d-9q8nh" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"8397ff1e-892f-4ca6-94b4-3339b968f3e8", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 18, 7, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-6-3761596165", ContainerID:"1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5", Pod:"goldmane-54d579b49d-9q8nh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b372c55148", MAC:"46:65:2c:3c:b5:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 12 18:08:25.893179 containerd[1525]: 2025-09-12 18:08:25.875 [INFO][4535] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" Namespace="calico-system" Pod="goldmane-54d579b49d-9q8nh" WorkloadEndpoint="ci--4426.1.0--6--3761596165-k8s-goldmane--54d579b49d--9q8nh-eth0" Sep 12 18:08:25.951299 systemd[1]: Started cri-containerd-d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46.scope - libcontainer container d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46. Sep 12 18:08:25.966559 containerd[1525]: time="2025-09-12T18:08:25.966325827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b67d4c7b9-5mx2d,Uid:c1dcc2d0-2db1-44d7-a487-378c0b3b156e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13\"" Sep 12 18:08:25.975242 containerd[1525]: time="2025-09-12T18:08:25.975184812Z" level=info msg="connecting to shim 1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5" address="unix:///run/containerd/s/303f48012d6152a01099c7542de35e798db727440bc0a6678184be675418e910" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:08:26.024791 containerd[1525]: time="2025-09-12T18:08:26.024748737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76f77,Uid:4cff656a-7bb3-4e69-b0de-ea6ad3f24730,Namespace:calico-system,Attempt:0,} returns sandbox id \"d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46\"" Sep 12 18:08:26.036303 systemd[1]: Started cri-containerd-1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5.scope - libcontainer container 1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5. 
Sep 12 18:08:26.095806 containerd[1525]: time="2025-09-12T18:08:26.095761434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9q8nh,Uid:8397ff1e-892f-4ca6-94b4-3339b968f3e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5\"" Sep 12 18:08:26.228274 systemd-networkd[1452]: calieb2710229c3: Gained IPv6LL Sep 12 18:08:26.243591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553476334.mount: Deactivated successfully. Sep 12 18:08:26.485785 systemd-networkd[1452]: cali66c8643a099: Gained IPv6LL Sep 12 18:08:26.490823 kubelet[2717]: E0912 18:08:26.490541 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:26.491812 kubelet[2717]: E0912 18:08:26.491502 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:26.529809 kubelet[2717]: I0912 18:08:26.529738 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wq2qd" podStartSLOduration=42.529715777 podStartE2EDuration="42.529715777s" podCreationTimestamp="2025-09-12 18:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:08:26.51116353 +0000 UTC m=+47.479665619" watchObservedRunningTime="2025-09-12 18:08:26.529715777 +0000 UTC m=+47.498217866" Sep 12 18:08:26.549441 systemd-networkd[1452]: cali14526604626: Gained IPv6LL Sep 12 18:08:26.551356 systemd-networkd[1452]: calif6377dfc784: Gained IPv6LL Sep 12 18:08:26.740352 systemd-networkd[1452]: cali8d8a53a540e: Gained IPv6LL Sep 12 18:08:27.317617 systemd-networkd[1452]: cali5b372c55148: Gained IPv6LL Sep 12 
18:08:27.317981 systemd-networkd[1452]: cali5c5c46d3643: Gained IPv6LL Sep 12 18:08:27.494589 kubelet[2717]: E0912 18:08:27.494555 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:27.494589 kubelet[2717]: E0912 18:08:27.494599 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:28.496987 kubelet[2717]: E0912 18:08:28.496944 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:28.498228 kubelet[2717]: E0912 18:08:28.498203 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:08:28.907331 containerd[1525]: time="2025-09-12T18:08:28.907255678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:28.908184 containerd[1525]: time="2025-09-12T18:08:28.908157867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 18:08:28.909270 containerd[1525]: time="2025-09-12T18:08:28.909094478Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:28.910851 containerd[1525]: time="2025-09-12T18:08:28.910821238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:28.911721 containerd[1525]: time="2025-09-12T18:08:28.911538472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.764742852s" Sep 12 18:08:28.911721 containerd[1525]: time="2025-09-12T18:08:28.911566049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 18:08:28.912850 containerd[1525]: time="2025-09-12T18:08:28.912826859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 18:08:28.968897 containerd[1525]: time="2025-09-12T18:08:28.968857365Z" level=info msg="CreateContainer within sandbox \"ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 18:08:28.995720 containerd[1525]: time="2025-09-12T18:08:28.995649082Z" level=info msg="Container 9a5e320ca93692b0fddaa7d94f5e700ca51885b169c3bf24bf0d4dad19454d81: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:08:29.014710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988592997.mount: Deactivated successfully. 
Sep 12 18:08:29.031517 containerd[1525]: time="2025-09-12T18:08:29.031330888Z" level=info msg="CreateContainer within sandbox \"ee7319a489f0860e0077c397536206fcc423a1d1c83c01f890a13d70be0049da\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9a5e320ca93692b0fddaa7d94f5e700ca51885b169c3bf24bf0d4dad19454d81\"" Sep 12 18:08:29.036126 containerd[1525]: time="2025-09-12T18:08:29.035982524Z" level=info msg="StartContainer for \"9a5e320ca93692b0fddaa7d94f5e700ca51885b169c3bf24bf0d4dad19454d81\"" Sep 12 18:08:29.038776 containerd[1525]: time="2025-09-12T18:08:29.038714710Z" level=info msg="connecting to shim 9a5e320ca93692b0fddaa7d94f5e700ca51885b169c3bf24bf0d4dad19454d81" address="unix:///run/containerd/s/2e290ef2dfa660ff82b62bb575a4230195469a3625a278a4bd801e1466c316ce" protocol=ttrpc version=3 Sep 12 18:08:29.109289 systemd[1]: Started cri-containerd-9a5e320ca93692b0fddaa7d94f5e700ca51885b169c3bf24bf0d4dad19454d81.scope - libcontainer container 9a5e320ca93692b0fddaa7d94f5e700ca51885b169c3bf24bf0d4dad19454d81. 
Sep 12 18:08:29.200070 containerd[1525]: time="2025-09-12T18:08:29.199481673Z" level=info msg="StartContainer for \"9a5e320ca93692b0fddaa7d94f5e700ca51885b169c3bf24bf0d4dad19454d81\" returns successfully" Sep 12 18:08:29.577148 kubelet[2717]: I0912 18:08:29.572872 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75d6b98655-l2l88" podStartSLOduration=26.806029979 podStartE2EDuration="30.572851805s" podCreationTimestamp="2025-09-12 18:07:59 +0000 UTC" firstStartedPulling="2025-09-12 18:08:25.14574684 +0000 UTC m=+46.114248921" lastFinishedPulling="2025-09-12 18:08:28.912568679 +0000 UTC m=+49.881070747" observedRunningTime="2025-09-12 18:08:29.568986259 +0000 UTC m=+50.537488347" watchObservedRunningTime="2025-09-12 18:08:29.572851805 +0000 UTC m=+50.541353887" Sep 12 18:08:29.636048 containerd[1525]: time="2025-09-12T18:08:29.635977944Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a5e320ca93692b0fddaa7d94f5e700ca51885b169c3bf24bf0d4dad19454d81\" id:\"be4515702597670edc2df374399545822c3c79e3e2b1324710e53bc7c72105f0\" pid:4860 exited_at:{seconds:1757700509 nanos:633963484}" Sep 12 18:08:31.547306 containerd[1525]: time="2025-09-12T18:08:31.546640917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:31.549071 containerd[1525]: time="2025-09-12T18:08:31.547131078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 18:08:31.550533 containerd[1525]: time="2025-09-12T18:08:31.549994670Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:31.553277 containerd[1525]: time="2025-09-12T18:08:31.553223821Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:31.554395 containerd[1525]: time="2025-09-12T18:08:31.554348159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.641490255s" Sep 12 18:08:31.554844 containerd[1525]: time="2025-09-12T18:08:31.554513365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 18:08:31.558149 containerd[1525]: time="2025-09-12T18:08:31.557966033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 18:08:31.566429 containerd[1525]: time="2025-09-12T18:08:31.566228391Z" level=info msg="CreateContainer within sandbox \"6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 18:08:31.584082 containerd[1525]: time="2025-09-12T18:08:31.583405208Z" level=info msg="Container eeac282fa8227779b65e2cb422193a742bec9cc3d3210a09003ac5671296130f: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:08:31.601797 containerd[1525]: time="2025-09-12T18:08:31.601749879Z" level=info msg="CreateContainer within sandbox \"6aa920b3114e2b8af15c962c5a10fb7a4f752bdc632cde781d4231d389f5911e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eeac282fa8227779b65e2cb422193a742bec9cc3d3210a09003ac5671296130f\"" Sep 12 18:08:31.603443 containerd[1525]: time="2025-09-12T18:08:31.603231066Z" level=info msg="StartContainer for 
\"eeac282fa8227779b65e2cb422193a742bec9cc3d3210a09003ac5671296130f\"" Sep 12 18:08:31.608789 containerd[1525]: time="2025-09-12T18:08:31.608589946Z" level=info msg="connecting to shim eeac282fa8227779b65e2cb422193a742bec9cc3d3210a09003ac5671296130f" address="unix:///run/containerd/s/0e664367f2df5e06de0072f771ee0c9917721ae30cc240de614f20bd4b380f68" protocol=ttrpc version=3 Sep 12 18:08:31.663496 systemd[1]: Started cri-containerd-eeac282fa8227779b65e2cb422193a742bec9cc3d3210a09003ac5671296130f.scope - libcontainer container eeac282fa8227779b65e2cb422193a742bec9cc3d3210a09003ac5671296130f. Sep 12 18:08:31.754267 containerd[1525]: time="2025-09-12T18:08:31.754222468Z" level=info msg="StartContainer for \"eeac282fa8227779b65e2cb422193a742bec9cc3d3210a09003ac5671296130f\" returns successfully" Sep 12 18:08:31.952751 containerd[1525]: time="2025-09-12T18:08:31.952690269Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:08:31.973048 containerd[1525]: time="2025-09-12T18:08:31.953608013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 18:08:31.973181 containerd[1525]: time="2025-09-12T18:08:31.972209397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 414.206764ms" Sep 12 18:08:31.973181 containerd[1525]: time="2025-09-12T18:08:31.973149627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 18:08:31.977538 containerd[1525]: time="2025-09-12T18:08:31.977476085Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 18:08:31.986536 containerd[1525]: time="2025-09-12T18:08:31.986483310Z" level=info msg="CreateContainer within sandbox \"9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 18:08:31.999615 containerd[1525]: time="2025-09-12T18:08:31.998311360Z" level=info msg="Container 8d5a6e7808eb64078b42020dffd54581bbd18690410bcc4a92245749907c2ad9: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:08:32.001840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2795090686.mount: Deactivated successfully. Sep 12 18:08:32.016303 containerd[1525]: time="2025-09-12T18:08:32.016232137Z" level=info msg="CreateContainer within sandbox \"9454fe3c343e005ab41339611c9263c03b4663eb2f821aab001914a4ef20fd13\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d5a6e7808eb64078b42020dffd54581bbd18690410bcc4a92245749907c2ad9\"" Sep 12 18:08:32.017985 containerd[1525]: time="2025-09-12T18:08:32.017948583Z" level=info msg="StartContainer for \"8d5a6e7808eb64078b42020dffd54581bbd18690410bcc4a92245749907c2ad9\"" Sep 12 18:08:32.021041 containerd[1525]: time="2025-09-12T18:08:32.020907964Z" level=info msg="connecting to shim 8d5a6e7808eb64078b42020dffd54581bbd18690410bcc4a92245749907c2ad9" address="unix:///run/containerd/s/0df0e0c671ab708a864dc31d4cf90e0df3de35114f66753511eb808c9bfb21a5" protocol=ttrpc version=3 Sep 12 18:08:32.079264 systemd[1]: Started cri-containerd-8d5a6e7808eb64078b42020dffd54581bbd18690410bcc4a92245749907c2ad9.scope - libcontainer container 8d5a6e7808eb64078b42020dffd54581bbd18690410bcc4a92245749907c2ad9. 
Sep 12 18:08:32.190913 containerd[1525]: time="2025-09-12T18:08:32.190857913Z" level=info msg="StartContainer for \"8d5a6e7808eb64078b42020dffd54581bbd18690410bcc4a92245749907c2ad9\" returns successfully"
Sep 12 18:08:32.605715 kubelet[2717]: I0912 18:08:32.605642 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b67d4c7b9-bf2ch" podStartSLOduration=31.848697712 podStartE2EDuration="37.60561338s" podCreationTimestamp="2025-09-12 18:07:55 +0000 UTC" firstStartedPulling="2025-09-12 18:08:25.799443338 +0000 UTC m=+46.767945406" lastFinishedPulling="2025-09-12 18:08:31.556359008 +0000 UTC m=+52.524861074" observedRunningTime="2025-09-12 18:08:32.605307569 +0000 UTC m=+53.573809659" watchObservedRunningTime="2025-09-12 18:08:32.60561338 +0000 UTC m=+53.574115471"
Sep 12 18:08:32.606666 kubelet[2717]: I0912 18:08:32.605759 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b67d4c7b9-5mx2d" podStartSLOduration=30.599347456 podStartE2EDuration="36.605752251s" podCreationTimestamp="2025-09-12 18:07:56 +0000 UTC" firstStartedPulling="2025-09-12 18:08:25.970154942 +0000 UTC m=+46.938657013" lastFinishedPulling="2025-09-12 18:08:31.976559728 +0000 UTC m=+52.945061808" observedRunningTime="2025-09-12 18:08:32.58696655 +0000 UTC m=+53.555468638" watchObservedRunningTime="2025-09-12 18:08:32.605752251 +0000 UTC m=+53.574254341"
Sep 12 18:08:33.397791 containerd[1525]: time="2025-09-12T18:08:33.396256912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:33.397791 containerd[1525]: time="2025-09-12T18:08:33.397121726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 12 18:08:33.398492 containerd[1525]: time="2025-09-12T18:08:33.398460552Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:33.402632 containerd[1525]: time="2025-09-12T18:08:33.402583651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:33.404360 containerd[1525]: time="2025-09-12T18:08:33.404115871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.426477905s"
Sep 12 18:08:33.404360 containerd[1525]: time="2025-09-12T18:08:33.404154865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 12 18:08:33.405685 containerd[1525]: time="2025-09-12T18:08:33.405514238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 12 18:08:33.409995 containerd[1525]: time="2025-09-12T18:08:33.409953801Z" level=info msg="CreateContainer within sandbox \"d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 18:08:33.448332 containerd[1525]: time="2025-09-12T18:08:33.448257432Z" level=info msg="Container aaab97ea2049ed25334fae707ae48859a3c555a6e92fa4d08f285b0bc5df2e10: CDI devices from CRI Config.CDIDevices: []"
Sep 12 18:08:33.508240 containerd[1525]: time="2025-09-12T18:08:33.508181050Z" level=info msg="CreateContainer within sandbox \"d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"aaab97ea2049ed25334fae707ae48859a3c555a6e92fa4d08f285b0bc5df2e10\""
Sep 12 18:08:33.509040 containerd[1525]: time="2025-09-12T18:08:33.508881001Z" level=info msg="StartContainer for \"aaab97ea2049ed25334fae707ae48859a3c555a6e92fa4d08f285b0bc5df2e10\""
Sep 12 18:08:33.510810 containerd[1525]: time="2025-09-12T18:08:33.510767622Z" level=info msg="connecting to shim aaab97ea2049ed25334fae707ae48859a3c555a6e92fa4d08f285b0bc5df2e10" address="unix:///run/containerd/s/f584a2fe08f8bd388896cf59b04684ba0558c2aaa9eda86af5a986fd8e3812aa" protocol=ttrpc version=3
Sep 12 18:08:33.561486 systemd[1]: Started cri-containerd-aaab97ea2049ed25334fae707ae48859a3c555a6e92fa4d08f285b0bc5df2e10.scope - libcontainer container aaab97ea2049ed25334fae707ae48859a3c555a6e92fa4d08f285b0bc5df2e10.
Sep 12 18:08:33.662878 containerd[1525]: time="2025-09-12T18:08:33.662753938Z" level=info msg="StartContainer for \"aaab97ea2049ed25334fae707ae48859a3c555a6e92fa4d08f285b0bc5df2e10\" returns successfully"
Sep 12 18:08:37.000815 systemd[1]: Started sshd@7-137.184.114.151:22-139.178.89.65:38654.service - OpenSSH per-connection server daemon (139.178.89.65:38654).
Sep 12 18:08:37.202496 sshd[5012]: Accepted publickey for core from 139.178.89.65 port 38654 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:08:37.207772 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:08:37.229244 systemd-logind[1498]: New session 8 of user core.
Sep 12 18:08:37.234431 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 18:08:37.994863 sshd[5015]: Connection closed by 139.178.89.65 port 38654
Sep 12 18:08:37.994130 sshd-session[5012]: pam_unix(sshd:session): session closed for user core
Sep 12 18:08:38.001669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396677575.mount: Deactivated successfully.
Sep 12 18:08:38.011373 systemd[1]: sshd@7-137.184.114.151:22-139.178.89.65:38654.service: Deactivated successfully.
Sep 12 18:08:38.014405 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 18:08:38.016697 systemd-logind[1498]: Session 8 logged out. Waiting for processes to exit.
Sep 12 18:08:38.020858 systemd-logind[1498]: Removed session 8.
Sep 12 18:08:38.665471 containerd[1525]: time="2025-09-12T18:08:38.665389978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:38.666998 containerd[1525]: time="2025-09-12T18:08:38.666265257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 12 18:08:38.668122 containerd[1525]: time="2025-09-12T18:08:38.668087996Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:38.670931 containerd[1525]: time="2025-09-12T18:08:38.670872802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:38.672921 containerd[1525]: time="2025-09-12T18:08:38.672830970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.267283579s"
Sep 12 18:08:38.673198 containerd[1525]: time="2025-09-12T18:08:38.673070398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 12 18:08:38.674458 containerd[1525]: time="2025-09-12T18:08:38.674408732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 18:08:38.684190 containerd[1525]: time="2025-09-12T18:08:38.684115114Z" level=info msg="CreateContainer within sandbox \"1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 18:08:38.712508 containerd[1525]: time="2025-09-12T18:08:38.711959257Z" level=info msg="Container 40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323: CDI devices from CRI Config.CDIDevices: []"
Sep 12 18:08:38.761313 containerd[1525]: time="2025-09-12T18:08:38.761252541Z" level=info msg="CreateContainer within sandbox \"1ce9de8e3515c660b871f8908d6e4b03b54bd8b2f9bfda447aec0554875322b5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323\""
Sep 12 18:08:38.762291 containerd[1525]: time="2025-09-12T18:08:38.762256237Z" level=info msg="StartContainer for \"40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323\""
Sep 12 18:08:38.769093 containerd[1525]: time="2025-09-12T18:08:38.769045630Z" level=info msg="connecting to shim 40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323" address="unix:///run/containerd/s/303f48012d6152a01099c7542de35e798db727440bc0a6678184be675418e910" protocol=ttrpc version=3
Sep 12 18:08:38.816284 systemd[1]: Started cri-containerd-40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323.scope - libcontainer container 40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323.
Sep 12 18:08:38.885200 containerd[1525]: time="2025-09-12T18:08:38.885087081Z" level=info msg="StartContainer for \"40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323\" returns successfully"
Sep 12 18:08:39.619089 kubelet[2717]: I0912 18:08:39.617434 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-9q8nh" podStartSLOduration=29.041289 podStartE2EDuration="41.617414833s" podCreationTimestamp="2025-09-12 18:07:58 +0000 UTC" firstStartedPulling="2025-09-12 18:08:26.098151843 +0000 UTC m=+47.066653911" lastFinishedPulling="2025-09-12 18:08:38.674277664 +0000 UTC m=+59.642779744" observedRunningTime="2025-09-12 18:08:39.614084036 +0000 UTC m=+60.582586127" watchObservedRunningTime="2025-09-12 18:08:39.617414833 +0000 UTC m=+60.585916921"
Sep 12 18:08:39.907384 containerd[1525]: time="2025-09-12T18:08:39.906476982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323\" id:\"c5f31afbd844aab456e642db93cbafe4f174add718ec80c55e4a5ebe646c2513\" pid:5085 exit_status:1 exited_at:{seconds:1757700519 nanos:876006536}"
Sep 12 18:08:40.737872 containerd[1525]: time="2025-09-12T18:08:40.737825012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323\" id:\"c6d51109c4f7cd80512780d268d71bce2b65892648f38837617d34f505a82d19\" pid:5114 exit_status:1 exited_at:{seconds:1757700520 nanos:737105303}"
Sep 12 18:08:41.741289 containerd[1525]: time="2025-09-12T18:08:41.741209106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323\" id:\"27a74acb89d79b8a4568d32c74ad29d070091ce70b85e27095124ed52fe43a5f\" pid:5136 exit_status:1 exited_at:{seconds:1757700521 nanos:739997809}"
Sep 12 18:08:41.980762 containerd[1525]: time="2025-09-12T18:08:41.980684951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:41.981477 containerd[1525]: time="2025-09-12T18:08:41.981451684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 18:08:41.982049 containerd[1525]: time="2025-09-12T18:08:41.981996534Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:41.984049 containerd[1525]: time="2025-09-12T18:08:41.983686789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 18:08:41.984769 containerd[1525]: time="2025-09-12T18:08:41.984739654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.310156212s"
Sep 12 18:08:41.984958 containerd[1525]: time="2025-09-12T18:08:41.984941020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 18:08:42.008829 containerd[1525]: time="2025-09-12T18:08:42.008199552Z" level=info msg="CreateContainer within sandbox \"d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 18:08:42.017042 containerd[1525]: time="2025-09-12T18:08:42.016241115Z" level=info msg="Container bf5a790b1b938077d6a676d6a2fdb0dbe70aff0afb7f1821294cbc9be2781e12: CDI devices from CRI Config.CDIDevices: []"
Sep 12 18:08:42.042876 containerd[1525]: time="2025-09-12T18:08:42.042827902Z" level=info msg="CreateContainer within sandbox \"d73a29bd69c2dbc5b87c31d8f6d9d6ee9b85d261a4c57d937d660edd8858ab46\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bf5a790b1b938077d6a676d6a2fdb0dbe70aff0afb7f1821294cbc9be2781e12\""
Sep 12 18:08:42.047193 containerd[1525]: time="2025-09-12T18:08:42.046666729Z" level=info msg="StartContainer for \"bf5a790b1b938077d6a676d6a2fdb0dbe70aff0afb7f1821294cbc9be2781e12\""
Sep 12 18:08:42.050731 containerd[1525]: time="2025-09-12T18:08:42.050679511Z" level=info msg="connecting to shim bf5a790b1b938077d6a676d6a2fdb0dbe70aff0afb7f1821294cbc9be2781e12" address="unix:///run/containerd/s/f584a2fe08f8bd388896cf59b04684ba0558c2aaa9eda86af5a986fd8e3812aa" protocol=ttrpc version=3
Sep 12 18:08:42.077411 systemd[1]: Started cri-containerd-bf5a790b1b938077d6a676d6a2fdb0dbe70aff0afb7f1821294cbc9be2781e12.scope - libcontainer container bf5a790b1b938077d6a676d6a2fdb0dbe70aff0afb7f1821294cbc9be2781e12.
Sep 12 18:08:42.133047 containerd[1525]: time="2025-09-12T18:08:42.132978076Z" level=info msg="StartContainer for \"bf5a790b1b938077d6a676d6a2fdb0dbe70aff0afb7f1821294cbc9be2781e12\" returns successfully"
Sep 12 18:08:42.513168 kubelet[2717]: I0912 18:08:42.513002 2717 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 18:08:42.522171 kubelet[2717]: I0912 18:08:42.522121 2717 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 18:08:43.023703 systemd[1]: Started sshd@8-137.184.114.151:22-139.178.89.65:40744.service - OpenSSH per-connection server daemon (139.178.89.65:40744).
Sep 12 18:08:43.206554 sshd[5184]: Accepted publickey for core from 139.178.89.65 port 40744 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:08:43.210855 sshd-session[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:08:43.217576 systemd-logind[1498]: New session 9 of user core.
Sep 12 18:08:43.224267 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 18:08:43.858293 sshd[5187]: Connection closed by 139.178.89.65 port 40744
Sep 12 18:08:43.858947 sshd-session[5184]: pam_unix(sshd:session): session closed for user core
Sep 12 18:08:43.864624 systemd-logind[1498]: Session 9 logged out. Waiting for processes to exit.
Sep 12 18:08:43.864929 systemd[1]: sshd@8-137.184.114.151:22-139.178.89.65:40744.service: Deactivated successfully.
Sep 12 18:08:43.867189 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 18:08:43.870001 systemd-logind[1498]: Removed session 9.
Sep 12 18:08:45.707014 containerd[1525]: time="2025-09-12T18:08:45.706962378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323\" id:\"667e559544605a43f992fef4d87daf9baccfa2810b232d571434d65b42d0d90f\" pid:5213 exited_at:{seconds:1757700525 nanos:706300528}"
Sep 12 18:08:48.827687 containerd[1525]: time="2025-09-12T18:08:48.827620553Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bef7db5284ee35c284678307a30cc54996944249d806d9092a40f284b9cb95a\" id:\"a57b8aebcb72fc12b03b55be0fe8711b5fc9377d413b21b9e7f61e7e3853c5c6\" pid:5236 exited_at:{seconds:1757700528 nanos:827194381}"
Sep 12 18:08:48.877082 systemd[1]: Started sshd@9-137.184.114.151:22-139.178.89.65:40746.service - OpenSSH per-connection server daemon (139.178.89.65:40746).
Sep 12 18:08:48.974682 kubelet[2717]: I0912 18:08:48.964952 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-76f77" podStartSLOduration=33.965475174 podStartE2EDuration="49.923381832s" podCreationTimestamp="2025-09-12 18:07:59 +0000 UTC" firstStartedPulling="2025-09-12 18:08:26.028810818 +0000 UTC m=+46.997312886" lastFinishedPulling="2025-09-12 18:08:41.986717477 +0000 UTC m=+62.955219544" observedRunningTime="2025-09-12 18:08:42.643517694 +0000 UTC m=+63.612019782" watchObservedRunningTime="2025-09-12 18:08:48.923381832 +0000 UTC m=+69.891883921"
Sep 12 18:08:49.013456 sshd[5249]: Accepted publickey for core from 139.178.89.65 port 40746 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:08:49.015963 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:08:49.022380 systemd-logind[1498]: New session 10 of user core.
Sep 12 18:08:49.027557 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 18:08:49.334945 sshd[5255]: Connection closed by 139.178.89.65 port 40746
Sep 12 18:08:49.335909 sshd-session[5249]: pam_unix(sshd:session): session closed for user core
Sep 12 18:08:49.348494 systemd[1]: sshd@9-137.184.114.151:22-139.178.89.65:40746.service: Deactivated successfully.
Sep 12 18:08:49.352311 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 18:08:49.353843 systemd-logind[1498]: Session 10 logged out. Waiting for processes to exit.
Sep 12 18:08:49.359451 systemd[1]: Started sshd@10-137.184.114.151:22-139.178.89.65:40760.service - OpenSSH per-connection server daemon (139.178.89.65:40760).
Sep 12 18:08:49.361084 systemd-logind[1498]: Removed session 10.
Sep 12 18:08:49.438766 sshd[5267]: Accepted publickey for core from 139.178.89.65 port 40760 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:08:49.441596 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:08:49.452628 systemd-logind[1498]: New session 11 of user core.
Sep 12 18:08:49.465338 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 18:08:49.695663 sshd[5270]: Connection closed by 139.178.89.65 port 40760
Sep 12 18:08:49.696298 sshd-session[5267]: pam_unix(sshd:session): session closed for user core
Sep 12 18:08:49.711376 systemd[1]: sshd@10-137.184.114.151:22-139.178.89.65:40760.service: Deactivated successfully.
Sep 12 18:08:49.716789 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 18:08:49.718617 systemd-logind[1498]: Session 11 logged out. Waiting for processes to exit.
Sep 12 18:08:49.723819 systemd-logind[1498]: Removed session 11.
Sep 12 18:08:49.726414 systemd[1]: Started sshd@11-137.184.114.151:22-139.178.89.65:40774.service - OpenSSH per-connection server daemon (139.178.89.65:40774).
Sep 12 18:08:49.815066 sshd[5280]: Accepted publickey for core from 139.178.89.65 port 40774 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:08:49.817149 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:08:49.824204 systemd-logind[1498]: New session 12 of user core.
Sep 12 18:08:49.833317 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 18:08:49.988032 sshd[5283]: Connection closed by 139.178.89.65 port 40774
Sep 12 18:08:49.989391 sshd-session[5280]: pam_unix(sshd:session): session closed for user core
Sep 12 18:08:49.995772 systemd-logind[1498]: Session 12 logged out. Waiting for processes to exit.
Sep 12 18:08:49.996165 systemd[1]: sshd@11-137.184.114.151:22-139.178.89.65:40774.service: Deactivated successfully.
Sep 12 18:08:49.999420 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 18:08:50.002176 systemd-logind[1498]: Removed session 12.
Sep 12 18:08:55.007353 systemd[1]: Started sshd@12-137.184.114.151:22-139.178.89.65:59624.service - OpenSSH per-connection server daemon (139.178.89.65:59624).
Sep 12 18:08:55.076137 sshd[5299]: Accepted publickey for core from 139.178.89.65 port 59624 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:08:55.080698 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:08:55.088080 systemd-logind[1498]: New session 13 of user core.
Sep 12 18:08:55.093295 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 18:08:55.260996 sshd[5304]: Connection closed by 139.178.89.65 port 59624
Sep 12 18:08:55.262667 sshd-session[5299]: pam_unix(sshd:session): session closed for user core
Sep 12 18:08:55.271172 systemd-logind[1498]: Session 13 logged out. Waiting for processes to exit.
Sep 12 18:08:55.271881 systemd[1]: sshd@12-137.184.114.151:22-139.178.89.65:59624.service: Deactivated successfully.
Sep 12 18:08:55.275883 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 18:08:55.280163 systemd-logind[1498]: Removed session 13.
Sep 12 18:08:59.667482 containerd[1525]: time="2025-09-12T18:08:59.667412183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a5e320ca93692b0fddaa7d94f5e700ca51885b169c3bf24bf0d4dad19454d81\" id:\"97388994b4d933ca1de1a55713a6ff23b520ac36a7e4d7a1595a12054d8d6ddf\" pid:5330 exited_at:{seconds:1757700539 nanos:666875876}"
Sep 12 18:09:00.283889 systemd[1]: Started sshd@13-137.184.114.151:22-139.178.89.65:49288.service - OpenSSH per-connection server daemon (139.178.89.65:49288).
Sep 12 18:09:00.488434 sshd[5340]: Accepted publickey for core from 139.178.89.65 port 49288 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:09:00.493118 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:09:00.505123 systemd-logind[1498]: New session 14 of user core.
Sep 12 18:09:00.509285 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 18:09:01.128751 sshd[5343]: Connection closed by 139.178.89.65 port 49288
Sep 12 18:09:01.130614 sshd-session[5340]: pam_unix(sshd:session): session closed for user core
Sep 12 18:09:01.146920 systemd[1]: sshd@13-137.184.114.151:22-139.178.89.65:49288.service: Deactivated successfully.
Sep 12 18:09:01.149091 systemd-logind[1498]: Session 14 logged out. Waiting for processes to exit.
Sep 12 18:09:01.150887 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 18:09:01.161300 systemd-logind[1498]: Removed session 14.
Sep 12 18:09:05.253270 kubelet[2717]: E0912 18:09:05.252992 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:09:06.144188 systemd[1]: Started sshd@14-137.184.114.151:22-139.178.89.65:49294.service - OpenSSH per-connection server daemon (139.178.89.65:49294).
Sep 12 18:09:06.250903 sshd[5363]: Accepted publickey for core from 139.178.89.65 port 49294 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:09:06.252849 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:09:06.260253 systemd-logind[1498]: New session 15 of user core.
Sep 12 18:09:06.265305 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 18:09:06.638731 sshd[5366]: Connection closed by 139.178.89.65 port 49294
Sep 12 18:09:06.639640 sshd-session[5363]: pam_unix(sshd:session): session closed for user core
Sep 12 18:09:06.650688 systemd[1]: sshd@14-137.184.114.151:22-139.178.89.65:49294.service: Deactivated successfully.
Sep 12 18:09:06.654185 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 18:09:06.655690 systemd-logind[1498]: Session 15 logged out. Waiting for processes to exit.
Sep 12 18:09:06.660368 systemd[1]: Started sshd@15-137.184.114.151:22-139.178.89.65:49298.service - OpenSSH per-connection server daemon (139.178.89.65:49298).
Sep 12 18:09:06.662549 systemd-logind[1498]: Removed session 15.
Sep 12 18:09:06.724136 sshd[5378]: Accepted publickey for core from 139.178.89.65 port 49298 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:09:06.725952 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:09:06.733193 systemd-logind[1498]: New session 16 of user core.
Sep 12 18:09:06.739300 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 18:09:07.049216 sshd[5381]: Connection closed by 139.178.89.65 port 49298
Sep 12 18:09:07.050094 sshd-session[5378]: pam_unix(sshd:session): session closed for user core
Sep 12 18:09:07.062393 systemd[1]: sshd@15-137.184.114.151:22-139.178.89.65:49298.service: Deactivated successfully.
Sep 12 18:09:07.065504 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 18:09:07.068744 systemd-logind[1498]: Session 16 logged out. Waiting for processes to exit.
Sep 12 18:09:07.074986 systemd[1]: Started sshd@16-137.184.114.151:22-139.178.89.65:49304.service - OpenSSH per-connection server daemon (139.178.89.65:49304).
Sep 12 18:09:07.077784 systemd-logind[1498]: Removed session 16.
Sep 12 18:09:07.176473 sshd[5390]: Accepted publickey for core from 139.178.89.65 port 49304 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:09:07.178700 sshd-session[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:09:07.186930 systemd-logind[1498]: New session 17 of user core.
Sep 12 18:09:07.191305 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 18:09:08.089349 sshd[5393]: Connection closed by 139.178.89.65 port 49304
Sep 12 18:09:08.090720 sshd-session[5390]: pam_unix(sshd:session): session closed for user core
Sep 12 18:09:08.107495 systemd[1]: sshd@16-137.184.114.151:22-139.178.89.65:49304.service: Deactivated successfully.
Sep 12 18:09:08.114863 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 18:09:08.119531 systemd-logind[1498]: Session 17 logged out. Waiting for processes to exit.
Sep 12 18:09:08.126631 systemd[1]: Started sshd@17-137.184.114.151:22-139.178.89.65:49318.service - OpenSSH per-connection server daemon (139.178.89.65:49318).
Sep 12 18:09:08.129783 systemd-logind[1498]: Removed session 17.
Sep 12 18:09:08.236152 sshd[5409]: Accepted publickey for core from 139.178.89.65 port 49318 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:09:08.237371 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:09:08.244100 systemd-logind[1498]: New session 18 of user core.
Sep 12 18:09:08.252331 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 18:09:08.839917 sshd[5415]: Connection closed by 139.178.89.65 port 49318
Sep 12 18:09:08.842542 sshd-session[5409]: pam_unix(sshd:session): session closed for user core
Sep 12 18:09:08.863629 systemd[1]: sshd@17-137.184.114.151:22-139.178.89.65:49318.service: Deactivated successfully.
Sep 12 18:09:08.867934 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 18:09:08.870214 systemd-logind[1498]: Session 18 logged out. Waiting for processes to exit.
Sep 12 18:09:08.875243 systemd-logind[1498]: Removed session 18.
Sep 12 18:09:08.877965 systemd[1]: Started sshd@18-137.184.114.151:22-139.178.89.65:49334.service - OpenSSH per-connection server daemon (139.178.89.65:49334).
Sep 12 18:09:09.013476 sshd[5424]: Accepted publickey for core from 139.178.89.65 port 49334 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:09:09.017079 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:09:09.025315 systemd-logind[1498]: New session 19 of user core.
Sep 12 18:09:09.031329 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 18:09:09.194830 sshd[5427]: Connection closed by 139.178.89.65 port 49334
Sep 12 18:09:09.195653 sshd-session[5424]: pam_unix(sshd:session): session closed for user core
Sep 12 18:09:09.201412 systemd[1]: sshd@18-137.184.114.151:22-139.178.89.65:49334.service: Deactivated successfully.
Sep 12 18:09:09.206543 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 18:09:09.208188 systemd-logind[1498]: Session 19 logged out. Waiting for processes to exit.
Sep 12 18:09:09.211964 systemd-logind[1498]: Removed session 19.
Sep 12 18:09:09.230819 kubelet[2717]: E0912 18:09:09.229960 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:09:10.222718 kubelet[2717]: E0912 18:09:10.222584 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:09:11.739865 containerd[1525]: time="2025-09-12T18:09:11.739809929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40fb4643ee69565bc5d816c33ef0b52065defc57e73aeacfa65ed3b7ad10e323\" id:\"8fa4af1085ff501f84283b71fa80b1427feff1abf180f6fb95f6d71fd9b42075\" pid:5451 exited_at:{seconds:1757700551 nanos:739103292}"
Sep 12 18:09:13.223969 kubelet[2717]: E0912 18:09:13.223898 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:09:14.212587 systemd[1]: Started sshd@19-137.184.114.151:22-139.178.89.65:39790.service - OpenSSH per-connection server daemon (139.178.89.65:39790).
Sep 12 18:09:14.379487 sshd[5464]: Accepted publickey for core from 139.178.89.65 port 39790 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:09:14.382699 sshd-session[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:09:14.393207 systemd-logind[1498]: New session 20 of user core.
Sep 12 18:09:14.399295 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 18:09:15.401701 sshd[5467]: Connection closed by 139.178.89.65 port 39790
Sep 12 18:09:15.400659 sshd-session[5464]: pam_unix(sshd:session): session closed for user core
Sep 12 18:09:15.413408 systemd[1]: sshd@19-137.184.114.151:22-139.178.89.65:39790.service: Deactivated successfully.
Sep 12 18:09:15.417365 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 18:09:15.420697 systemd-logind[1498]: Session 20 logged out. Waiting for processes to exit.
Sep 12 18:09:15.423539 systemd-logind[1498]: Removed session 20.
Sep 12 18:09:17.333070 containerd[1525]: time="2025-09-12T18:09:17.332424669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a5e320ca93692b0fddaa7d94f5e700ca51885b169c3bf24bf0d4dad19454d81\" id:\"51f43fe99ba7ea17a6026280f23f02268059c268c9814022ea69cea225355847\" pid:5495 exited_at:{seconds:1757700557 nanos:331917793}"
Sep 12 18:09:18.795030 containerd[1525]: time="2025-09-12T18:09:18.794786462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bef7db5284ee35c284678307a30cc54996944249d806d9092a40f284b9cb95a\" id:\"0d044037b951343394287c25dece9427117d2ff0fe4d6cb8a5cb9e5d2cde3d7d\" pid:5517 exited_at:{seconds:1757700558 nanos:794404545}"
Sep 12 18:09:20.438677 systemd[1]: Started sshd@20-137.184.114.151:22-139.178.89.65:56526.service - OpenSSH per-connection server daemon (139.178.89.65:56526).
Sep 12 18:09:20.586725 sshd[5530]: Accepted publickey for core from 139.178.89.65 port 56526 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:09:20.589665 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:09:20.598106 systemd-logind[1498]: New session 21 of user core.
Sep 12 18:09:20.605205 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 18:09:21.436237 sshd[5533]: Connection closed by 139.178.89.65 port 56526
Sep 12 18:09:21.436735 sshd-session[5530]: pam_unix(sshd:session): session closed for user core
Sep 12 18:09:21.446177 systemd[1]: sshd@20-137.184.114.151:22-139.178.89.65:56526.service: Deactivated successfully.
Sep 12 18:09:21.449502 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 18:09:21.453499 systemd-logind[1498]: Session 21 logged out. Waiting for processes to exit.
Sep 12 18:09:21.456944 systemd-logind[1498]: Removed session 21.
Sep 12 18:09:24.237273 kubelet[2717]: E0912 18:09:24.236706 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 18:09:26.455969 systemd[1]: Started sshd@21-137.184.114.151:22-139.178.89.65:56534.service - OpenSSH per-connection server daemon (139.178.89.65:56534).
Sep 12 18:09:26.553101 sshd[5547]: Accepted publickey for core from 139.178.89.65 port 56534 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ
Sep 12 18:09:26.555014 sshd-session[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:09:26.561944 systemd-logind[1498]: New session 22 of user core.
Sep 12 18:09:26.566360 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 18:09:26.842144 sshd[5550]: Connection closed by 139.178.89.65 port 56534
Sep 12 18:09:26.842140 sshd-session[5547]: pam_unix(sshd:session): session closed for user core
Sep 12 18:09:26.851470 systemd-logind[1498]: Session 22 logged out. Waiting for processes to exit.
Sep 12 18:09:26.851574 systemd[1]: sshd@21-137.184.114.151:22-139.178.89.65:56534.service: Deactivated successfully.
Sep 12 18:09:26.855734 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 18:09:26.860742 systemd-logind[1498]: Removed session 22.
Sep 12 18:09:28.210211 systemd[1]: Started sshd@22-137.184.114.151:22-172.236.228.39:8064.service - OpenSSH per-connection server daemon (172.236.228.39:8064).
Sep 12 18:09:28.586482 kernel: hrtimer: interrupt took 6093240 ns