Sep 9 05:35:07.945221 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 03:39:34 -00 2025
Sep 9 05:35:07.945269 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:35:07.945285 kernel: BIOS-provided physical RAM map:
Sep 9 05:35:07.945297 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 9 05:35:07.945307 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 9 05:35:07.945318 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 9 05:35:07.945330 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 9 05:35:07.945342 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 9 05:35:07.945352 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 05:35:07.945359 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 9 05:35:07.945366 kernel: NX (Execute Disable) protection: active
Sep 9 05:35:07.945373 kernel: APIC: Static calls initialized
Sep 9 05:35:07.945380 kernel: SMBIOS 2.8 present.
Sep 9 05:35:07.945387 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 9 05:35:07.945397 kernel: DMI: Memory slots populated: 1/1
Sep 9 05:35:07.945404 kernel: Hypervisor detected: KVM
Sep 9 05:35:07.945416 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 05:35:07.945423 kernel: kvm-clock: using sched offset of 5192595725 cycles
Sep 9 05:35:07.945432 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 05:35:07.945439 kernel: tsc: Detected 1995.307 MHz processor
Sep 9 05:35:07.945447 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 05:35:07.945456 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 05:35:07.945463 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 9 05:35:07.945473 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 9 05:35:07.945481 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 05:35:07.945488 kernel: ACPI: Early table checksum verification disabled
Sep 9 05:35:07.945496 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 9 05:35:07.945504 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:07.945511 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:07.945518 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:07.945526 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 9 05:35:07.945533 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:07.945543 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:07.945550 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:07.945557 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:07.945565 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 9 05:35:07.945572 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 9 05:35:07.945579 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 9 05:35:07.945587 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 9 05:35:07.945594 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 9 05:35:07.945608 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 9 05:35:07.945615 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 9 05:35:07.945623 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 9 05:35:07.945631 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 9 05:35:07.945639 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Sep 9 05:35:07.945649 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Sep 9 05:35:07.945657 kernel: Zone ranges:
Sep 9 05:35:07.945665 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 05:35:07.945673 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 9 05:35:07.945680 kernel: Normal empty
Sep 9 05:35:07.945688 kernel: Device empty
Sep 9 05:35:07.945696 kernel: Movable zone start for each node
Sep 9 05:35:07.945704 kernel: Early memory node ranges
Sep 9 05:35:07.945711 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 9 05:35:07.945719 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 9 05:35:07.945729 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 9 05:35:07.945736 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 05:35:07.945744 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 9 05:35:07.945752 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 9 05:35:07.945759 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 05:35:07.945767 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 05:35:07.945779 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 05:35:07.946930 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 05:35:07.946994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 05:35:07.947015 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 05:35:07.947028 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 05:35:07.947046 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 05:35:07.947827 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 05:35:07.947842 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 05:35:07.947851 kernel: TSC deadline timer available
Sep 9 05:35:07.947859 kernel: CPU topo: Max. logical packages: 1
Sep 9 05:35:07.947867 kernel: CPU topo: Max. logical dies: 1
Sep 9 05:35:07.947875 kernel: CPU topo: Max. dies per package: 1
Sep 9 05:35:07.947883 kernel: CPU topo: Max. threads per core: 1
Sep 9 05:35:07.947895 kernel: CPU topo: Num. cores per package: 2
Sep 9 05:35:07.947903 kernel: CPU topo: Num. threads per package: 2
Sep 9 05:35:07.947911 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 9 05:35:07.947919 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 05:35:07.947927 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 9 05:35:07.947935 kernel: Booting paravirtualized kernel on KVM
Sep 9 05:35:07.947944 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 05:35:07.947952 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 9 05:35:07.947960 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 9 05:35:07.947971 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 9 05:35:07.947979 kernel: pcpu-alloc: [0] 0 1
Sep 9 05:35:07.947987 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 9 05:35:07.947999 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:35:07.948017 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 05:35:07.948029 kernel: random: crng init done
Sep 9 05:35:07.948040 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 05:35:07.948052 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 9 05:35:07.948082 kernel: Fallback order for Node 0: 0
Sep 9 05:35:07.948093 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Sep 9 05:35:07.948104 kernel: Policy zone: DMA32
Sep 9 05:35:07.948117 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 05:35:07.948129 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 05:35:07.948142 kernel: Kernel/User page tables isolation: enabled
Sep 9 05:35:07.948150 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 05:35:07.948158 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 05:35:07.948166 kernel: Dynamic Preempt: voluntary
Sep 9 05:35:07.948184 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 05:35:07.948198 kernel: rcu: RCU event tracing is enabled.
Sep 9 05:35:07.948210 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 05:35:07.948222 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 05:35:07.948235 kernel: Rude variant of Tasks RCU enabled.
Sep 9 05:35:07.948243 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 05:35:07.948252 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 05:35:07.948260 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 05:35:07.948269 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:35:07.948287 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:35:07.948295 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:35:07.948303 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 9 05:35:07.948311 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 05:35:07.948319 kernel: Console: colour VGA+ 80x25
Sep 9 05:35:07.948327 kernel: printk: legacy console [tty0] enabled
Sep 9 05:35:07.948335 kernel: printk: legacy console [ttyS0] enabled
Sep 9 05:35:07.948343 kernel: ACPI: Core revision 20240827
Sep 9 05:35:07.948352 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 05:35:07.948371 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 05:35:07.948380 kernel: x2apic enabled
Sep 9 05:35:07.948388 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 05:35:07.948399 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 05:35:07.948412 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985ba32100, max_idle_ns: 881590654722 ns
Sep 9 05:35:07.948421 kernel: Calibrating delay loop (skipped) preset value.. 3990.61 BogoMIPS (lpj=1995307)
Sep 9 05:35:07.948429 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 9 05:35:07.948438 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 9 05:35:07.948447 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 05:35:07.948458 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 05:35:07.948466 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 05:35:07.948475 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 9 05:35:07.948484 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 05:35:07.948492 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 05:35:07.948501 kernel: MDS: Mitigation: Clear CPU buffers
Sep 9 05:35:07.948509 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 9 05:35:07.948520 kernel: active return thunk: its_return_thunk
Sep 9 05:35:07.948529 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 9 05:35:07.948538 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 05:35:07.948546 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 05:35:07.948555 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 05:35:07.948563 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 05:35:07.948572 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 9 05:35:07.948581 kernel: Freeing SMP alternatives memory: 32K
Sep 9 05:35:07.948589 kernel: pid_max: default: 32768 minimum: 301
Sep 9 05:35:07.948604 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 05:35:07.948625 kernel: landlock: Up and running.
Sep 9 05:35:07.948638 kernel: SELinux: Initializing.
Sep 9 05:35:07.948647 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 9 05:35:07.948656 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 9 05:35:07.948664 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 9 05:35:07.948673 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 9 05:35:07.948681 kernel: signal: max sigframe size: 1776
Sep 9 05:35:07.948693 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 05:35:07.948712 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 05:35:07.948724 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 05:35:07.948737 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 9 05:35:07.948762 kernel: smp: Bringing up secondary CPUs ...
Sep 9 05:35:07.948781 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 05:35:07.948806 kernel: .... node #0, CPUs: #1
Sep 9 05:35:07.948815 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 05:35:07.948824 kernel: smpboot: Total of 2 processors activated (7981.22 BogoMIPS)
Sep 9 05:35:07.948833 kernel: Memory: 1966916K/2096612K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 125140K reserved, 0K cma-reserved)
Sep 9 05:35:07.948845 kernel: devtmpfs: initialized
Sep 9 05:35:07.948854 kernel: x86/mm: Memory block size: 128MB
Sep 9 05:35:07.948862 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 05:35:07.948871 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 05:35:07.948880 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 05:35:07.948888 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 05:35:07.948897 kernel: audit: initializing netlink subsys (disabled)
Sep 9 05:35:07.948906 kernel: audit: type=2000 audit(1757396104.278:1): state=initialized audit_enabled=0 res=1
Sep 9 05:35:07.948914 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 05:35:07.948925 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 05:35:07.948934 kernel: cpuidle: using governor menu
Sep 9 05:35:07.948942 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 05:35:07.948951 kernel: dca service started, version 1.12.1
Sep 9 05:35:07.948959 kernel: PCI: Using configuration type 1 for base access
Sep 9 05:35:07.948968 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 05:35:07.948976 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 05:35:07.948985 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 05:35:07.948993 kernel: ACPI: Added _OSI(Module Device)
Sep 9 05:35:07.949005 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 05:35:07.949013 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 05:35:07.949022 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 05:35:07.949030 kernel: ACPI: Interpreter enabled
Sep 9 05:35:07.949039 kernel: ACPI: PM: (supports S0 S5)
Sep 9 05:35:07.949048 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 05:35:07.949056 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 05:35:07.949065 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 05:35:07.949073 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 9 05:35:07.949084 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 05:35:07.949376 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 05:35:07.949481 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 9 05:35:07.949593 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 9 05:35:07.949610 kernel: acpiphp: Slot [3] registered
Sep 9 05:35:07.949624 kernel: acpiphp: Slot [4] registered
Sep 9 05:35:07.949636 kernel: acpiphp: Slot [5] registered
Sep 9 05:35:07.949653 kernel: acpiphp: Slot [6] registered
Sep 9 05:35:07.949667 kernel: acpiphp: Slot [7] registered
Sep 9 05:35:07.949676 kernel: acpiphp: Slot [8] registered
Sep 9 05:35:07.949684 kernel: acpiphp: Slot [9] registered
Sep 9 05:35:07.949693 kernel: acpiphp: Slot [10] registered
Sep 9 05:35:07.949702 kernel: acpiphp: Slot [11] registered
Sep 9 05:35:07.949710 kernel: acpiphp: Slot [12] registered
Sep 9 05:35:07.949719 kernel: acpiphp: Slot [13] registered
Sep 9 05:35:07.949727 kernel: acpiphp: Slot [14] registered
Sep 9 05:35:07.949736 kernel: acpiphp: Slot [15] registered
Sep 9 05:35:07.949748 kernel: acpiphp: Slot [16] registered
Sep 9 05:35:07.949756 kernel: acpiphp: Slot [17] registered
Sep 9 05:35:07.949765 kernel: acpiphp: Slot [18] registered
Sep 9 05:35:07.949773 kernel: acpiphp: Slot [19] registered
Sep 9 05:35:07.949782 kernel: acpiphp: Slot [20] registered
Sep 9 05:35:07.951862 kernel: acpiphp: Slot [21] registered
Sep 9 05:35:07.951878 kernel: acpiphp: Slot [22] registered
Sep 9 05:35:07.951888 kernel: acpiphp: Slot [23] registered
Sep 9 05:35:07.951896 kernel: acpiphp: Slot [24] registered
Sep 9 05:35:07.951915 kernel: acpiphp: Slot [25] registered
Sep 9 05:35:07.951927 kernel: acpiphp: Slot [26] registered
Sep 9 05:35:07.951936 kernel: acpiphp: Slot [27] registered
Sep 9 05:35:07.951944 kernel: acpiphp: Slot [28] registered
Sep 9 05:35:07.951954 kernel: acpiphp: Slot [29] registered
Sep 9 05:35:07.951963 kernel: acpiphp: Slot [30] registered
Sep 9 05:35:07.951971 kernel: acpiphp: Slot [31] registered
Sep 9 05:35:07.951980 kernel: PCI host bridge to bus 0000:00
Sep 9 05:35:07.952195 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 05:35:07.952322 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 05:35:07.952424 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 05:35:07.952520 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 9 05:35:07.952604 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 9 05:35:07.952687 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 05:35:07.952874 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 9 05:35:07.953028 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 9 05:35:07.953187 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Sep 9 05:35:07.953314 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Sep 9 05:35:07.953410 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Sep 9 05:35:07.953501 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Sep 9 05:35:07.953593 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Sep 9 05:35:07.953714 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Sep 9 05:35:07.956013 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Sep 9 05:35:07.956210 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Sep 9 05:35:07.956381 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 9 05:35:07.956485 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 9 05:35:07.956580 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 9 05:35:07.956696 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Sep 9 05:35:07.958969 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Sep 9 05:35:07.959114 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 9 05:35:07.959244 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Sep 9 05:35:07.959381 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Sep 9 05:35:07.959523 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 05:35:07.959704 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 05:35:07.959884 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Sep 9 05:35:07.960029 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Sep 9 05:35:07.960169 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 9 05:35:07.960324 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 05:35:07.960470 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Sep 9 05:35:07.960582 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Sep 9 05:35:07.960696 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 9 05:35:07.960840 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Sep 9 05:35:07.960984 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Sep 9 05:35:07.961114 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Sep 9 05:35:07.961211 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 9 05:35:07.961325 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 05:35:07.961421 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Sep 9 05:35:07.961514 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Sep 9 05:35:07.961641 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 9 05:35:07.963755 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 05:35:07.963977 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Sep 9 05:35:07.964082 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Sep 9 05:35:07.964213 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 9 05:35:07.964335 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 05:35:07.964433 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Sep 9 05:35:07.964592 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 9 05:35:07.964608 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 05:35:07.964623 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 05:35:07.964636 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 05:35:07.964650 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 05:35:07.964663 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 9 05:35:07.964677 kernel: iommu: Default domain type: Translated
Sep 9 05:35:07.964686 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 05:35:07.964700 kernel: PCI: Using ACPI for IRQ routing
Sep 9 05:35:07.964709 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 05:35:07.964718 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 9 05:35:07.964728 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 9 05:35:07.964858 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 9 05:35:07.964992 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 9 05:35:07.965128 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 05:35:07.965150 kernel: vgaarb: loaded
Sep 9 05:35:07.965165 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 05:35:07.965184 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 05:35:07.965197 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 05:35:07.965209 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 05:35:07.965222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 05:35:07.965234 kernel: pnp: PnP ACPI init
Sep 9 05:35:07.965246 kernel: pnp: PnP ACPI: found 4 devices
Sep 9 05:35:07.965259 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 05:35:07.965271 kernel: NET: Registered PF_INET protocol family
Sep 9 05:35:07.965284 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 05:35:07.965300 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 9 05:35:07.965314 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 05:35:07.965327 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 9 05:35:07.965343 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 9 05:35:07.965356 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 9 05:35:07.965369 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 9 05:35:07.965382 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 9 05:35:07.965396 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 05:35:07.965405 kernel: NET: Registered PF_XDP protocol family
Sep 9 05:35:07.965539 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 05:35:07.965648 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 05:35:07.970450 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 05:35:07.970643 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 9 05:35:07.970753 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 9 05:35:07.970898 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 9 05:35:07.971007 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 9 05:35:07.971033 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 9 05:35:07.971177 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 29969 usecs
Sep 9 05:35:07.971196 kernel: PCI: CLS 0 bytes, default 64
Sep 9 05:35:07.971210 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 9 05:35:07.971224 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985ba32100, max_idle_ns: 881590654722 ns
Sep 9 05:35:07.971239 kernel: Initialise system trusted keyrings
Sep 9 05:35:07.971249 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 9 05:35:07.971258 kernel: Key type asymmetric registered
Sep 9 05:35:07.971267 kernel: Asymmetric key parser 'x509' registered
Sep 9 05:35:07.971280 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 05:35:07.971290 kernel: io scheduler mq-deadline registered
Sep 9 05:35:07.971299 kernel: io scheduler kyber registered
Sep 9 05:35:07.971308 kernel: io scheduler bfq registered
Sep 9 05:35:07.971317 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 05:35:07.971327 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 9 05:35:07.971336 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 9 05:35:07.971345 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 9 05:35:07.971354 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 05:35:07.971366 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 05:35:07.971374 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 05:35:07.971383 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 05:35:07.971392 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 05:35:07.971401 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 05:35:07.971552 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 9 05:35:07.971663 kernel: rtc_cmos 00:03: registered as rtc0
Sep 9 05:35:07.971752 kernel: rtc_cmos 00:03: setting system clock to 2025-09-09T05:35:07 UTC (1757396107)
Sep 9 05:35:07.971874 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 9 05:35:07.971887 kernel: intel_pstate: CPU model not supported
Sep 9 05:35:07.971896 kernel: NET: Registered PF_INET6 protocol family
Sep 9 05:35:07.971904 kernel: Segment Routing with IPv6
Sep 9 05:35:07.971917 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 05:35:07.971930 kernel: NET: Registered PF_PACKET protocol family
Sep 9 05:35:07.971939 kernel: Key type dns_resolver registered
Sep 9 05:35:07.971948 kernel: IPI shorthand broadcast: enabled
Sep 9 05:35:07.971957 kernel: sched_clock: Marking stable (4325009685, 174483905)->(4528549034, -29055444)
Sep 9 05:35:07.971970 kernel: registered taskstats version 1
Sep 9 05:35:07.971978 kernel: Loading compiled-in X.509 certificates
Sep 9 05:35:07.971987 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 884b9ad6a330f59ae6e6488b20a5491e41ff24a3'
Sep 9 05:35:07.971996 kernel: Demotion targets for Node 0: null
Sep 9 05:35:07.972004 kernel: Key type .fscrypt registered
Sep 9 05:35:07.972013 kernel: Key type fscrypt-provisioning registered
Sep 9 05:35:07.972042 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 05:35:07.972054 kernel: ima: Allocated hash algorithm: sha1
Sep 9 05:35:07.972063 kernel: ima: No architecture policies found
Sep 9 05:35:07.972074 kernel: clk: Disabling unused clocks
Sep 9 05:35:07.972083 kernel: Warning: unable to open an initial console.
Sep 9 05:35:07.972092 kernel: Freeing unused kernel image (initmem) memory: 54076K
Sep 9 05:35:07.972101 kernel: Write protecting the kernel read-only data: 24576k
Sep 9 05:35:07.972110 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 9 05:35:07.972119 kernel: Run /init as init process
Sep 9 05:35:07.972128 kernel: with arguments:
Sep 9 05:35:07.972137 kernel: /init
Sep 9 05:35:07.972146 kernel: with environment:
Sep 9 05:35:07.972157 kernel: HOME=/
Sep 9 05:35:07.972165 kernel: TERM=linux
Sep 9 05:35:07.972174 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 05:35:07.972185 systemd[1]: Successfully made /usr/ read-only.
Sep 9 05:35:07.972198 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:35:07.972208 systemd[1]: Detected virtualization kvm.
Sep 9 05:35:07.972218 systemd[1]: Detected architecture x86-64.
Sep 9 05:35:07.972229 systemd[1]: Running in initrd.
Sep 9 05:35:07.972238 systemd[1]: No hostname configured, using default hostname.
Sep 9 05:35:07.972247 systemd[1]: Hostname set to .
Sep 9 05:35:07.972256 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:35:07.972265 systemd[1]: Queued start job for default target initrd.target.
Sep 9 05:35:07.972275 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:35:07.972284 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:35:07.972295 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 05:35:07.972307 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:35:07.972316 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 05:35:07.972329 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 05:35:07.972340 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 05:35:07.972352 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 05:35:07.972361 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:35:07.972371 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:35:07.972380 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:35:07.972396 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:35:07.972412 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:35:07.972426 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:35:07.972439 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:35:07.972448 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:35:07.972461 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 05:35:07.972470 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 05:35:07.972480 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:35:07.972489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:35:07.972498 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:35:07.972507 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:35:07.972517 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 05:35:07.972526 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 05:35:07.972538 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 05:35:07.972547 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 05:35:07.972556 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 05:35:07.972566 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:35:07.972575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 05:35:07.972584 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:07.972598 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 05:35:07.972616 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:35:07.972668 systemd-journald[212]: Collecting audit messages is disabled. Sep 9 05:35:07.972702 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 05:35:07.972714 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Sep 9 05:35:07.972725 systemd-journald[212]: Journal started Sep 9 05:35:07.972748 systemd-journald[212]: Runtime Journal (/run/log/journal/e799494bacdf40c5ac12396ddd4aad5e) is 4.9M, max 39.5M, 34.6M free. Sep 9 05:35:07.980497 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:35:07.981046 systemd-modules-load[213]: Inserted module 'overlay' Sep 9 05:35:07.990419 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:35:08.048908 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 05:35:08.048944 kernel: Bridge firewalling registered Sep 9 05:35:08.025035 systemd-modules-load[213]: Inserted module 'br_netfilter' Sep 9 05:35:08.050039 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:35:08.057743 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:08.059064 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:35:08.064964 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 05:35:08.066352 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 05:35:08.069972 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:35:08.075074 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:35:08.076852 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:35:08.098931 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:35:08.105320 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:35:08.106896 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 9 05:35:08.113343 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:35:08.123029 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 05:35:08.154202 systemd-resolved[246]: Positive Trust Anchors: Sep 9 05:35:08.154938 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:35:08.154974 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:35:08.161297 systemd-resolved[246]: Defaulting to hostname 'linux'. Sep 9 05:35:08.164015 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:35:08.164940 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:35:08.167511 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104 Sep 9 05:35:08.298848 kernel: SCSI subsystem initialized Sep 9 05:35:08.314904 kernel: Loading iSCSI transport class v2.0-870. 
Sep 9 05:35:08.333844 kernel: iscsi: registered transport (tcp) Sep 9 05:35:08.363919 kernel: iscsi: registered transport (qla4xxx) Sep 9 05:35:08.364013 kernel: QLogic iSCSI HBA Driver Sep 9 05:35:08.390452 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:35:08.414427 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:35:08.415599 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:35:08.486455 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 05:35:08.489898 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 05:35:08.549878 kernel: raid6: avx2x4 gen() 23332 MB/s Sep 9 05:35:08.566873 kernel: raid6: avx2x2 gen() 25746 MB/s Sep 9 05:35:08.584017 kernel: raid6: avx2x1 gen() 15143 MB/s Sep 9 05:35:08.584125 kernel: raid6: using algorithm avx2x2 gen() 25746 MB/s Sep 9 05:35:08.602840 kernel: raid6: .... xor() 17269 MB/s, rmw enabled Sep 9 05:35:08.602928 kernel: raid6: using avx2x2 recovery algorithm Sep 9 05:35:08.627852 kernel: xor: automatically using best checksumming function avx Sep 9 05:35:08.829862 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 05:35:08.839369 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:35:08.843299 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:35:08.880072 systemd-udevd[460]: Using default interface naming scheme 'v255'. Sep 9 05:35:08.886474 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:35:08.890663 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 05:35:08.921025 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Sep 9 05:35:08.957185 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 9 05:35:08.961271 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:35:09.026693 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:35:09.033703 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 05:35:09.114857 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Sep 9 05:35:09.122833 kernel: scsi host0: Virtio SCSI HBA Sep 9 05:35:09.164874 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 05:35:09.166828 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Sep 9 05:35:09.170821 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 05:35:09.182864 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 9 05:35:09.202722 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:35:09.204944 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 05:35:09.204978 kernel: GPT:9289727 != 125829119 Sep 9 05:35:09.204990 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 05:35:09.205001 kernel: GPT:9289727 != 125829119 Sep 9 05:35:09.205012 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 05:35:09.205023 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:35:09.203665 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:09.215946 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Sep 9 05:35:09.214261 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:09.220213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:09.223838 kernel: AES CTR mode by8 optimization enabled Sep 9 05:35:09.224361 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 9 05:35:09.228108 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Sep 9 05:35:09.243516 kernel: ACPI: bus type USB registered Sep 9 05:35:09.260723 kernel: usbcore: registered new interface driver usbfs Sep 9 05:35:09.260814 kernel: usbcore: registered new interface driver hub Sep 9 05:35:09.262816 kernel: usbcore: registered new device driver usb Sep 9 05:35:09.268887 kernel: libata version 3.00 loaded. Sep 9 05:35:09.280761 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 9 05:35:09.289880 kernel: scsi host1: ata_piix Sep 9 05:35:09.299833 kernel: scsi host2: ata_piix Sep 9 05:35:09.300152 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Sep 9 05:35:09.300176 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Sep 9 05:35:09.341861 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 9 05:35:09.343869 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 9 05:35:09.344100 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 9 05:35:09.344265 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Sep 9 05:35:09.344442 kernel: hub 1-0:1.0: USB hub found Sep 9 05:35:09.344678 kernel: hub 1-0:1.0: 2 ports detected Sep 9 05:35:09.347591 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 05:35:09.389116 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 05:35:09.390374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:09.401972 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 05:35:09.420435 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 05:35:09.421131 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Sep 9 05:35:09.429999 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 05:35:09.461426 disk-uuid[616]: Primary Header is updated. Sep 9 05:35:09.461426 disk-uuid[616]: Secondary Entries is updated. Sep 9 05:35:09.461426 disk-uuid[616]: Secondary Header is updated. Sep 9 05:35:09.465382 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:35:09.474744 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 05:35:09.479156 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:35:09.483189 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:35:09.485911 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:35:09.490048 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 05:35:09.533881 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:35:10.476870 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:35:10.477125 disk-uuid[617]: The operation has completed successfully. Sep 9 05:35:10.542773 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 05:35:10.542924 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 05:35:10.577886 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 05:35:10.611850 sh[641]: Success Sep 9 05:35:10.637029 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 05:35:10.637136 kernel: device-mapper: uevent: version 1.0.3 Sep 9 05:35:10.639822 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 05:35:10.653717 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 9 05:35:10.695914 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Sep 9 05:35:10.700922 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 05:35:10.711844 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 05:35:10.722839 kernel: BTRFS: device fsid 9ca60a92-6b53-4529-adc0-1f4392d2ad56 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (653) Sep 9 05:35:10.726145 kernel: BTRFS info (device dm-0): first mount of filesystem 9ca60a92-6b53-4529-adc0-1f4392d2ad56 Sep 9 05:35:10.726253 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:10.734193 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 05:35:10.734283 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 05:35:10.736911 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 05:35:10.737942 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:35:10.738829 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 05:35:10.740932 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 05:35:10.743929 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 05:35:10.787508 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (686) Sep 9 05:35:10.787585 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:10.789115 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:10.795520 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:35:10.795619 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:35:10.803833 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:10.805705 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 9 05:35:10.807693 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 05:35:10.944610 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:35:10.948863 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:35:11.038976 ignition[732]: Ignition 2.22.0 Sep 9 05:35:11.038991 ignition[732]: Stage: fetch-offline Sep 9 05:35:11.042253 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:35:11.039027 ignition[732]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:11.039036 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:11.039130 ignition[732]: parsed url from cmdline: "" Sep 9 05:35:11.039133 ignition[732]: no config URL provided Sep 9 05:35:11.039138 ignition[732]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:35:11.039146 ignition[732]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:35:11.039151 ignition[732]: failed to fetch config: resource requires networking Sep 9 05:35:11.040258 ignition[732]: Ignition finished successfully Sep 9 05:35:11.052494 systemd-networkd[827]: lo: Link UP Sep 9 05:35:11.052507 systemd-networkd[827]: lo: Gained carrier Sep 9 05:35:11.055627 systemd-networkd[827]: Enumeration completed Sep 9 05:35:11.055871 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:35:11.056366 systemd-networkd[827]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 9 05:35:11.056371 systemd-networkd[827]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Sep 9 05:35:11.057746 systemd[1]: Reached target network.target - Network. Sep 9 05:35:11.058215 systemd-networkd[827]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 05:35:11.058221 systemd-networkd[827]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:35:11.059359 systemd-networkd[827]: eth0: Link UP Sep 9 05:35:11.060022 systemd-networkd[827]: eth1: Link UP Sep 9 05:35:11.060747 systemd-networkd[827]: eth0: Gained carrier Sep 9 05:35:11.060763 systemd-networkd[827]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 9 05:35:11.061811 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 9 05:35:11.067018 systemd-networkd[827]: eth1: Gained carrier Sep 9 05:35:11.067040 systemd-networkd[827]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:35:11.082942 systemd-networkd[827]: eth0: DHCPv4 address 143.198.157.2/20, gateway 143.198.144.1 acquired from 169.254.169.253 Sep 9 05:35:11.092979 systemd-networkd[827]: eth1: DHCPv4 address 10.124.0.25/20 acquired from 169.254.169.253 Sep 9 05:35:11.127558 ignition[836]: Ignition 2.22.0 Sep 9 05:35:11.128613 ignition[836]: Stage: fetch Sep 9 05:35:11.128967 ignition[836]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:11.128985 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:11.129148 ignition[836]: parsed url from cmdline: "" Sep 9 05:35:11.129154 ignition[836]: no config URL provided Sep 9 05:35:11.129163 ignition[836]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:35:11.129177 ignition[836]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:35:11.129226 ignition[836]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 9 05:35:11.152100 ignition[836]: GET result: OK Sep 9 05:35:11.152371 ignition[836]: parsing config with SHA512: 3d2d6923aef00fc1f8eb9dad08d28145b8f1ce9dc341074f01febd66c02a18110b4fb3ffbd84519843ebc7354f258bb1bab9b8ccbeae146c50fbd39fd07af164 Sep 9 05:35:11.164762 
unknown[836]: fetched base config from "system" Sep 9 05:35:11.164781 unknown[836]: fetched base config from "system" Sep 9 05:35:11.166507 ignition[836]: fetch: fetch complete Sep 9 05:35:11.165710 unknown[836]: fetched user config from "digitalocean" Sep 9 05:35:11.166517 ignition[836]: fetch: fetch passed Sep 9 05:35:11.166611 ignition[836]: Ignition finished successfully Sep 9 05:35:11.171657 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 9 05:35:11.175034 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 05:35:11.241665 ignition[843]: Ignition 2.22.0 Sep 9 05:35:11.244623 ignition[843]: Stage: kargs Sep 9 05:35:11.247228 ignition[843]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:11.247258 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:11.248640 ignition[843]: kargs: kargs passed Sep 9 05:35:11.248733 ignition[843]: Ignition finished successfully Sep 9 05:35:11.254865 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 05:35:11.258554 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 05:35:11.303871 ignition[849]: Ignition 2.22.0 Sep 9 05:35:11.304728 ignition[849]: Stage: disks Sep 9 05:35:11.304982 ignition[849]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:11.304995 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:11.311008 ignition[849]: disks: disks passed Sep 9 05:35:11.311095 ignition[849]: Ignition finished successfully Sep 9 05:35:11.312668 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 05:35:11.314001 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 05:35:11.315451 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 05:35:11.316237 systemd[1]: Reached target local-fs.target - Local File Systems. 
Sep 9 05:35:11.317725 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:35:11.320128 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:35:11.324517 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 05:35:11.361045 systemd-fsck[857]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 05:35:11.365424 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 05:35:11.369312 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 05:35:11.533833 kernel: EXT4-fs (vda9): mounted filesystem d2d7815e-fa16-4396-ab9d-ac540c1d8856 r/w with ordered data mode. Quota mode: none. Sep 9 05:35:11.535178 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 05:35:11.536638 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 05:35:11.539539 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:35:11.542834 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 05:35:11.551197 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Sep 9 05:35:11.554253 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 9 05:35:11.556080 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 05:35:11.556206 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:35:11.564402 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Sep 9 05:35:11.566153 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Sep 9 05:35:11.571644 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:11.571747 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:11.576874 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:35:11.576965 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:35:11.582059 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 05:35:11.594341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:35:11.669918 coreos-metadata[867]: Sep 09 05:35:11.669 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 9 05:35:11.678726 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 05:35:11.680931 coreos-metadata[868]: Sep 09 05:35:11.678 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 9 05:35:11.684205 coreos-metadata[867]: Sep 09 05:35:11.684 INFO Fetch successful Sep 9 05:35:11.690970 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory Sep 9 05:35:11.693831 coreos-metadata[868]: Sep 09 05:35:11.693 INFO Fetch successful Sep 9 05:35:11.699273 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Sep 9 05:35:11.701169 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Sep 9 05:35:11.704001 coreos-metadata[868]: Sep 09 05:35:11.701 INFO wrote hostname ci-4452.0.0-n-58b1c71666 to /sysroot/etc/hostname Sep 9 05:35:11.706001 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 05:35:11.706465 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Sep 9 05:35:11.714323 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 05:35:11.845025 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 05:35:11.848027 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 05:35:11.850029 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 05:35:11.872400 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 05:35:11.874032 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:11.896551 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 05:35:11.929831 ignition[986]: INFO : Ignition 2.22.0 Sep 9 05:35:11.929831 ignition[986]: INFO : Stage: mount Sep 9 05:35:11.929831 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:11.929831 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:11.932503 ignition[986]: INFO : mount: mount passed Sep 9 05:35:11.932503 ignition[986]: INFO : Ignition finished successfully Sep 9 05:35:11.933091 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 05:35:11.935933 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 05:35:11.960067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:35:11.997726 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (998) Sep 9 05:35:11.997841 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:11.997862 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:12.004146 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:35:12.004230 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:35:12.008256 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 05:35:12.061629 ignition[1015]: INFO : Ignition 2.22.0 Sep 9 05:35:12.061629 ignition[1015]: INFO : Stage: files Sep 9 05:35:12.063102 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:12.063102 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:12.065081 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping Sep 9 05:35:12.067093 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 05:35:12.067093 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 05:35:12.071546 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 05:35:12.072563 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 05:35:12.072563 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 05:35:12.072291 unknown[1015]: wrote ssh authorized keys file for user: core Sep 9 05:35:12.075612 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 05:35:12.075612 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 9 05:35:12.125340 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 05:35:12.279856 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 05:35:12.279856 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 05:35:12.279856 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 
05:35:12.279856 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:35:12.279856 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:35:12.279856 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:35:12.293446 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:35:12.293446 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:35:12.293446 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:35:12.293446 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:35:12.293446 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:35:12.293446 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:35:12.293446 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:35:12.293446 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:35:12.293446 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 9 05:35:12.347173 systemd-networkd[827]: eth1: Gained IPv6LL Sep 9 05:35:12.652004 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 05:35:13.115040 systemd-networkd[827]: eth0: Gained IPv6LL Sep 9 05:35:14.748864 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:35:14.750612 ignition[1015]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 05:35:14.751424 ignition[1015]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:35:14.754224 ignition[1015]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:35:14.754224 ignition[1015]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 05:35:14.754224 ignition[1015]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 9 05:35:14.758254 ignition[1015]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 05:35:14.758254 ignition[1015]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:35:14.758254 ignition[1015]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:35:14.758254 ignition[1015]: INFO : files: files passed Sep 9 05:35:14.758254 ignition[1015]: INFO : Ignition finished successfully Sep 9 05:35:14.759700 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 05:35:14.765613 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Sep 9 05:35:14.767609 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 05:35:14.785973 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 05:35:14.786176 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 05:35:14.797340 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:35:14.797340 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:35:14.799970 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:35:14.800647 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:35:14.801837 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 05:35:14.803747 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 05:35:14.865513 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 05:35:14.865672 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 05:35:14.867404 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 05:35:14.868395 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 05:35:14.870029 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 05:35:14.871265 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 05:35:14.902265 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:35:14.904693 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 05:35:14.937124 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:35:14.938718 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:35:14.940408 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 05:35:14.941867 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 05:35:14.942668 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:35:14.944486 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 05:35:14.945233 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 05:35:14.946723 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 05:35:14.947815 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 05:35:14.949123 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 05:35:14.950490 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:35:14.951736 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 05:35:14.952962 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:35:14.954328 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 05:35:14.955877 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 05:35:14.957080 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 05:35:14.958186 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 05:35:14.958429 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:35:14.959850 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:35:14.960538 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:35:14.961755 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 05:35:14.961939 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:35:14.963161 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 05:35:14.963426 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:35:14.964969 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 05:35:14.965265 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:35:14.967009 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 05:35:14.967165 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 05:35:14.968215 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 9 05:35:14.968424 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 05:35:14.970902 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 05:35:14.975654 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 05:35:14.977744 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 05:35:14.978689 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:35:14.980354 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 05:35:14.980498 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:35:14.997644 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 05:35:15.001020 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 05:35:15.023189 ignition[1069]: INFO : Ignition 2.22.0
Sep 9 05:35:15.023189 ignition[1069]: INFO : Stage: umount
Sep 9 05:35:15.025488 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:35:15.025488 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 9 05:35:15.025488 ignition[1069]: INFO : umount: umount passed
Sep 9 05:35:15.025488 ignition[1069]: INFO : Ignition finished successfully
Sep 9 05:35:15.026300 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 05:35:15.026435 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 05:35:15.032822 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 05:35:15.034249 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 05:35:15.034401 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 05:35:15.035729 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 05:35:15.035863 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 05:35:15.037927 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 05:35:15.037987 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 05:35:15.039035 systemd[1]: Stopped target network.target - Network.
Sep 9 05:35:15.040867 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 05:35:15.040975 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:35:15.042476 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 05:35:15.043958 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 05:35:15.044074 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:35:15.049372 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 05:35:15.053222 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 05:35:15.053906 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 05:35:15.053965 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:35:15.054528 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 05:35:15.054568 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:35:15.059090 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 05:35:15.059204 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 05:35:15.060084 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 05:35:15.060168 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 05:35:15.063204 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 05:35:15.064777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 05:35:15.074034 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 05:35:15.074250 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 05:35:15.082497 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 05:35:15.083072 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 05:35:15.083151 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:35:15.086300 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:35:15.086638 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 05:35:15.087984 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 05:35:15.093242 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 05:35:15.096696 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 05:35:15.098857 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 05:35:15.098933 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:35:15.104614 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 05:35:15.107733 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 05:35:15.107872 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:35:15.111069 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 05:35:15.111151 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:35:15.119169 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 05:35:15.119285 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:35:15.121504 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:35:15.125309 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 05:35:15.133251 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 05:35:15.133389 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 05:35:15.138255 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 05:35:15.138403 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 05:35:15.145455 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 05:35:15.147110 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:35:15.149097 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 05:35:15.149239 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:35:15.150750 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 05:35:15.150991 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:35:15.152214 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 05:35:15.152315 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:35:15.154768 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 05:35:15.154977 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:35:15.156708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 05:35:15.156848 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:35:15.160174 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 05:35:15.162317 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 05:35:15.162429 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:35:15.165988 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 05:35:15.166093 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:35:15.168945 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 05:35:15.169014 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:35:15.171522 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 05:35:15.171598 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:35:15.172356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:35:15.172416 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:35:15.175511 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 05:35:15.175650 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 05:35:15.184149 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 05:35:15.184288 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 05:35:15.185718 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 05:35:15.187784 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 05:35:15.209588 systemd[1]: Switching root.
Sep 9 05:35:15.254777 systemd-journald[212]: Journal stopped
Sep 9 05:35:16.610020 systemd-journald[212]: Received SIGTERM from PID 1 (systemd).
Sep 9 05:35:16.610121 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 05:35:16.610138 kernel: SELinux: policy capability open_perms=1
Sep 9 05:35:16.610149 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 05:35:16.610165 kernel: SELinux: policy capability always_check_network=0
Sep 9 05:35:16.610183 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 05:35:16.610195 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 05:35:16.610207 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 05:35:16.610218 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 05:35:16.610229 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 05:35:16.610241 kernel: audit: type=1403 audit(1757396115.445:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 05:35:16.610259 systemd[1]: Successfully loaded SELinux policy in 78.735ms.
Sep 9 05:35:16.610287 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.451ms.
Sep 9 05:35:16.610303 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:35:16.610316 systemd[1]: Detected virtualization kvm.
Sep 9 05:35:16.610330 systemd[1]: Detected architecture x86-64.
Sep 9 05:35:16.610341 systemd[1]: Detected first boot.
Sep 9 05:35:16.610354 systemd[1]: Hostname set to .
Sep 9 05:35:16.610367 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:35:16.610383 zram_generator::config[1113]: No configuration found.
Sep 9 05:35:16.610396 kernel: Guest personality initialized and is inactive
Sep 9 05:35:16.610410 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 05:35:16.610421 kernel: Initialized host personality
Sep 9 05:35:16.610432 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 05:35:16.610443 systemd[1]: Populated /etc with preset unit settings.
Sep 9 05:35:16.610456 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 05:35:16.610468 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 05:35:16.610480 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 05:35:16.610491 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:35:16.610506 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 05:35:16.610518 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 05:35:16.610530 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 05:35:16.610541 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 05:35:16.610553 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 05:35:16.610565 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 05:35:16.610577 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 05:35:16.610590 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 05:35:16.610602 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:35:16.610616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:35:16.610628 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 05:35:16.610646 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 05:35:16.610658 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 05:35:16.610670 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:35:16.610682 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 05:35:16.610695 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:35:16.610707 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:35:16.610719 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 05:35:16.610732 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 05:35:16.610743 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:35:16.610755 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 05:35:16.610767 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:35:16.610778 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:35:16.612849 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:35:16.612896 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:35:16.612908 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 05:35:16.612921 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 05:35:16.612934 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 05:35:16.612945 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:35:16.612957 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:35:16.612969 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:35:16.612981 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 05:35:16.612992 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 05:35:16.613004 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 05:35:16.613019 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 05:35:16.613034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:16.613052 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 05:35:16.613064 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 05:35:16.613075 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 05:35:16.613088 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 05:35:16.613100 systemd[1]: Reached target machines.target - Containers.
Sep 9 05:35:16.613111 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 05:35:16.613126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:35:16.613138 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:35:16.613150 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 05:35:16.613163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:35:16.613174 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:35:16.613186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:35:16.613198 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 05:35:16.613209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:35:16.613224 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 05:35:16.613236 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 05:35:16.613247 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 05:35:16.613258 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 05:35:16.613270 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 05:35:16.613283 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:35:16.613297 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:35:16.613309 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:35:16.613324 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:35:16.613337 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 05:35:16.613348 kernel: loop: module loaded
Sep 9 05:35:16.613362 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 05:35:16.613373 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:35:16.613388 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 05:35:16.613399 systemd[1]: Stopped verity-setup.service.
Sep 9 05:35:16.613411 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:16.613424 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 05:35:16.613436 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 05:35:16.613447 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 05:35:16.613461 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 05:35:16.613472 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 05:35:16.613485 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 05:35:16.613496 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:35:16.613507 kernel: ACPI: bus type drm_connector registered
Sep 9 05:35:16.613520 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 05:35:16.613532 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 05:35:16.613544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:35:16.613556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:35:16.613570 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:35:16.613585 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:35:16.613597 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:35:16.613609 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:35:16.613620 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:35:16.613633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:35:16.613644 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 05:35:16.613656 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:35:16.613671 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 05:35:16.613684 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:35:16.613695 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:35:16.613707 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 05:35:16.613721 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 05:35:16.613733 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 05:35:16.613745 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:35:16.613757 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 05:35:16.613768 kernel: fuse: init (API version 7.41)
Sep 9 05:35:16.613779 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 05:35:16.613813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:35:16.613826 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 05:35:16.613839 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:35:16.613892 systemd-journald[1183]: Collecting audit messages is disabled.
Sep 9 05:35:16.613920 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 05:35:16.613934 systemd-journald[1183]: Journal started
Sep 9 05:35:16.613962 systemd-journald[1183]: Runtime Journal (/run/log/journal/e799494bacdf40c5ac12396ddd4aad5e) is 4.9M, max 39.5M, 34.6M free.
Sep 9 05:35:16.124114 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 05:35:16.147650 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 05:35:16.148261 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 05:35:16.607710 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Sep 9 05:35:16.607724 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Sep 9 05:35:16.626951 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:35:16.636844 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 05:35:16.641826 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:35:16.645529 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 05:35:16.645939 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 05:35:16.648253 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:35:16.649446 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 05:35:16.651530 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:35:16.658910 kernel: loop0: detected capacity change from 0 to 221472
Sep 9 05:35:16.691330 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:35:16.698224 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 05:35:16.709929 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 05:35:16.714261 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 05:35:16.724412 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 05:35:16.724312 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 05:35:16.733875 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 05:35:16.742448 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 05:35:16.752405 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:35:16.765816 kernel: loop1: detected capacity change from 0 to 8
Sep 9 05:35:16.771670 systemd-journald[1183]: Time spent on flushing to /var/log/journal/e799494bacdf40c5ac12396ddd4aad5e is 52.414ms for 1020 entries.
Sep 9 05:35:16.771670 systemd-journald[1183]: System Journal (/var/log/journal/e799494bacdf40c5ac12396ddd4aad5e) is 8M, max 195.6M, 187.6M free.
Sep 9 05:35:16.835741 systemd-journald[1183]: Received client request to flush runtime journal.
Sep 9 05:35:16.835912 kernel: loop2: detected capacity change from 0 to 128016
Sep 9 05:35:16.822332 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 05:35:16.839635 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 05:35:16.856304 kernel: loop3: detected capacity change from 0 to 110984
Sep 9 05:35:16.862512 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 05:35:16.869040 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:35:16.891955 kernel: loop4: detected capacity change from 0 to 221472
Sep 9 05:35:16.900852 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Sep 9 05:35:16.901406 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Sep 9 05:35:16.906438 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:35:16.942510 kernel: loop5: detected capacity change from 0 to 8
Sep 9 05:35:16.951018 kernel: loop6: detected capacity change from 0 to 128016
Sep 9 05:35:16.985651 kernel: loop7: detected capacity change from 0 to 110984
Sep 9 05:35:17.036565 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 9 05:35:17.037383 (sd-merge)[1261]: Merged extensions into '/usr'.
Sep 9 05:35:17.050512 systemd[1]: Reload requested from client PID 1217 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 05:35:17.050547 systemd[1]: Reloading...
Sep 9 05:35:17.236079 zram_generator::config[1286]: No configuration found.
Sep 9 05:35:17.505654 ldconfig[1209]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 05:35:17.700646 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 05:35:17.701405 systemd[1]: Reloading finished in 649 ms.
Sep 9 05:35:17.735574 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 05:35:17.737124 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 05:35:17.746974 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 05:35:17.764220 systemd[1]: Starting ensure-sysext.service...
Sep 9 05:35:17.772218 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:35:17.787591 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 05:35:17.796614 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 05:35:17.799366 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:35:17.813341 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)...
Sep 9 05:35:17.813360 systemd[1]: Reloading...
Sep 9 05:35:17.829261 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 05:35:17.829331 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 05:35:17.829699 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 05:35:17.831414 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 05:35:17.834558 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 05:35:17.835663 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Sep 9 05:35:17.835726 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Sep 9 05:35:17.850735 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:35:17.850755 systemd-tmpfiles[1334]: Skipping /boot Sep 9 05:35:17.853096 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Sep 9 05:35:17.882049 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:35:17.882069 systemd-tmpfiles[1334]: Skipping /boot Sep 9 05:35:17.988470 zram_generator::config[1389]: No configuration found. Sep 9 05:35:18.261874 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 05:35:18.264832 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 05:35:18.267832 kernel: ACPI: button: Power Button [PWRF] Sep 9 05:35:18.301257 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 9 05:35:18.301667 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 05:35:18.373992 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Sep 9 05:35:18.374428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 05:35:18.375222 systemd[1]: Reloading finished in 561 ms. Sep 9 05:35:18.384316 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:35:18.395148 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:35:18.451025 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Sep 9 05:35:18.451589 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:18.455912 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:35:18.459570 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 05:35:18.461276 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:35:18.463321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:35:18.468097 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:35:18.472066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:35:18.474065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:35:18.475970 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 05:35:18.477880 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:35:18.485887 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Sep 9 05:35:18.491192 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:35:18.497006 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:35:18.503892 kernel: ISO 9660 Extensions: RRIP_1991A Sep 9 05:35:18.508183 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 05:35:18.508831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:18.515408 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Sep 9 05:35:18.524180 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:18.524424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:35:18.546300 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:35:18.547390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:35:18.547533 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:35:18.547682 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:18.561776 systemd[1]: Finished ensure-sysext.service. Sep 9 05:35:18.570459 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 05:35:18.575132 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 05:35:18.576574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 9 05:35:18.577885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:35:18.579101 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 05:35:18.592651 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 05:35:18.605196 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:35:18.607303 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:35:18.608377 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 05:35:18.609620 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:35:18.613092 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 05:35:18.625358 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:35:18.626023 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:35:18.627212 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:35:18.636962 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:35:18.637224 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:35:18.655932 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 05:35:18.657398 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 05:35:18.693838 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 05:35:18.729563 augenrules[1504]: No rules Sep 9 05:35:18.730241 systemd[1]: audit-rules.service: Deactivated successfully. 
Sep 9 05:35:18.730508 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:35:18.807035 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Sep 9 05:35:18.807111 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Sep 9 05:35:18.810316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:18.816829 kernel: Console: switching to colour dummy device 80x25 Sep 9 05:35:18.816912 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 9 05:35:18.816958 kernel: [drm] features: -context_init Sep 9 05:35:18.816972 kernel: [drm] number of scanouts: 1 Sep 9 05:35:18.816985 kernel: [drm] number of cap sets: 0 Sep 9 05:35:18.815248 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 05:35:18.821842 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Sep 9 05:35:18.827105 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 9 05:35:18.827205 kernel: Console: switching to colour frame buffer device 128x48 Sep 9 05:35:18.836864 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 9 05:35:18.866110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:35:18.866429 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:18.873266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:18.937331 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:35:18.937609 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:18.941555 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:35:18.943255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 05:35:19.045783 systemd-networkd[1466]: lo: Link UP Sep 9 05:35:19.047951 systemd-networkd[1466]: lo: Gained carrier Sep 9 05:35:19.062602 systemd-networkd[1466]: Enumeration completed Sep 9 05:35:19.063106 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:35:19.064259 systemd-networkd[1466]: eth0: Configuring with /run/systemd/network/10-a2:75:68:02:7a:cf.network. Sep 9 05:35:19.066492 systemd-networkd[1466]: eth1: Configuring with /run/systemd/network/10-d2:aa:01:2a:0a:2d.network. Sep 9 05:35:19.067134 systemd-networkd[1466]: eth0: Link UP Sep 9 05:35:19.067332 systemd-networkd[1466]: eth0: Gained carrier Sep 9 05:35:19.068127 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 05:35:19.072596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 05:35:19.076296 systemd-networkd[1466]: eth1: Link UP Sep 9 05:35:19.076961 systemd-resolved[1467]: Positive Trust Anchors: Sep 9 05:35:19.079370 systemd-resolved[1467]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:35:19.079429 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:35:19.079973 systemd-networkd[1466]: eth1: Gained carrier Sep 9 05:35:19.092517 kernel: EDAC MC: Ver: 3.0.0 Sep 9 05:35:19.097836 systemd-resolved[1467]: Using system hostname 'ci-4452.0.0-n-58b1c71666'. 
Sep 9 05:35:19.101541 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:35:19.102490 systemd[1]: Reached target network.target - Network. Sep 9 05:35:19.102558 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:35:19.109921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:19.112872 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 05:35:19.113554 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:35:19.113772 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 05:35:19.113891 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 05:35:19.113964 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 05:35:19.114062 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 05:35:19.114130 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 05:35:19.114162 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:35:19.114215 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 05:35:19.114435 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 05:35:19.116421 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 05:35:19.119410 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:35:19.122958 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 05:35:19.125819 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 05:35:19.131252 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Sep 9 05:35:19.134632 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 05:35:19.136438 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 05:35:19.145357 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 05:35:19.148689 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 05:35:19.152328 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 05:35:19.154767 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 05:35:19.160712 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:35:19.161666 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:35:19.164848 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:35:19.164905 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:35:19.167267 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 05:35:19.174076 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 9 05:35:19.180429 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 05:35:19.187126 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 05:35:19.191927 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 05:35:19.196536 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 05:35:19.198869 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 05:35:19.207358 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Sep 9 05:35:19.213938 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 05:35:19.220105 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 05:35:19.229113 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 05:35:19.229433 coreos-metadata[1540]: Sep 09 05:35:19.229 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 9 05:35:19.234602 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 05:35:19.238273 jq[1545]: false Sep 9 05:35:19.244769 coreos-metadata[1540]: Sep 09 05:35:19.243 INFO Fetch successful Sep 9 05:35:19.248216 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 05:35:19.249742 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 05:35:19.251742 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 05:35:19.259526 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 05:35:19.265130 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 05:35:19.270470 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing passwd entry cache Sep 9 05:35:19.270852 oslogin_cache_refresh[1547]: Refreshing passwd entry cache Sep 9 05:35:19.275549 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting users, quitting Sep 9 05:35:19.275549 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 9 05:35:19.275549 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing group entry cache Sep 9 05:35:19.274618 oslogin_cache_refresh[1547]: Failure getting users, quitting Sep 9 05:35:19.274637 oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 05:35:19.274694 oslogin_cache_refresh[1547]: Refreshing group entry cache Sep 9 05:35:19.278831 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting groups, quitting Sep 9 05:35:19.278831 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 05:35:19.277046 oslogin_cache_refresh[1547]: Failure getting groups, quitting Sep 9 05:35:19.277065 oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 05:35:19.285961 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 05:35:19.290464 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 05:35:19.291955 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 05:35:19.292451 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 05:35:19.292861 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 05:35:19.293663 extend-filesystems[1546]: Found /dev/vda6 Sep 9 05:35:20.487170 systemd-timesyncd[1480]: Contacted time server 23.95.49.216:123 (0.flatcar.pool.ntp.org). Sep 9 05:35:20.487233 systemd-timesyncd[1480]: Initial clock synchronization to Tue 2025-09-09 05:35:20.487049 UTC. Sep 9 05:35:20.487289 systemd-resolved[1467]: Clock change detected. Flushing caches. Sep 9 05:35:20.493559 extend-filesystems[1546]: Found /dev/vda9 Sep 9 05:35:20.497051 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 9 05:35:20.497389 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 05:35:20.517100 extend-filesystems[1546]: Checking size of /dev/vda9 Sep 9 05:35:20.523645 update_engine[1554]: I20250909 05:35:20.519148 1554 main.cc:92] Flatcar Update Engine starting Sep 9 05:35:20.543312 jq[1559]: true Sep 9 05:35:20.546754 tar[1564]: linux-amd64/helm Sep 9 05:35:20.545084 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 05:35:20.545422 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 05:35:20.557111 (ntainerd)[1574]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 05:35:20.579701 jq[1588]: true Sep 9 05:35:20.599199 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 9 05:35:20.600444 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 05:35:20.614886 extend-filesystems[1546]: Resized partition /dev/vda9 Sep 9 05:35:20.618003 dbus-daemon[1541]: [system] SELinux support is enabled Sep 9 05:35:20.618339 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 05:35:20.626480 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 05:35:20.628608 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 05:35:20.630160 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 9 05:35:20.630271 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Sep 9 05:35:20.630301 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 05:35:20.641681 extend-filesystems[1595]: resize2fs 1.47.3 (8-Jul-2025) Sep 9 05:35:20.650602 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 9 05:35:20.669677 systemd[1]: Started update-engine.service - Update Engine. Sep 9 05:35:20.671794 update_engine[1554]: I20250909 05:35:20.669705 1554 update_check_scheduler.cc:74] Next update check in 2m57s Sep 9 05:35:20.684948 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 05:35:20.816921 systemd-logind[1552]: New seat seat0. Sep 9 05:35:20.825176 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 9 05:35:20.847003 systemd-logind[1552]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 05:35:20.847032 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 05:35:20.847598 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 05:35:20.851591 extend-filesystems[1595]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 05:35:20.851591 extend-filesystems[1595]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 9 05:35:20.851591 extend-filesystems[1595]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 9 05:35:20.860311 extend-filesystems[1546]: Resized filesystem in /dev/vda9 Sep 9 05:35:20.855812 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 05:35:20.878512 bash[1611]: Updated "/home/core/.ssh/authorized_keys" Sep 9 05:35:20.857615 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 9 05:35:20.872590 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 05:35:20.887903 systemd[1]: Starting sshkeys.service... Sep 9 05:35:20.954988 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 9 05:35:20.963351 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 9 05:35:21.067617 coreos-metadata[1621]: Sep 09 05:35:21.067 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 9 05:35:21.080103 coreos-metadata[1621]: Sep 09 05:35:21.079 INFO Fetch successful Sep 9 05:35:21.098301 unknown[1621]: wrote ssh authorized keys file for user: core Sep 9 05:35:21.099604 containerd[1574]: time="2025-09-09T05:35:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 05:35:21.105278 containerd[1574]: time="2025-09-09T05:35:21.104326519Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 05:35:21.141969 update-ssh-keys[1629]: Updated "/home/core/.ssh/authorized_keys" Sep 9 05:35:21.145391 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 9 05:35:21.148816 systemd[1]: Finished sshkeys.service. 
Sep 9 05:35:21.164802 containerd[1574]: time="2025-09-09T05:35:21.164690681Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.918µs" Sep 9 05:35:21.164802 containerd[1574]: time="2025-09-09T05:35:21.164737036Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 05:35:21.164802 containerd[1574]: time="2025-09-09T05:35:21.164760265Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 05:35:21.165177 containerd[1574]: time="2025-09-09T05:35:21.164986651Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 05:35:21.165177 containerd[1574]: time="2025-09-09T05:35:21.165012545Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 05:35:21.165177 containerd[1574]: time="2025-09-09T05:35:21.165043420Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:35:21.165177 containerd[1574]: time="2025-09-09T05:35:21.165125149Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:35:21.165177 containerd[1574]: time="2025-09-09T05:35:21.165141525Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:35:21.165444 containerd[1574]: time="2025-09-09T05:35:21.165406303Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:35:21.165444 containerd[1574]: time="2025-09-09T05:35:21.165421849Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:35:21.165444 containerd[1574]: time="2025-09-09T05:35:21.165435826Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:35:21.165504 containerd[1574]: time="2025-09-09T05:35:21.165449953Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 05:35:21.165869 containerd[1574]: time="2025-09-09T05:35:21.165548492Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 05:35:21.165869 containerd[1574]: time="2025-09-09T05:35:21.165783704Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:35:21.165869 containerd[1574]: time="2025-09-09T05:35:21.165815670Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:35:21.165869 containerd[1574]: time="2025-09-09T05:35:21.165827630Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 05:35:21.165869 containerd[1574]: time="2025-09-09T05:35:21.165868381Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 05:35:21.166282 containerd[1574]: time="2025-09-09T05:35:21.166091876Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 05:35:21.166282 containerd[1574]: time="2025-09-09T05:35:21.166157564Z" level=info msg="metadata content store policy set" policy=shared Sep 9 05:35:21.168748 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 05:35:21.174045 containerd[1574]: 
time="2025-09-09T05:35:21.173948130Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 05:35:21.174045 containerd[1574]: time="2025-09-09T05:35:21.174019978Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 05:35:21.174045 containerd[1574]: time="2025-09-09T05:35:21.174034456Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 05:35:21.174045 containerd[1574]: time="2025-09-09T05:35:21.174046182Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 05:35:21.174045 containerd[1574]: time="2025-09-09T05:35:21.174060785Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174071179Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174086489Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174111700Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174125542Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174135914Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174146221Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174165238Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174309007Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174339395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174367624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174392216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174407194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174423674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174474578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 05:35:21.174559 containerd[1574]: time="2025-09-09T05:35:21.174495585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 05:35:21.178395 containerd[1574]: time="2025-09-09T05:35:21.174508661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 05:35:21.178395 containerd[1574]: time="2025-09-09T05:35:21.174542166Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 05:35:21.178395 containerd[1574]: time="2025-09-09T05:35:21.174559603Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 05:35:21.178395 containerd[1574]: time="2025-09-09T05:35:21.174628326Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 05:35:21.178395 containerd[1574]: time="2025-09-09T05:35:21.174644751Z" level=info msg="Start snapshots syncer"
Sep 9 05:35:21.178395 containerd[1574]: time="2025-09-09T05:35:21.174670135Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 05:35:21.178639 containerd[1574]: time="2025-09-09T05:35:21.175079997Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 05:35:21.178639 containerd[1574]: time="2025-09-09T05:35:21.175161532Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175256968Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175393913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175417539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175429975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175440714Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175453885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175465078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175476318Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175498653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175509742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175555616Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175584752Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175598708Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 05:35:21.178838 containerd[1574]: time="2025-09-09T05:35:21.175610262Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 05:35:21.179112 containerd[1574]: time="2025-09-09T05:35:21.175621507Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 05:35:21.179112 containerd[1574]: time="2025-09-09T05:35:21.175630337Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 05:35:21.179112 containerd[1574]: time="2025-09-09T05:35:21.175640848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 05:35:21.179112 containerd[1574]: time="2025-09-09T05:35:21.175660038Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 05:35:21.179112 containerd[1574]: time="2025-09-09T05:35:21.175680417Z" level=info msg="runtime interface created"
Sep 9 05:35:21.179112 containerd[1574]: time="2025-09-09T05:35:21.175687215Z" level=info msg="created NRI interface"
Sep 9 05:35:21.179112 containerd[1574]: time="2025-09-09T05:35:21.175695162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 05:35:21.179112 containerd[1574]: time="2025-09-09T05:35:21.175708544Z" level=info msg="Connect containerd service"
Sep 9 05:35:21.179112 containerd[1574]: time="2025-09-09T05:35:21.175735764Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 05:35:21.180949 containerd[1574]: time="2025-09-09T05:35:21.180907462Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 05:35:21.417418 containerd[1574]: time="2025-09-09T05:35:21.417279313Z" level=info msg="Start subscribing containerd event"
Sep 9 05:35:21.418103 containerd[1574]: time="2025-09-09T05:35:21.417561257Z" level=info msg="Start recovering state"
Sep 9 05:35:21.418103 containerd[1574]: time="2025-09-09T05:35:21.417789662Z" level=info msg="Start event monitor"
Sep 9 05:35:21.418103 containerd[1574]: time="2025-09-09T05:35:21.418056644Z" level=info msg="Start cni network conf syncer for default"
Sep 9 05:35:21.418103 containerd[1574]: time="2025-09-09T05:35:21.418071764Z" level=info msg="Start streaming server"
Sep 9 05:35:21.418103 containerd[1574]: time="2025-09-09T05:35:21.418082254Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 05:35:21.418103 containerd[1574]: time="2025-09-09T05:35:21.418092422Z" level=info msg="runtime interface starting up..."
Sep 9 05:35:21.418294 containerd[1574]: time="2025-09-09T05:35:21.418099093Z" level=info msg="starting plugins..."
Sep 9 05:35:21.418294 containerd[1574]: time="2025-09-09T05:35:21.418143352Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 05:35:21.419651 containerd[1574]: time="2025-09-09T05:35:21.419558321Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 05:35:21.419915 containerd[1574]: time="2025-09-09T05:35:21.419892350Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 05:35:21.421025 containerd[1574]: time="2025-09-09T05:35:21.420076677Z" level=info msg="containerd successfully booted in 0.323016s"
Sep 9 05:35:21.420151 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 05:35:21.426397 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 05:35:21.467044 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 05:35:21.474963 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 05:35:21.503290 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 05:35:21.505798 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 05:35:21.513893 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 05:35:21.550684 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 05:35:21.557112 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 05:35:21.562850 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 05:35:21.564285 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 05:35:21.607334 tar[1564]: linux-amd64/LICENSE
Sep 9 05:35:21.607334 tar[1564]: linux-amd64/README.md
Sep 9 05:35:21.629488 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 05:35:21.703815 systemd-networkd[1466]: eth0: Gained IPv6LL
Sep 9 05:35:21.706570 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 05:35:21.709207 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 05:35:21.714020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:35:21.719924 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 05:35:21.751750 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 05:35:22.215758 systemd-networkd[1466]: eth1: Gained IPv6LL
Sep 9 05:35:22.899136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:35:22.901913 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 05:35:22.904274 systemd[1]: Startup finished in 4.461s (kernel) + 7.718s (initrd) + 6.371s (userspace) = 18.551s.
Sep 9 05:35:22.916392 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:35:22.980382 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 05:35:22.981786 systemd[1]: Started sshd@0-143.198.157.2:22-139.178.89.65:39742.service - OpenSSH per-connection server daemon (139.178.89.65:39742).
Sep 9 05:35:23.097924 sshd[1689]: Accepted publickey for core from 139.178.89.65 port 39742 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:35:23.099573 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:23.120148 systemd-logind[1552]: New session 1 of user core.
Sep 9 05:35:23.122402 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 05:35:23.127858 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 05:35:23.162015 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 05:35:23.167099 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 05:35:23.186668 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 05:35:23.194947 systemd-logind[1552]: New session c1 of user core.
Sep 9 05:35:23.376607 systemd[1698]: Queued start job for default target default.target.
Sep 9 05:35:23.382951 systemd[1698]: Created slice app.slice - User Application Slice.
Sep 9 05:35:23.383007 systemd[1698]: Reached target paths.target - Paths.
Sep 9 05:35:23.383078 systemd[1698]: Reached target timers.target - Timers.
Sep 9 05:35:23.385852 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 05:35:23.416774 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 05:35:23.419897 systemd[1698]: Reached target sockets.target - Sockets.
Sep 9 05:35:23.419988 systemd[1698]: Reached target basic.target - Basic System.
Sep 9 05:35:23.420026 systemd[1698]: Reached target default.target - Main User Target.
Sep 9 05:35:23.420067 systemd[1698]: Startup finished in 210ms.
Sep 9 05:35:23.420495 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 05:35:23.427923 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 05:35:23.508824 systemd[1]: Started sshd@1-143.198.157.2:22-139.178.89.65:39754.service - OpenSSH per-connection server daemon (139.178.89.65:39754).
Sep 9 05:35:23.595906 sshd[1709]: Accepted publickey for core from 139.178.89.65 port 39754 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:35:23.598679 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:23.607610 systemd-logind[1552]: New session 2 of user core.
Sep 9 05:35:23.617835 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 05:35:23.631214 kubelet[1683]: E0909 05:35:23.631120 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:35:23.635197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:35:23.635372 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:35:23.636139 systemd[1]: kubelet.service: Consumed 1.488s CPU time, 263.5M memory peak.
Sep 9 05:35:23.688616 sshd[1713]: Connection closed by 139.178.89.65 port 39754
Sep 9 05:35:23.689858 sshd-session[1709]: pam_unix(sshd:session): session closed for user core
Sep 9 05:35:23.695772 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit.
Sep 9 05:35:23.696153 systemd[1]: sshd@1-143.198.157.2:22-139.178.89.65:39754.service: Deactivated successfully.
Sep 9 05:35:23.698848 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 05:35:23.704052 systemd[1]: Started sshd@2-143.198.157.2:22-139.178.89.65:39764.service - OpenSSH per-connection server daemon (139.178.89.65:39764).
Sep 9 05:35:23.705864 systemd-logind[1552]: Removed session 2.
Sep 9 05:35:23.785150 sshd[1720]: Accepted publickey for core from 139.178.89.65 port 39764 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:35:23.787577 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:23.795419 systemd-logind[1552]: New session 3 of user core.
Sep 9 05:35:23.805045 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 05:35:23.866511 sshd[1723]: Connection closed by 139.178.89.65 port 39764
Sep 9 05:35:23.866284 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
Sep 9 05:35:23.880366 systemd[1]: sshd@2-143.198.157.2:22-139.178.89.65:39764.service: Deactivated successfully.
Sep 9 05:35:23.883018 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 05:35:23.885069 systemd-logind[1552]: Session 3 logged out. Waiting for processes to exit.
Sep 9 05:35:23.888598 systemd[1]: Started sshd@3-143.198.157.2:22-139.178.89.65:39780.service - OpenSSH per-connection server daemon (139.178.89.65:39780).
Sep 9 05:35:23.889919 systemd-logind[1552]: Removed session 3.
Sep 9 05:35:23.977557 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 39780 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:35:23.979118 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:23.985945 systemd-logind[1552]: New session 4 of user core.
Sep 9 05:35:23.999001 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 05:35:24.061469 sshd[1732]: Connection closed by 139.178.89.65 port 39780
Sep 9 05:35:24.062039 sshd-session[1729]: pam_unix(sshd:session): session closed for user core
Sep 9 05:35:24.078850 systemd[1]: sshd@3-143.198.157.2:22-139.178.89.65:39780.service: Deactivated successfully.
Sep 9 05:35:24.081000 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 05:35:24.082026 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit.
Sep 9 05:35:24.085781 systemd[1]: Started sshd@4-143.198.157.2:22-139.178.89.65:39796.service - OpenSSH per-connection server daemon (139.178.89.65:39796).
Sep 9 05:35:24.087003 systemd-logind[1552]: Removed session 4.
Sep 9 05:35:24.160819 sshd[1738]: Accepted publickey for core from 139.178.89.65 port 39796 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:35:24.162696 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:24.168685 systemd-logind[1552]: New session 5 of user core.
Sep 9 05:35:24.176929 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 05:35:24.252367 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 05:35:24.253453 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:35:24.268634 sudo[1742]: pam_unix(sudo:session): session closed for user root
Sep 9 05:35:24.273200 sshd[1741]: Connection closed by 139.178.89.65 port 39796
Sep 9 05:35:24.274486 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Sep 9 05:35:24.290584 systemd[1]: sshd@4-143.198.157.2:22-139.178.89.65:39796.service: Deactivated successfully.
Sep 9 05:35:24.293114 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 05:35:24.294233 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit.
Sep 9 05:35:24.299269 systemd[1]: Started sshd@5-143.198.157.2:22-139.178.89.65:39806.service - OpenSSH per-connection server daemon (139.178.89.65:39806).
Sep 9 05:35:24.301110 systemd-logind[1552]: Removed session 5.
Sep 9 05:35:24.564858 sshd[1748]: Accepted publickey for core from 139.178.89.65 port 39806 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:35:24.567770 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:24.577105 systemd-logind[1552]: New session 6 of user core.
Sep 9 05:35:24.585854 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 05:35:24.651457 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 05:35:24.652003 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:35:24.660659 sudo[1753]: pam_unix(sudo:session): session closed for user root
Sep 9 05:35:24.670988 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 05:35:24.671443 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:35:24.689090 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:35:24.749147 augenrules[1775]: No rules
Sep 9 05:35:24.751020 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:35:24.751306 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:35:24.752789 sudo[1752]: pam_unix(sudo:session): session closed for user root
Sep 9 05:35:24.756064 sshd[1751]: Connection closed by 139.178.89.65 port 39806
Sep 9 05:35:24.756709 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Sep 9 05:35:24.770067 systemd[1]: sshd@5-143.198.157.2:22-139.178.89.65:39806.service: Deactivated successfully.
Sep 9 05:35:24.772334 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 05:35:24.773615 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit.
Sep 9 05:35:24.778321 systemd[1]: Started sshd@6-143.198.157.2:22-139.178.89.65:39822.service - OpenSSH per-connection server daemon (139.178.89.65:39822).
Sep 9 05:35:24.779585 systemd-logind[1552]: Removed session 6.
Sep 9 05:35:24.852639 sshd[1784]: Accepted publickey for core from 139.178.89.65 port 39822 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:35:24.854084 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:24.862053 systemd-logind[1552]: New session 7 of user core.
Sep 9 05:35:24.867889 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 05:35:24.930463 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 05:35:24.931485 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:35:25.502454 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 05:35:25.526155 (dockerd)[1807]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 05:35:25.963416 dockerd[1807]: time="2025-09-09T05:35:25.962448685Z" level=info msg="Starting up"
Sep 9 05:35:25.965507 dockerd[1807]: time="2025-09-09T05:35:25.965449762Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 05:35:25.982963 dockerd[1807]: time="2025-09-09T05:35:25.982857623Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 05:35:26.005549 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1285057292-merged.mount: Deactivated successfully.
Sep 9 05:35:26.104963 dockerd[1807]: time="2025-09-09T05:35:26.104696218Z" level=info msg="Loading containers: start."
Sep 9 05:35:26.118782 kernel: Initializing XFRM netlink socket
Sep 9 05:35:26.464923 systemd-networkd[1466]: docker0: Link UP
Sep 9 05:35:26.472820 dockerd[1807]: time="2025-09-09T05:35:26.472699749Z" level=info msg="Loading containers: done."
Sep 9 05:35:26.496322 dockerd[1807]: time="2025-09-09T05:35:26.496181519Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 05:35:26.496634 dockerd[1807]: time="2025-09-09T05:35:26.496362744Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 05:35:26.496634 dockerd[1807]: time="2025-09-09T05:35:26.496545432Z" level=info msg="Initializing buildkit"
Sep 9 05:35:26.535829 dockerd[1807]: time="2025-09-09T05:35:26.535763778Z" level=info msg="Completed buildkit initialization"
Sep 9 05:35:26.547969 dockerd[1807]: time="2025-09-09T05:35:26.547863334Z" level=info msg="Daemon has completed initialization"
Sep 9 05:35:26.548401 dockerd[1807]: time="2025-09-09T05:35:26.548344686Z" level=info msg="API listen on /run/docker.sock"
Sep 9 05:35:26.548492 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 05:35:27.440974 containerd[1574]: time="2025-09-09T05:35:27.440895944Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 9 05:35:28.134288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622420186.mount: Deactivated successfully.
Sep 9 05:35:29.420552 containerd[1574]: time="2025-09-09T05:35:29.419843909Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631"
Sep 9 05:35:29.420552 containerd[1574]: time="2025-09-09T05:35:29.420079173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:29.422807 containerd[1574]: time="2025-09-09T05:35:29.422677037Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:29.425773 containerd[1574]: time="2025-09-09T05:35:29.425723011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:29.427048 containerd[1574]: time="2025-09-09T05:35:29.427010439Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 1.986054132s"
Sep 9 05:35:29.427176 containerd[1574]: time="2025-09-09T05:35:29.427161987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\""
Sep 9 05:35:29.428689 containerd[1574]: time="2025-09-09T05:35:29.428566025Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 9 05:35:31.128444 containerd[1574]: time="2025-09-09T05:35:31.128391043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:31.130506 containerd[1574]: time="2025-09-09T05:35:31.130463470Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681"
Sep 9 05:35:31.131299 containerd[1574]: time="2025-09-09T05:35:31.131248384Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:31.135563 containerd[1574]: time="2025-09-09T05:35:31.135024757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:31.136113 containerd[1574]: time="2025-09-09T05:35:31.136079167Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.707110146s"
Sep 9 05:35:31.136188 containerd[1574]: time="2025-09-09T05:35:31.136116070Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 9 05:35:31.136705 containerd[1574]: time="2025-09-09T05:35:31.136583519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 9 05:35:32.550559 containerd[1574]: time="2025-09-09T05:35:32.549674470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:32.552599 containerd[1574]: time="2025-09-09T05:35:32.552535641Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427"
Sep 9 05:35:32.553662 containerd[1574]: time="2025-09-09T05:35:32.553292757Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:32.556198 containerd[1574]: time="2025-09-09T05:35:32.556151238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:32.557148 containerd[1574]: time="2025-09-09T05:35:32.557110220Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.420498504s"
Sep 9 05:35:32.557148 containerd[1574]: time="2025-09-09T05:35:32.557149316Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 9 05:35:32.557725 containerd[1574]: time="2025-09-09T05:35:32.557675108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 9 05:35:33.647573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:35:33.651109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:35:33.808964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573658632.mount: Deactivated successfully.
Sep 9 05:35:33.858753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:35:33.867924 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:35:33.969657 kubelet[2103]: E0909 05:35:33.969422 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:35:33.977163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:35:33.977351 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:35:33.977809 systemd[1]: kubelet.service: Consumed 236ms CPU time, 110.4M memory peak.
Sep 9 05:35:34.480386 containerd[1574]: time="2025-09-09T05:35:34.480281143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:34.481465 containerd[1574]: time="2025-09-09T05:35:34.481426231Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255"
Sep 9 05:35:34.482289 containerd[1574]: time="2025-09-09T05:35:34.482238443Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:34.485255 containerd[1574]: time="2025-09-09T05:35:34.485171813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:34.485584 containerd[1574]: time="2025-09-09T05:35:34.485549823Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 1.927795884s"
Sep 9 05:35:34.485679 containerd[1574]: time="2025-09-09T05:35:34.485591641Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 9 05:35:34.485679 containerd[1574]: time="2025-09-09T05:35:34.486626598Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 05:35:34.488164 systemd-resolved[1467]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Sep 9 05:35:35.003225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount682290817.mount: Deactivated successfully.
Sep 9 05:35:36.046557 containerd[1574]: time="2025-09-09T05:35:36.045725135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:36.049557 containerd[1574]: time="2025-09-09T05:35:36.049468974Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:36.049768 containerd[1574]: time="2025-09-09T05:35:36.049577113Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 9 05:35:36.052575 containerd[1574]: time="2025-09-09T05:35:36.052142377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:36.054106 containerd[1574]: time="2025-09-09T05:35:36.053586692Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.566929419s"
Sep 9 05:35:36.054106 containerd[1574]: time="2025-09-09T05:35:36.053639716Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 9 05:35:36.054322 containerd[1574]: time="2025-09-09T05:35:36.054283764Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 05:35:36.502334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603331015.mount: Deactivated successfully.
Sep 9 05:35:36.509459 containerd[1574]: time="2025-09-09T05:35:36.509349900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:35:36.511534 containerd[1574]: time="2025-09-09T05:35:36.511080262Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 9 05:35:36.512619 containerd[1574]: time="2025-09-09T05:35:36.512570462Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:35:36.514837 containerd[1574]: time="2025-09-09T05:35:36.514778779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:35:36.515866 containerd[1574]: time="2025-09-09T05:35:36.515828237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 461.504947ms"
Sep 9 05:35:36.516054 containerd[1574]: time="2025-09-09T05:35:36.515956527Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 9 05:35:36.516789 containerd[1574]: time="2025-09-09T05:35:36.516541631Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 9 05:35:37.119643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490729220.mount: Deactivated successfully.
Sep 9 05:35:37.576677 systemd-resolved[1467]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Sep 9 05:35:39.242501 containerd[1574]: time="2025-09-09T05:35:39.242428738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:39.245276 containerd[1574]: time="2025-09-09T05:35:39.245211328Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 9 05:35:39.246838 containerd[1574]: time="2025-09-09T05:35:39.246652012Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:39.251189 containerd[1574]: time="2025-09-09T05:35:39.251104425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:39.253578 containerd[1574]:
time="2025-09-09T05:35:39.252563780Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.735978996s" Sep 9 05:35:39.253578 containerd[1574]: time="2025-09-09T05:35:39.252628034Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 9 05:35:42.688832 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:42.689195 systemd[1]: kubelet.service: Consumed 236ms CPU time, 110.4M memory peak. Sep 9 05:35:42.691863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:35:42.727816 systemd[1]: Reload requested from client PID 2248 ('systemctl') (unit session-7.scope)... Sep 9 05:35:42.727870 systemd[1]: Reloading... Sep 9 05:35:42.861583 zram_generator::config[2291]: No configuration found. Sep 9 05:35:43.186100 systemd[1]: Reloading finished in 457 ms. Sep 9 05:35:43.254280 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 05:35:43.254407 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 05:35:43.254950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:43.255016 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98.1M memory peak. Sep 9 05:35:43.257609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:35:43.453110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 05:35:43.464028 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:35:43.523675 kubelet[2345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:35:43.524073 kubelet[2345]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 05:35:43.524136 kubelet[2345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:35:43.524329 kubelet[2345]: I0909 05:35:43.524279 2345 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:35:44.026398 kubelet[2345]: I0909 05:35:44.026344 2345 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 05:35:44.026731 kubelet[2345]: I0909 05:35:44.026709 2345 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:35:44.027163 kubelet[2345]: I0909 05:35:44.027144 2345 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 05:35:44.066511 kubelet[2345]: I0909 05:35:44.066453 2345 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:35:44.067237 kubelet[2345]: E0909 05:35:44.067188 2345 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://143.198.157.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.157.2:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:44.076117 kubelet[2345]: I0909 05:35:44.076089 2345 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:35:44.081651 kubelet[2345]: I0909 05:35:44.081531 2345 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 05:35:44.082554 kubelet[2345]: I0909 05:35:44.082482 2345 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 05:35:44.082979 kubelet[2345]: I0909 05:35:44.082935 2345 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:35:44.083260 kubelet[2345]: I0909 05:35:44.083050 2345 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452.0.0-n-58b1c71666","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal
":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:35:44.083462 kubelet[2345]: I0909 05:35:44.083450 2345 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:35:44.083511 kubelet[2345]: I0909 05:35:44.083505 2345 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 05:35:44.083790 kubelet[2345]: I0909 05:35:44.083777 2345 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:35:44.088369 kubelet[2345]: I0909 05:35:44.088328 2345 kubelet.go:408] "Attempting to sync node with API server" Sep 9 05:35:44.088586 kubelet[2345]: I0909 05:35:44.088574 2345 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:35:44.088698 kubelet[2345]: I0909 05:35:44.088689 2345 kubelet.go:314] "Adding apiserver pod source" Sep 9 05:35:44.088788 kubelet[2345]: I0909 05:35:44.088773 2345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:35:44.091979 kubelet[2345]: W0909 05:35:44.091133 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.157.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-58b1c71666&limit=500&resourceVersion=0": dial tcp 143.198.157.2:6443: connect: connection refused Sep 9 05:35:44.091979 kubelet[2345]: E0909 05:35:44.091256 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://143.198.157.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-58b1c71666&limit=500&resourceVersion=0\": dial tcp 143.198.157.2:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:44.091979 kubelet[2345]: W0909 05:35:44.091703 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.157.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.157.2:6443: connect: connection refused Sep 9 05:35:44.091979 kubelet[2345]: E0909 05:35:44.091759 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.157.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.157.2:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:44.092295 kubelet[2345]: I0909 05:35:44.092269 2345 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:35:44.096180 kubelet[2345]: I0909 05:35:44.096003 2345 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:35:44.096826 kubelet[2345]: W0909 05:35:44.096791 2345 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 9 05:35:44.100597 kubelet[2345]: I0909 05:35:44.100568 2345 server.go:1274] "Started kubelet" Sep 9 05:35:44.101298 kubelet[2345]: I0909 05:35:44.101260 2345 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:35:44.103260 kubelet[2345]: I0909 05:35:44.103233 2345 server.go:449] "Adding debug handlers to kubelet server" Sep 9 05:35:44.103803 kubelet[2345]: I0909 05:35:44.103767 2345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:35:44.109290 kubelet[2345]: I0909 05:35:44.109213 2345 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:35:44.109872 kubelet[2345]: I0909 05:35:44.109843 2345 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:35:44.112761 kubelet[2345]: I0909 05:35:44.112726 2345 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 05:35:44.119696 kubelet[2345]: I0909 05:35:44.119543 2345 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 05:35:44.124890 kubelet[2345]: I0909 05:35:44.124372 2345 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:35:44.125906 kubelet[2345]: E0909 05:35:44.113309 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-58b1c71666\" not found" Sep 9 05:35:44.126026 kubelet[2345]: I0909 05:35:44.125955 2345 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:35:44.127574 kubelet[2345]: W0909 05:35:44.126797 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.157.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.157.2:6443: connect: connection refused Sep 9 05:35:44.127574 kubelet[2345]: E0909 05:35:44.126884 2345 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.157.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.157.2:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:44.127574 kubelet[2345]: E0909 05:35:44.126979 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.157.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-58b1c71666?timeout=10s\": dial tcp 143.198.157.2:6443: connect: connection refused" interval="200ms" Sep 9 05:35:44.131620 kubelet[2345]: E0909 05:35:44.127094 2345 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.157.2:6443/api/v1/namespaces/default/events\": dial tcp 143.198.157.2:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4452.0.0-n-58b1c71666.1863867b62c08236 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4452.0.0-n-58b1c71666,UID:ci-4452.0.0-n-58b1c71666,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4452.0.0-n-58b1c71666,},FirstTimestamp:2025-09-09 05:35:44.100508214 +0000 UTC m=+0.631154601,LastTimestamp:2025-09-09 05:35:44.100508214 +0000 UTC m=+0.631154601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452.0.0-n-58b1c71666,}" Sep 9 05:35:44.131620 kubelet[2345]: I0909 05:35:44.131442 2345 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:35:44.131924 kubelet[2345]: I0909 05:35:44.131705 2345 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory 
Sep 9 05:35:44.136140 kubelet[2345]: I0909 05:35:44.136099 2345 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:35:44.141332 kubelet[2345]: I0909 05:35:44.140910 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:35:44.142771 kubelet[2345]: I0909 05:35:44.142324 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 05:35:44.142771 kubelet[2345]: I0909 05:35:44.142382 2345 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 05:35:44.142771 kubelet[2345]: I0909 05:35:44.142438 2345 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 05:35:44.142771 kubelet[2345]: E0909 05:35:44.142491 2345 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:35:44.156211 kubelet[2345]: W0909 05:35:44.156141 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.157.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.157.2:6443: connect: connection refused Sep 9 05:35:44.156211 kubelet[2345]: E0909 05:35:44.156212 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.157.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.157.2:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:44.156466 kubelet[2345]: E0909 05:35:44.156335 2345 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:35:44.175388 kubelet[2345]: I0909 05:35:44.175316 2345 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 05:35:44.175649 kubelet[2345]: I0909 05:35:44.175489 2345 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 05:35:44.175649 kubelet[2345]: I0909 05:35:44.175510 2345 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:35:44.178785 kubelet[2345]: I0909 05:35:44.178738 2345 policy_none.go:49] "None policy: Start" Sep 9 05:35:44.180190 kubelet[2345]: I0909 05:35:44.180134 2345 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 05:35:44.180384 kubelet[2345]: I0909 05:35:44.180312 2345 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:35:44.193584 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 05:35:44.204043 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 05:35:44.209705 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 9 05:35:44.221070 kubelet[2345]: I0909 05:35:44.221019 2345 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:35:44.222456 kubelet[2345]: I0909 05:35:44.222390 2345 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:35:44.224083 kubelet[2345]: I0909 05:35:44.222433 2345 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:35:44.224083 kubelet[2345]: I0909 05:35:44.223285 2345 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:35:44.231145 kubelet[2345]: E0909 05:35:44.231092 2345 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4452.0.0-n-58b1c71666\" not found" Sep 9 05:35:44.259937 systemd[1]: Created slice kubepods-burstable-pod7f3a8e939e4d7346d2381be8f0743598.slice - libcontainer container kubepods-burstable-pod7f3a8e939e4d7346d2381be8f0743598.slice. Sep 9 05:35:44.290040 systemd[1]: Created slice kubepods-burstable-podc462a5a2fd523dfc10d95df6779af543.slice - libcontainer container kubepods-burstable-podc462a5a2fd523dfc10d95df6779af543.slice. Sep 9 05:35:44.296927 systemd[1]: Created slice kubepods-burstable-poddbb401cc055fccbfd214961abdf697f0.slice - libcontainer container kubepods-burstable-poddbb401cc055fccbfd214961abdf697f0.slice. 
Sep 9 05:35:44.324435 kubelet[2345]: I0909 05:35:44.324377 2345 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.325155 kubelet[2345]: E0909 05:35:44.325112 2345 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.157.2:6443/api/v1/nodes\": dial tcp 143.198.157.2:6443: connect: connection refused" node="ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.326383 kubelet[2345]: I0909 05:35:44.326341 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c462a5a2fd523dfc10d95df6779af543-kubeconfig\") pod \"kube-scheduler-ci-4452.0.0-n-58b1c71666\" (UID: \"c462a5a2fd523dfc10d95df6779af543\") " pod="kube-system/kube-scheduler-ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.326472 kubelet[2345]: I0909 05:35:44.326393 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbb401cc055fccbfd214961abdf697f0-ca-certs\") pod \"kube-apiserver-ci-4452.0.0-n-58b1c71666\" (UID: \"dbb401cc055fccbfd214961abdf697f0\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.326472 kubelet[2345]: I0909 05:35:44.326419 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbb401cc055fccbfd214961abdf697f0-k8s-certs\") pod \"kube-apiserver-ci-4452.0.0-n-58b1c71666\" (UID: \"dbb401cc055fccbfd214961abdf697f0\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.326472 kubelet[2345]: I0909 05:35:44.326436 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f3a8e939e4d7346d2381be8f0743598-ca-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-58b1c71666\" (UID: 
\"7f3a8e939e4d7346d2381be8f0743598\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.326472 kubelet[2345]: I0909 05:35:44.326455 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f3a8e939e4d7346d2381be8f0743598-flexvolume-dir\") pod \"kube-controller-manager-ci-4452.0.0-n-58b1c71666\" (UID: \"7f3a8e939e4d7346d2381be8f0743598\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.326622 kubelet[2345]: I0909 05:35:44.326470 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f3a8e939e4d7346d2381be8f0743598-k8s-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-58b1c71666\" (UID: \"7f3a8e939e4d7346d2381be8f0743598\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.326622 kubelet[2345]: I0909 05:35:44.326579 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f3a8e939e4d7346d2381be8f0743598-kubeconfig\") pod \"kube-controller-manager-ci-4452.0.0-n-58b1c71666\" (UID: \"7f3a8e939e4d7346d2381be8f0743598\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.326744 kubelet[2345]: I0909 05:35:44.326633 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f3a8e939e4d7346d2381be8f0743598-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452.0.0-n-58b1c71666\" (UID: \"7f3a8e939e4d7346d2381be8f0743598\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.326744 kubelet[2345]: I0909 05:35:44.326722 2345 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbb401cc055fccbfd214961abdf697f0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452.0.0-n-58b1c71666\" (UID: \"dbb401cc055fccbfd214961abdf697f0\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.327701 kubelet[2345]: E0909 05:35:44.327655 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.157.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-58b1c71666?timeout=10s\": dial tcp 143.198.157.2:6443: connect: connection refused" interval="400ms" Sep 9 05:35:44.526819 kubelet[2345]: I0909 05:35:44.526771 2345 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.527617 kubelet[2345]: E0909 05:35:44.527256 2345 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.157.2:6443/api/v1/nodes\": dial tcp 143.198.157.2:6443: connect: connection refused" node="ci-4452.0.0-n-58b1c71666" Sep 9 05:35:44.587308 kubelet[2345]: E0909 05:35:44.587080 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:44.588241 containerd[1574]: time="2025-09-09T05:35:44.588144704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452.0.0-n-58b1c71666,Uid:7f3a8e939e4d7346d2381be8f0743598,Namespace:kube-system,Attempt:0,}" Sep 9 05:35:44.602548 kubelet[2345]: E0909 05:35:44.600582 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:44.602956 kubelet[2345]: E0909 05:35:44.602926 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:44.609032 containerd[1574]: time="2025-09-09T05:35:44.608940374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452.0.0-n-58b1c71666,Uid:c462a5a2fd523dfc10d95df6779af543,Namespace:kube-system,Attempt:0,}" Sep 9 05:35:44.610093 containerd[1574]: time="2025-09-09T05:35:44.610006595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452.0.0-n-58b1c71666,Uid:dbb401cc055fccbfd214961abdf697f0,Namespace:kube-system,Attempt:0,}" Sep 9 05:35:44.734568 kubelet[2345]: E0909 05:35:44.734473 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.157.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-58b1c71666?timeout=10s\": dial tcp 143.198.157.2:6443: connect: connection refused" interval="800ms" Sep 9 05:35:44.745495 containerd[1574]: time="2025-09-09T05:35:44.745238004Z" level=info msg="connecting to shim c715a4b4aacbd3137beddb4c164e66e8e04bd4137605aaaad70baf144f695007" address="unix:///run/containerd/s/31a74635af5af25062b90958a5eaf50aa4517b466e7097a038fe101e40451151" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:35:44.749507 containerd[1574]: time="2025-09-09T05:35:44.748800697Z" level=info msg="connecting to shim 029188600a6b29e9e16e01e840283234bf45103f9e94ee489c9ccfdbcb883ea4" address="unix:///run/containerd/s/e9097fe7a2969062fa30f0ad6d73ed3a148066363ed000feb86167375101a2bc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:35:44.753467 containerd[1574]: time="2025-09-09T05:35:44.753404330Z" level=info msg="connecting to shim 60327a5298ec3c3441d40f3ceb3a777b048b312ca3298e9192047e57de18548b" address="unix:///run/containerd/s/f1d9211afbe67af547e4408fc38fb4747de5bc07fba021eb995679359ab489f3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:35:44.888813 systemd[1]: Started 
cri-containerd-60327a5298ec3c3441d40f3ceb3a777b048b312ca3298e9192047e57de18548b.scope - libcontainer container 60327a5298ec3c3441d40f3ceb3a777b048b312ca3298e9192047e57de18548b.
Sep 9 05:35:44.900806 systemd[1]: Started cri-containerd-029188600a6b29e9e16e01e840283234bf45103f9e94ee489c9ccfdbcb883ea4.scope - libcontainer container 029188600a6b29e9e16e01e840283234bf45103f9e94ee489c9ccfdbcb883ea4.
Sep 9 05:35:44.903420 systemd[1]: Started cri-containerd-c715a4b4aacbd3137beddb4c164e66e8e04bd4137605aaaad70baf144f695007.scope - libcontainer container c715a4b4aacbd3137beddb4c164e66e8e04bd4137605aaaad70baf144f695007.
Sep 9 05:35:44.930006 kubelet[2345]: I0909 05:35:44.929961 2345 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:44.930669 kubelet[2345]: E0909 05:35:44.930510 2345 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.157.2:6443/api/v1/nodes\": dial tcp 143.198.157.2:6443: connect: connection refused" node="ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:44.973457 kubelet[2345]: W0909 05:35:44.973374 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.157.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-58b1c71666&limit=500&resourceVersion=0": dial tcp 143.198.157.2:6443: connect: connection refused
Sep 9 05:35:44.973654 kubelet[2345]: E0909 05:35:44.973472 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.157.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-58b1c71666&limit=500&resourceVersion=0\": dial tcp 143.198.157.2:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:35:45.021082 containerd[1574]: time="2025-09-09T05:35:45.020706960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452.0.0-n-58b1c71666,Uid:7f3a8e939e4d7346d2381be8f0743598,Namespace:kube-system,Attempt:0,} returns sandbox id \"60327a5298ec3c3441d40f3ceb3a777b048b312ca3298e9192047e57de18548b\""
Sep 9 05:35:45.023617 kubelet[2345]: E0909 05:35:45.023239 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:45.031665 containerd[1574]: time="2025-09-09T05:35:45.031617099Z" level=info msg="CreateContainer within sandbox \"60327a5298ec3c3441d40f3ceb3a777b048b312ca3298e9192047e57de18548b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 05:35:45.046937 containerd[1574]: time="2025-09-09T05:35:45.046870120Z" level=info msg="Container ff5b7403fdac12b008feead4af4b35749d79449ffe658c72fafd3e196cb5fe7c: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:35:45.049264 containerd[1574]: time="2025-09-09T05:35:45.049203204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452.0.0-n-58b1c71666,Uid:c462a5a2fd523dfc10d95df6779af543,Namespace:kube-system,Attempt:0,} returns sandbox id \"029188600a6b29e9e16e01e840283234bf45103f9e94ee489c9ccfdbcb883ea4\""
Sep 9 05:35:45.051332 kubelet[2345]: E0909 05:35:45.051057 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:45.062800 containerd[1574]: time="2025-09-09T05:35:45.062750472Z" level=info msg="CreateContainer within sandbox \"029188600a6b29e9e16e01e840283234bf45103f9e94ee489c9ccfdbcb883ea4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 05:35:45.070828 containerd[1574]: time="2025-09-09T05:35:45.070784408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452.0.0-n-58b1c71666,Uid:dbb401cc055fccbfd214961abdf697f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c715a4b4aacbd3137beddb4c164e66e8e04bd4137605aaaad70baf144f695007\""
Sep 9 05:35:45.071847 containerd[1574]: time="2025-09-09T05:35:45.071800748Z" level=info msg="CreateContainer within sandbox \"60327a5298ec3c3441d40f3ceb3a777b048b312ca3298e9192047e57de18548b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ff5b7403fdac12b008feead4af4b35749d79449ffe658c72fafd3e196cb5fe7c\""
Sep 9 05:35:45.072319 kubelet[2345]: E0909 05:35:45.072238 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:45.073835 containerd[1574]: time="2025-09-09T05:35:45.073780396Z" level=info msg="StartContainer for \"ff5b7403fdac12b008feead4af4b35749d79449ffe658c72fafd3e196cb5fe7c\""
Sep 9 05:35:45.074608 containerd[1574]: time="2025-09-09T05:35:45.074575929Z" level=info msg="CreateContainer within sandbox \"c715a4b4aacbd3137beddb4c164e66e8e04bd4137605aaaad70baf144f695007\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 05:35:45.080299 containerd[1574]: time="2025-09-09T05:35:45.080171994Z" level=info msg="connecting to shim ff5b7403fdac12b008feead4af4b35749d79449ffe658c72fafd3e196cb5fe7c" address="unix:///run/containerd/s/f1d9211afbe67af547e4408fc38fb4747de5bc07fba021eb995679359ab489f3" protocol=ttrpc version=3
Sep 9 05:35:45.085339 containerd[1574]: time="2025-09-09T05:35:45.084869445Z" level=info msg="Container 50f27db353bdc606322a3f346e48a62aa0894b2cded78fa2a74374ac2444594d: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:35:45.087285 containerd[1574]: time="2025-09-09T05:35:45.087246678Z" level=info msg="Container 729a059d07b30a10605de84071974b224cd8a18a743d0205d3b3ecc4bf061736: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:35:45.095165 containerd[1574]: time="2025-09-09T05:35:45.095110007Z" level=info msg="CreateContainer within sandbox \"029188600a6b29e9e16e01e840283234bf45103f9e94ee489c9ccfdbcb883ea4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"50f27db353bdc606322a3f346e48a62aa0894b2cded78fa2a74374ac2444594d\""
Sep 9 05:35:45.096441 containerd[1574]: time="2025-09-09T05:35:45.096404504Z" level=info msg="StartContainer for \"50f27db353bdc606322a3f346e48a62aa0894b2cded78fa2a74374ac2444594d\""
Sep 9 05:35:45.098098 containerd[1574]: time="2025-09-09T05:35:45.098052211Z" level=info msg="connecting to shim 50f27db353bdc606322a3f346e48a62aa0894b2cded78fa2a74374ac2444594d" address="unix:///run/containerd/s/e9097fe7a2969062fa30f0ad6d73ed3a148066363ed000feb86167375101a2bc" protocol=ttrpc version=3
Sep 9 05:35:45.098327 containerd[1574]: time="2025-09-09T05:35:45.098298496Z" level=info msg="CreateContainer within sandbox \"c715a4b4aacbd3137beddb4c164e66e8e04bd4137605aaaad70baf144f695007\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"729a059d07b30a10605de84071974b224cd8a18a743d0205d3b3ecc4bf061736\""
Sep 9 05:35:45.099327 containerd[1574]: time="2025-09-09T05:35:45.099268220Z" level=info msg="StartContainer for \"729a059d07b30a10605de84071974b224cd8a18a743d0205d3b3ecc4bf061736\""
Sep 9 05:35:45.103667 containerd[1574]: time="2025-09-09T05:35:45.103588393Z" level=info msg="connecting to shim 729a059d07b30a10605de84071974b224cd8a18a743d0205d3b3ecc4bf061736" address="unix:///run/containerd/s/31a74635af5af25062b90958a5eaf50aa4517b466e7097a038fe101e40451151" protocol=ttrpc version=3
Sep 9 05:35:45.121782 systemd[1]: Started cri-containerd-ff5b7403fdac12b008feead4af4b35749d79449ffe658c72fafd3e196cb5fe7c.scope - libcontainer container ff5b7403fdac12b008feead4af4b35749d79449ffe658c72fafd3e196cb5fe7c.
Sep 9 05:35:45.160833 systemd[1]: Started cri-containerd-50f27db353bdc606322a3f346e48a62aa0894b2cded78fa2a74374ac2444594d.scope - libcontainer container 50f27db353bdc606322a3f346e48a62aa0894b2cded78fa2a74374ac2444594d.
Sep 9 05:35:45.163822 systemd[1]: Started cri-containerd-729a059d07b30a10605de84071974b224cd8a18a743d0205d3b3ecc4bf061736.scope - libcontainer container 729a059d07b30a10605de84071974b224cd8a18a743d0205d3b3ecc4bf061736.
Sep 9 05:35:45.219341 kubelet[2345]: W0909 05:35:45.219241 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.157.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.157.2:6443: connect: connection refused
Sep 9 05:35:45.219509 kubelet[2345]: E0909 05:35:45.219346 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.157.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.157.2:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:35:45.261769 containerd[1574]: time="2025-09-09T05:35:45.261723846Z" level=info msg="StartContainer for \"729a059d07b30a10605de84071974b224cd8a18a743d0205d3b3ecc4bf061736\" returns successfully"
Sep 9 05:35:45.284008 containerd[1574]: time="2025-09-09T05:35:45.283950196Z" level=info msg="StartContainer for \"ff5b7403fdac12b008feead4af4b35749d79449ffe658c72fafd3e196cb5fe7c\" returns successfully"
Sep 9 05:35:45.333470 containerd[1574]: time="2025-09-09T05:35:45.333222228Z" level=info msg="StartContainer for \"50f27db353bdc606322a3f346e48a62aa0894b2cded78fa2a74374ac2444594d\" returns successfully"
Sep 9 05:35:45.732089 kubelet[2345]: I0909 05:35:45.732058 2345 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:46.193494 kubelet[2345]: E0909 05:35:46.193285 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:46.196909 kubelet[2345]: E0909 05:35:46.196876 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:46.205096 kubelet[2345]: E0909 05:35:46.205057 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:47.167752 kubelet[2345]: E0909 05:35:47.167704 2345 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4452.0.0-n-58b1c71666\" not found" node="ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:47.207220 kubelet[2345]: E0909 05:35:47.206719 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:47.208309 kubelet[2345]: E0909 05:35:47.208259 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:47.208544 kubelet[2345]: E0909 05:35:47.208459 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:47.232577 kubelet[2345]: I0909 05:35:47.232484 2345 kubelet_node_status.go:75] "Successfully registered node" node="ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:47.232577 kubelet[2345]: E0909 05:35:47.232576 2345 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4452.0.0-n-58b1c71666\": node \"ci-4452.0.0-n-58b1c71666\" not found"
Sep 9 05:35:47.254546 kubelet[2345]: E0909 05:35:47.254480 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-58b1c71666\" not found"
Sep 9 05:35:47.355608 kubelet[2345]: E0909 05:35:47.354688 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-58b1c71666\" not found"
Sep 9 05:35:47.456889 kubelet[2345]: E0909 05:35:47.456250 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-58b1c71666\" not found"
Sep 9 05:35:47.557095 kubelet[2345]: E0909 05:35:47.557035 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-58b1c71666\" not found"
Sep 9 05:35:48.095880 kubelet[2345]: I0909 05:35:48.095484 2345 apiserver.go:52] "Watching apiserver"
Sep 9 05:35:48.120477 kubelet[2345]: I0909 05:35:48.120430 2345 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 9 05:35:48.226210 kubelet[2345]: W0909 05:35:48.226147 2345 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 05:35:48.227354 kubelet[2345]: E0909 05:35:48.226748 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:48.228084 kubelet[2345]: W0909 05:35:48.227986 2345 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 05:35:48.228311 kubelet[2345]: E0909 05:35:48.228228 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:49.186687 kubelet[2345]: W0909 05:35:49.186569 2345 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 05:35:49.187745 kubelet[2345]: E0909 05:35:49.186953 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:49.212483 kubelet[2345]: E0909 05:35:49.211588 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:49.212483 kubelet[2345]: E0909 05:35:49.212002 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:49.212483 kubelet[2345]: E0909 05:35:49.212409 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:49.446333 systemd[1]: Reload requested from client PID 2616 ('systemctl') (unit session-7.scope)...
Sep 9 05:35:49.446361 systemd[1]: Reloading...
Sep 9 05:35:49.592568 zram_generator::config[2662]: No configuration found.
Sep 9 05:35:49.857676 systemd[1]: Reloading finished in 410 ms.
Sep 9 05:35:49.897496 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:35:49.914237 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 05:35:49.914711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:35:49.914816 systemd[1]: kubelet.service: Consumed 1.186s CPU time, 126.4M memory peak.
Sep 9 05:35:49.918741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:35:50.121981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:35:50.137059 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 05:35:50.218446 kubelet[2710]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:35:50.218446 kubelet[2710]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 9 05:35:50.218446 kubelet[2710]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:35:50.219005 kubelet[2710]: I0909 05:35:50.218498 2710 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 05:35:50.232547 kubelet[2710]: I0909 05:35:50.232468 2710 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 9 05:35:50.232547 kubelet[2710]: I0909 05:35:50.232506 2710 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 05:35:50.232801 kubelet[2710]: I0909 05:35:50.232778 2710 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 9 05:35:50.234385 kubelet[2710]: I0909 05:35:50.234342 2710 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 9 05:35:50.241383 kubelet[2710]: I0909 05:35:50.240949 2710 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 05:35:50.246925 kubelet[2710]: I0909 05:35:50.246895 2710 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 05:35:50.250209 kubelet[2710]: I0909 05:35:50.250180 2710 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 05:35:50.250374 kubelet[2710]: I0909 05:35:50.250303 2710 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 9 05:35:50.250457 kubelet[2710]: I0909 05:35:50.250424 2710 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 05:35:50.250737 kubelet[2710]: I0909 05:35:50.250458 2710 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452.0.0-n-58b1c71666","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 05:35:50.250843 kubelet[2710]: I0909 05:35:50.250750 2710 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 05:35:50.250843 kubelet[2710]: I0909 05:35:50.250764 2710 container_manager_linux.go:300] "Creating device plugin manager"
Sep 9 05:35:50.250843 kubelet[2710]: I0909 05:35:50.250808 2710 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:35:50.251020 kubelet[2710]: I0909 05:35:50.251004 2710 kubelet.go:408] "Attempting to sync node with API server"
Sep 9 05:35:50.251065 kubelet[2710]: I0909 05:35:50.251024 2710 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 05:35:50.251065 kubelet[2710]: I0909 05:35:50.251053 2710 kubelet.go:314] "Adding apiserver pod source"
Sep 9 05:35:50.251065 kubelet[2710]: I0909 05:35:50.251064 2710 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 05:35:50.252937 kubelet[2710]: I0909 05:35:50.252910 2710 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 05:35:50.256194 kubelet[2710]: I0909 05:35:50.256159 2710 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 05:35:50.256785 kubelet[2710]: I0909 05:35:50.256763 2710 server.go:1274] "Started kubelet"
Sep 9 05:35:50.261181 kubelet[2710]: I0909 05:35:50.260435 2710 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 05:35:50.261876 kubelet[2710]: I0909 05:35:50.261858 2710 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 05:35:50.262542 kubelet[2710]: I0909 05:35:50.261247 2710 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 05:35:50.263357 kubelet[2710]: I0909 05:35:50.262905 2710 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 05:35:50.270864 kubelet[2710]: I0909 05:35:50.260486 2710 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 05:35:50.276212 kubelet[2710]: I0909 05:35:50.275756 2710 server.go:449] "Adding debug handlers to kubelet server"
Sep 9 05:35:50.279823 kubelet[2710]: E0909 05:35:50.279783 2710 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 05:35:50.283278 kubelet[2710]: I0909 05:35:50.282385 2710 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 9 05:35:50.283278 kubelet[2710]: E0909 05:35:50.282772 2710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-58b1c71666\" not found"
Sep 9 05:35:50.290559 kubelet[2710]: I0909 05:35:50.290191 2710 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 9 05:35:50.290559 kubelet[2710]: I0909 05:35:50.290364 2710 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 05:35:50.294143 kubelet[2710]: I0909 05:35:50.294087 2710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 05:35:50.295821 kubelet[2710]: I0909 05:35:50.295786 2710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 05:35:50.295821 kubelet[2710]: I0909 05:35:50.295828 2710 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 9 05:35:50.295988 kubelet[2710]: I0909 05:35:50.295849 2710 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 9 05:35:50.295988 kubelet[2710]: E0909 05:35:50.295898 2710 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 05:35:50.307289 kubelet[2710]: I0909 05:35:50.307198 2710 factory.go:221] Registration of the systemd container factory successfully
Sep 9 05:35:50.308326 kubelet[2710]: I0909 05:35:50.308282 2710 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 05:35:50.312680 kubelet[2710]: I0909 05:35:50.312642 2710 factory.go:221] Registration of the containerd container factory successfully
Sep 9 05:35:50.373292 kubelet[2710]: I0909 05:35:50.373165 2710 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 9 05:35:50.373292 kubelet[2710]: I0909 05:35:50.373193 2710 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 9 05:35:50.373292 kubelet[2710]: I0909 05:35:50.373228 2710 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:35:50.375029 kubelet[2710]: I0909 05:35:50.374304 2710 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 05:35:50.375029 kubelet[2710]: I0909 05:35:50.374332 2710 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 05:35:50.375029 kubelet[2710]: I0909 05:35:50.374363 2710 policy_none.go:49] "None policy: Start"
Sep 9 05:35:50.377663 kubelet[2710]: I0909 05:35:50.377362 2710 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 9 05:35:50.377663 kubelet[2710]: I0909 05:35:50.377395 2710 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 05:35:50.377663 kubelet[2710]: I0909 05:35:50.377595 2710 state_mem.go:75] "Updated machine memory state"
Sep 9 05:35:50.383734 kubelet[2710]: I0909 05:35:50.383700 2710 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 05:35:50.384135 kubelet[2710]: I0909 05:35:50.383956 2710 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 05:35:50.384135 kubelet[2710]: I0909 05:35:50.383974 2710 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 05:35:50.384845 kubelet[2710]: I0909 05:35:50.384826 2710 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 05:35:50.404938 kubelet[2710]: W0909 05:35:50.404682 2710 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 05:35:50.404938 kubelet[2710]: E0909 05:35:50.404788 2710 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4452.0.0-n-58b1c71666\" already exists" pod="kube-system/kube-scheduler-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.411974 kubelet[2710]: W0909 05:35:50.411501 2710 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 05:35:50.411974 kubelet[2710]: E0909 05:35:50.411627 2710 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4452.0.0-n-58b1c71666\" already exists" pod="kube-system/kube-apiserver-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.411974 kubelet[2710]: W0909 05:35:50.411836 2710 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 05:35:50.411974 kubelet[2710]: E0909 05:35:50.411876 2710 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4452.0.0-n-58b1c71666\" already exists" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.496167 kubelet[2710]: I0909 05:35:50.495275 2710 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.506467 kubelet[2710]: I0909 05:35:50.506322 2710 kubelet_node_status.go:111] "Node was previously registered" node="ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.507070 kubelet[2710]: I0909 05:35:50.507050 2710 kubelet_node_status.go:75] "Successfully registered node" node="ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.591034 kubelet[2710]: I0909 05:35:50.590946 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbb401cc055fccbfd214961abdf697f0-k8s-certs\") pod \"kube-apiserver-ci-4452.0.0-n-58b1c71666\" (UID: \"dbb401cc055fccbfd214961abdf697f0\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.591320 kubelet[2710]: I0909 05:35:50.591172 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbb401cc055fccbfd214961abdf697f0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452.0.0-n-58b1c71666\" (UID: \"dbb401cc055fccbfd214961abdf697f0\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.591538 kubelet[2710]: I0909 05:35:50.591423 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f3a8e939e4d7346d2381be8f0743598-flexvolume-dir\") pod \"kube-controller-manager-ci-4452.0.0-n-58b1c71666\" (UID: \"7f3a8e939e4d7346d2381be8f0743598\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.591538 kubelet[2710]: I0909 05:35:50.591493 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f3a8e939e4d7346d2381be8f0743598-k8s-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-58b1c71666\" (UID: \"7f3a8e939e4d7346d2381be8f0743598\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.591699 kubelet[2710]: I0909 05:35:50.591512 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f3a8e939e4d7346d2381be8f0743598-kubeconfig\") pod \"kube-controller-manager-ci-4452.0.0-n-58b1c71666\" (UID: \"7f3a8e939e4d7346d2381be8f0743598\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.591699 kubelet[2710]: I0909 05:35:50.591664 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f3a8e939e4d7346d2381be8f0743598-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452.0.0-n-58b1c71666\" (UID: \"7f3a8e939e4d7346d2381be8f0743598\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.591850 kubelet[2710]: I0909 05:35:50.591683 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbb401cc055fccbfd214961abdf697f0-ca-certs\") pod \"kube-apiserver-ci-4452.0.0-n-58b1c71666\" (UID: \"dbb401cc055fccbfd214961abdf697f0\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.591850 kubelet[2710]: I0909 05:35:50.591814 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f3a8e939e4d7346d2381be8f0743598-ca-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-58b1c71666\" (UID: \"7f3a8e939e4d7346d2381be8f0743598\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.591850 kubelet[2710]: I0909 05:35:50.591829 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c462a5a2fd523dfc10d95df6779af543-kubeconfig\") pod \"kube-scheduler-ci-4452.0.0-n-58b1c71666\" (UID: \"c462a5a2fd523dfc10d95df6779af543\") " pod="kube-system/kube-scheduler-ci-4452.0.0-n-58b1c71666"
Sep 9 05:35:50.708024 kubelet[2710]: E0909 05:35:50.707970 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:50.713164 kubelet[2710]: E0909 05:35:50.712661 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:50.714733 kubelet[2710]: E0909 05:35:50.713406 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:51.252427 kubelet[2710]: I0909 05:35:51.252343 2710 apiserver.go:52] "Watching apiserver"
Sep 9 05:35:51.291273 kubelet[2710]: I0909 05:35:51.290933 2710 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 9 05:35:51.326645 kubelet[2710]: I0909 05:35:51.324972 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4452.0.0-n-58b1c71666" podStartSLOduration=3.324920571 podStartE2EDuration="3.324920571s" podCreationTimestamp="2025-09-09 05:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:35:51.304555452 +0000 UTC m=+1.159389460" watchObservedRunningTime="2025-09-09 05:35:51.324920571 +0000 UTC m=+1.179754567"
Sep 9 05:35:51.328579 kubelet[2710]: I0909 05:35:51.327032 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-58b1c71666" podStartSLOduration=3.3270135659999998 podStartE2EDuration="3.327013566s" podCreationTimestamp="2025-09-09 05:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:35:51.323692191 +0000 UTC m=+1.178526209" watchObservedRunningTime="2025-09-09 05:35:51.327013566 +0000 UTC m=+1.181847574"
Sep 9 05:35:51.346857 kubelet[2710]: E0909 05:35:51.346669 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:51.347716 kubelet[2710]: E0909 05:35:51.347672 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:51.348255 kubelet[2710]: E0909 05:35:51.348235 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:51.388894 kubelet[2710]: I0909 05:35:51.388823 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4452.0.0-n-58b1c71666" podStartSLOduration=2.388798213 podStartE2EDuration="2.388798213s" podCreationTimestamp="2025-09-09 05:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:35:51.345988669 +0000 UTC m=+1.200822683" watchObservedRunningTime="2025-09-09 05:35:51.388798213 +0000 UTC m=+1.243632223"
Sep 9 05:35:52.349578 kubelet[2710]: E0909 05:35:52.349185 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:52.350288 kubelet[2710]: E0909 05:35:52.349510 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:53.351170 kubelet[2710]: E0909 05:35:53.351098 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:54.126021 kubelet[2710]: E0909 05:35:54.124754 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:55.892066 kubelet[2710]: I0909 05:35:55.892015 2710 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 05:35:55.893830 containerd[1574]: time="2025-09-09T05:35:55.893789193Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 05:35:55.895269 kubelet[2710]: I0909 05:35:55.894993 2710 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 05:35:56.765950 systemd[1]: Created slice kubepods-besteffort-pod0fe95f24_e4d4_46f4_96fd_f36682a9f090.slice - libcontainer container kubepods-besteffort-pod0fe95f24_e4d4_46f4_96fd_f36682a9f090.slice.
Sep 9 05:35:56.835083 kubelet[2710]: I0909 05:35:56.833742 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fe95f24-e4d4-46f4-96fd-f36682a9f090-kube-proxy\") pod \"kube-proxy-wglwj\" (UID: \"0fe95f24-e4d4-46f4-96fd-f36682a9f090\") " pod="kube-system/kube-proxy-wglwj"
Sep 9 05:35:56.835083 kubelet[2710]: I0909 05:35:56.833809 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fe95f24-e4d4-46f4-96fd-f36682a9f090-xtables-lock\") pod \"kube-proxy-wglwj\" (UID: \"0fe95f24-e4d4-46f4-96fd-f36682a9f090\") " pod="kube-system/kube-proxy-wglwj"
Sep 9 05:35:56.835083 kubelet[2710]: I0909 05:35:56.833845 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fe95f24-e4d4-46f4-96fd-f36682a9f090-lib-modules\") pod \"kube-proxy-wglwj\" (UID: \"0fe95f24-e4d4-46f4-96fd-f36682a9f090\") " pod="kube-system/kube-proxy-wglwj"
Sep 9 05:35:56.835083 kubelet[2710]: I0909 05:35:56.833876 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdhsv\" (UniqueName: \"kubernetes.io/projected/0fe95f24-e4d4-46f4-96fd-f36682a9f090-kube-api-access-jdhsv\") pod \"kube-proxy-wglwj\" (UID: \"0fe95f24-e4d4-46f4-96fd-f36682a9f090\") " pod="kube-system/kube-proxy-wglwj"
Sep 9 05:35:56.942767 systemd[1]: Created slice kubepods-besteffort-pod7427ef99_9c09_44d9_b478_46df3ee8a5b9.slice - libcontainer container kubepods-besteffort-pod7427ef99_9c09_44d9_b478_46df3ee8a5b9.slice.
Sep 9 05:35:57.036293 kubelet[2710]: I0909 05:35:57.036124 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27hrk\" (UniqueName: \"kubernetes.io/projected/7427ef99-9c09-44d9-b478-46df3ee8a5b9-kube-api-access-27hrk\") pod \"tigera-operator-58fc44c59b-rwd66\" (UID: \"7427ef99-9c09-44d9-b478-46df3ee8a5b9\") " pod="tigera-operator/tigera-operator-58fc44c59b-rwd66"
Sep 9 05:35:57.036293 kubelet[2710]: I0909 05:35:57.036199 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7427ef99-9c09-44d9-b478-46df3ee8a5b9-var-lib-calico\") pod \"tigera-operator-58fc44c59b-rwd66\" (UID: \"7427ef99-9c09-44d9-b478-46df3ee8a5b9\") " pod="tigera-operator/tigera-operator-58fc44c59b-rwd66"
Sep 9 05:35:57.075039 kubelet[2710]: E0909 05:35:57.074984 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:57.076553 containerd[1574]: time="2025-09-09T05:35:57.076486000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wglwj,Uid:0fe95f24-e4d4-46f4-96fd-f36682a9f090,Namespace:kube-system,Attempt:0,}"
Sep 9 05:35:57.106347 containerd[1574]: time="2025-09-09T05:35:57.106188463Z" level=info msg="connecting to shim aef6880afbf2a4cd6b6d23116f6045ce8f9e948c4feb73418387b43a9c21a0a8" address="unix:///run/containerd/s/a6eb4b66e4a2a4b30b758278945b18911dc60efb29cc7ecae8becbcb75e408f1" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:35:57.148852 systemd[1]: Started cri-containerd-aef6880afbf2a4cd6b6d23116f6045ce8f9e948c4feb73418387b43a9c21a0a8.scope - libcontainer container aef6880afbf2a4cd6b6d23116f6045ce8f9e948c4feb73418387b43a9c21a0a8.
Sep 9 05:35:57.205547 containerd[1574]: time="2025-09-09T05:35:57.205466892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wglwj,Uid:0fe95f24-e4d4-46f4-96fd-f36682a9f090,Namespace:kube-system,Attempt:0,} returns sandbox id \"aef6880afbf2a4cd6b6d23116f6045ce8f9e948c4feb73418387b43a9c21a0a8\""
Sep 9 05:35:57.207204 kubelet[2710]: E0909 05:35:57.207169 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:57.212296 containerd[1574]: time="2025-09-09T05:35:57.212210452Z" level=info msg="CreateContainer within sandbox \"aef6880afbf2a4cd6b6d23116f6045ce8f9e948c4feb73418387b43a9c21a0a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 05:35:57.244865 containerd[1574]: time="2025-09-09T05:35:57.244629945Z" level=info msg="Container 03084f3b1bfbd7ef182da0af727f94cb15cc6f1fbc1ce28e95f6a7dcf4b76679: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:35:57.249539 containerd[1574]: time="2025-09-09T05:35:57.249452014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-rwd66,Uid:7427ef99-9c09-44d9-b478-46df3ee8a5b9,Namespace:tigera-operator,Attempt:0,}"
Sep 9 05:35:57.256405 containerd[1574]: time="2025-09-09T05:35:57.256344788Z" level=info msg="CreateContainer within sandbox \"aef6880afbf2a4cd6b6d23116f6045ce8f9e948c4feb73418387b43a9c21a0a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"03084f3b1bfbd7ef182da0af727f94cb15cc6f1fbc1ce28e95f6a7dcf4b76679\""
Sep 9 05:35:57.259457 containerd[1574]: time="2025-09-09T05:35:57.258954554Z" level=info msg="StartContainer for \"03084f3b1bfbd7ef182da0af727f94cb15cc6f1fbc1ce28e95f6a7dcf4b76679\""
Sep 9 05:35:57.261676 containerd[1574]: time="2025-09-09T05:35:57.261479674Z" level=info msg="connecting to shim 03084f3b1bfbd7ef182da0af727f94cb15cc6f1fbc1ce28e95f6a7dcf4b76679" address="unix:///run/containerd/s/a6eb4b66e4a2a4b30b758278945b18911dc60efb29cc7ecae8becbcb75e408f1" protocol=ttrpc version=3
Sep 9 05:35:57.282127 containerd[1574]: time="2025-09-09T05:35:57.282058701Z" level=info msg="connecting to shim 796af022731879a7e49da29236d36c062f01443742c3afca579ec0ad35537acd" address="unix:///run/containerd/s/24a6ccf49ae5122ca975134650bae68e9277c6d680291093df59d70d76fe501d" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:35:57.301802 systemd[1]: Started cri-containerd-03084f3b1bfbd7ef182da0af727f94cb15cc6f1fbc1ce28e95f6a7dcf4b76679.scope - libcontainer container 03084f3b1bfbd7ef182da0af727f94cb15cc6f1fbc1ce28e95f6a7dcf4b76679.
Sep 9 05:35:57.333804 systemd[1]: Started cri-containerd-796af022731879a7e49da29236d36c062f01443742c3afca579ec0ad35537acd.scope - libcontainer container 796af022731879a7e49da29236d36c062f01443742c3afca579ec0ad35537acd.
Sep 9 05:35:57.374199 containerd[1574]: time="2025-09-09T05:35:57.374154667Z" level=info msg="StartContainer for \"03084f3b1bfbd7ef182da0af727f94cb15cc6f1fbc1ce28e95f6a7dcf4b76679\" returns successfully"
Sep 9 05:35:57.437745 containerd[1574]: time="2025-09-09T05:35:57.437664740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-rwd66,Uid:7427ef99-9c09-44d9-b478-46df3ee8a5b9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"796af022731879a7e49da29236d36c062f01443742c3afca579ec0ad35537acd\""
Sep 9 05:35:57.443648 containerd[1574]: time="2025-09-09T05:35:57.443597820Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 9 05:35:57.448432 systemd-resolved[1467]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Sep 9 05:35:57.958975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4134793010.mount: Deactivated successfully.
Sep 9 05:35:58.375476 kubelet[2710]: E0909 05:35:58.375338 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:35:58.390407 kubelet[2710]: I0909 05:35:58.390310 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wglwj" podStartSLOduration=2.390284132 podStartE2EDuration="2.390284132s" podCreationTimestamp="2025-09-09 05:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:35:58.389185323 +0000 UTC m=+8.244019350" watchObservedRunningTime="2025-09-09 05:35:58.390284132 +0000 UTC m=+8.245118166"
Sep 9 05:35:58.803781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304525456.mount: Deactivated successfully.
Sep 9 05:35:59.669137 containerd[1574]: time="2025-09-09T05:35:59.669064313Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:59.670578 containerd[1574]: time="2025-09-09T05:35:59.670175551Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 9 05:35:59.671534 containerd[1574]: time="2025-09-09T05:35:59.671417135Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:59.673907 containerd[1574]: time="2025-09-09T05:35:59.673827615Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:35:59.675121 containerd[1574]: time="2025-09-09T05:35:59.674846012Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.230986272s"
Sep 9 05:35:59.675121 containerd[1574]: time="2025-09-09T05:35:59.674902597Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 9 05:35:59.681764 containerd[1574]: time="2025-09-09T05:35:59.681262202Z" level=info msg="CreateContainer within sandbox \"796af022731879a7e49da29236d36c062f01443742c3afca579ec0ad35537acd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 9 05:35:59.692371 containerd[1574]: time="2025-09-09T05:35:59.692311742Z" level=info msg="Container 87518de536da36e7ccade1feade787b47adc4cf35629a1519c4e5a898535f4bc: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:35:59.696261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount449233987.mount: Deactivated successfully.
Sep 9 05:35:59.711721 containerd[1574]: time="2025-09-09T05:35:59.711663218Z" level=info msg="CreateContainer within sandbox \"796af022731879a7e49da29236d36c062f01443742c3afca579ec0ad35537acd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"87518de536da36e7ccade1feade787b47adc4cf35629a1519c4e5a898535f4bc\""
Sep 9 05:35:59.713057 containerd[1574]: time="2025-09-09T05:35:59.712501972Z" level=info msg="StartContainer for \"87518de536da36e7ccade1feade787b47adc4cf35629a1519c4e5a898535f4bc\""
Sep 9 05:35:59.715897 containerd[1574]: time="2025-09-09T05:35:59.715847876Z" level=info msg="connecting to shim 87518de536da36e7ccade1feade787b47adc4cf35629a1519c4e5a898535f4bc" address="unix:///run/containerd/s/24a6ccf49ae5122ca975134650bae68e9277c6d680291093df59d70d76fe501d" protocol=ttrpc version=3
Sep 9 05:35:59.746844 systemd[1]: Started cri-containerd-87518de536da36e7ccade1feade787b47adc4cf35629a1519c4e5a898535f4bc.scope - libcontainer container 87518de536da36e7ccade1feade787b47adc4cf35629a1519c4e5a898535f4bc.
Sep 9 05:35:59.788793 containerd[1574]: time="2025-09-09T05:35:59.788734309Z" level=info msg="StartContainer for \"87518de536da36e7ccade1feade787b47adc4cf35629a1519c4e5a898535f4bc\" returns successfully"
Sep 9 05:36:00.402165 kubelet[2710]: I0909 05:36:00.402068 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-rwd66" podStartSLOduration=2.1659209329999998 podStartE2EDuration="4.402045226s" podCreationTimestamp="2025-09-09 05:35:56 +0000 UTC" firstStartedPulling="2025-09-09 05:35:57.4403852 +0000 UTC m=+7.295219203" lastFinishedPulling="2025-09-09 05:35:59.676509503 +0000 UTC m=+9.531343496" observedRunningTime="2025-09-09 05:36:00.40129623 +0000 UTC m=+10.256130249" watchObservedRunningTime="2025-09-09 05:36:00.402045226 +0000 UTC m=+10.256879265"
Sep 9 05:36:01.938413 kubelet[2710]: E0909 05:36:01.938341 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:02.277646 kubelet[2710]: E0909 05:36:02.277184 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:04.139079 kubelet[2710]: E0909 05:36:04.138366 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:06.356777 update_engine[1554]: I20250909 05:36:06.356660 1554 update_attempter.cc:509] Updating boot flags...
Sep 9 05:36:07.077490 sudo[1788]: pam_unix(sudo:session): session closed for user root
Sep 9 05:36:07.081621 sshd[1787]: Connection closed by 139.178.89.65 port 39822
Sep 9 05:36:07.083714 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
Sep 9 05:36:07.088310 systemd[1]: sshd@6-143.198.157.2:22-139.178.89.65:39822.service: Deactivated successfully.
Sep 9 05:36:07.094808 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 05:36:07.095809 systemd[1]: session-7.scope: Consumed 6.198s CPU time, 172.8M memory peak.
Sep 9 05:36:07.099591 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit.
Sep 9 05:36:07.105775 systemd-logind[1552]: Removed session 7.
Sep 9 05:36:12.286150 systemd[1]: Created slice kubepods-besteffort-pod1ccae83c_4a21_4e2e_8fe8_8727200565b3.slice - libcontainer container kubepods-besteffort-pod1ccae83c_4a21_4e2e_8fe8_8727200565b3.slice.
Sep 9 05:36:12.352791 kubelet[2710]: I0909 05:36:12.352706 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwjsh\" (UniqueName: \"kubernetes.io/projected/1ccae83c-4a21-4e2e-8fe8-8727200565b3-kube-api-access-jwjsh\") pod \"calico-typha-69c8f97b8b-7wmmb\" (UID: \"1ccae83c-4a21-4e2e-8fe8-8727200565b3\") " pod="calico-system/calico-typha-69c8f97b8b-7wmmb"
Sep 9 05:36:12.352791 kubelet[2710]: I0909 05:36:12.352755 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ccae83c-4a21-4e2e-8fe8-8727200565b3-tigera-ca-bundle\") pod \"calico-typha-69c8f97b8b-7wmmb\" (UID: \"1ccae83c-4a21-4e2e-8fe8-8727200565b3\") " pod="calico-system/calico-typha-69c8f97b8b-7wmmb"
Sep 9 05:36:12.352791 kubelet[2710]: I0909 05:36:12.352772 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1ccae83c-4a21-4e2e-8fe8-8727200565b3-typha-certs\") pod \"calico-typha-69c8f97b8b-7wmmb\" (UID: \"1ccae83c-4a21-4e2e-8fe8-8727200565b3\") " pod="calico-system/calico-typha-69c8f97b8b-7wmmb"
Sep 9 05:36:12.603242 kubelet[2710]: E0909 05:36:12.602452 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:12.604793 containerd[1574]: time="2025-09-09T05:36:12.604579676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c8f97b8b-7wmmb,Uid:1ccae83c-4a21-4e2e-8fe8-8727200565b3,Namespace:calico-system,Attempt:0,}"
Sep 9 05:36:12.646034 containerd[1574]: time="2025-09-09T05:36:12.645786692Z" level=info msg="connecting to shim 4b5acfc72cab231c531aeed9b697f8a3dea9869511abb828baf782ca7da9bf9b" address="unix:///run/containerd/s/41adf2afe67a8c4d8027e617c35f94d4411dfbee65f3114644bf9ce48195e887" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:36:12.694831 systemd[1]: Started cri-containerd-4b5acfc72cab231c531aeed9b697f8a3dea9869511abb828baf782ca7da9bf9b.scope - libcontainer container 4b5acfc72cab231c531aeed9b697f8a3dea9869511abb828baf782ca7da9bf9b.
Sep 9 05:36:12.738650 systemd[1]: Created slice kubepods-besteffort-pod3acab5fa_ecad_4661_93d8_b9a0b7747af5.slice - libcontainer container kubepods-besteffort-pod3acab5fa_ecad_4661_93d8_b9a0b7747af5.slice.
Sep 9 05:36:12.755549 kubelet[2710]: I0909 05:36:12.755366 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3acab5fa-ecad-4661-93d8-b9a0b7747af5-cni-log-dir\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.755549 kubelet[2710]: I0909 05:36:12.755405 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3acab5fa-ecad-4661-93d8-b9a0b7747af5-cni-net-dir\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.755549 kubelet[2710]: I0909 05:36:12.755426 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3acab5fa-ecad-4661-93d8-b9a0b7747af5-lib-modules\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.755549 kubelet[2710]: I0909 05:36:12.755443 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3acab5fa-ecad-4661-93d8-b9a0b7747af5-var-lib-calico\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.755549 kubelet[2710]: I0909 05:36:12.755461 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3acab5fa-ecad-4661-93d8-b9a0b7747af5-node-certs\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.755819 kubelet[2710]: I0909 05:36:12.755475 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3acab5fa-ecad-4661-93d8-b9a0b7747af5-policysync\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.755819 kubelet[2710]: I0909 05:36:12.755491 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3acab5fa-ecad-4661-93d8-b9a0b7747af5-cni-bin-dir\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.757788 kubelet[2710]: I0909 05:36:12.757574 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3acab5fa-ecad-4661-93d8-b9a0b7747af5-flexvol-driver-host\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.757788 kubelet[2710]: I0909 05:36:12.757631 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3acab5fa-ecad-4661-93d8-b9a0b7747af5-var-run-calico\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.757788 kubelet[2710]: I0909 05:36:12.757650 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3acab5fa-ecad-4661-93d8-b9a0b7747af5-tigera-ca-bundle\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.757788 kubelet[2710]: I0909 05:36:12.757668 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3acab5fa-ecad-4661-93d8-b9a0b7747af5-xtables-lock\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.757788 kubelet[2710]: I0909 05:36:12.757685 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74lz\" (UniqueName: \"kubernetes.io/projected/3acab5fa-ecad-4661-93d8-b9a0b7747af5-kube-api-access-k74lz\") pod \"calico-node-qm6gs\" (UID: \"3acab5fa-ecad-4661-93d8-b9a0b7747af5\") " pod="calico-system/calico-node-qm6gs"
Sep 9 05:36:12.837242 containerd[1574]: time="2025-09-09T05:36:12.837008206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c8f97b8b-7wmmb,Uid:1ccae83c-4a21-4e2e-8fe8-8727200565b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b5acfc72cab231c531aeed9b697f8a3dea9869511abb828baf782ca7da9bf9b\""
Sep 9 05:36:12.844997 kubelet[2710]: E0909 05:36:12.844955 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:12.851630 containerd[1574]: time="2025-09-09T05:36:12.851589408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 9 05:36:12.872657 kubelet[2710]: E0909 05:36:12.872556 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:12.872657 kubelet[2710]: W0909 05:36:12.872591 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:12.872793 kubelet[2710]: E0909 05:36:12.872650 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:12.881657 kubelet[2710]: E0909 05:36:12.881609 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:12.881657 kubelet[2710]: W0909 05:36:12.881637 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:12.881879 kubelet[2710]: E0909 05:36:12.881691 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:12.985820 kubelet[2710]: E0909 05:36:12.985760 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c78xj" podUID="a830bdbb-cfd6-4f41-8465-5085b9d24e9d"
Sep 9 05:36:13.046392 containerd[1574]: time="2025-09-09T05:36:13.046352171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qm6gs,Uid:3acab5fa-ecad-4661-93d8-b9a0b7747af5,Namespace:calico-system,Attempt:0,}"
Sep 9 05:36:13.048691 kubelet[2710]: E0909 05:36:13.047582 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.048691 kubelet[2710]: W0909 05:36:13.047652 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.048691 kubelet[2710]: E0909 05:36:13.047676 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.048691 kubelet[2710]: E0909 05:36:13.048067 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.048691 kubelet[2710]: W0909 05:36:13.048078 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.048691 kubelet[2710]: E0909 05:36:13.048115 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.048691 kubelet[2710]: E0909 05:36:13.048322 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.048691 kubelet[2710]: W0909 05:36:13.048334 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.048691 kubelet[2710]: E0909 05:36:13.048382 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.049031 kubelet[2710]: E0909 05:36:13.048836 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.049031 kubelet[2710]: W0909 05:36:13.048853 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.049078 kubelet[2710]: E0909 05:36:13.049031 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.051931 kubelet[2710]: E0909 05:36:13.051900 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.051931 kubelet[2710]: W0909 05:36:13.051922 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.052076 kubelet[2710]: E0909 05:36:13.051941 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.052281 kubelet[2710]: E0909 05:36:13.052238 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.052281 kubelet[2710]: W0909 05:36:13.052257 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.052281 kubelet[2710]: E0909 05:36:13.052271 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.052625 kubelet[2710]: E0909 05:36:13.052601 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.052722 kubelet[2710]: W0909 05:36:13.052664 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.052722 kubelet[2710]: E0909 05:36:13.052681 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.053676 kubelet[2710]: E0909 05:36:13.053651 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.053676 kubelet[2710]: W0909 05:36:13.053673 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.053790 kubelet[2710]: E0909 05:36:13.053690 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.054653 kubelet[2710]: E0909 05:36:13.054196 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.054653 kubelet[2710]: W0909 05:36:13.054233 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.054653 kubelet[2710]: E0909 05:36:13.054250 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.054653 kubelet[2710]: E0909 05:36:13.054566 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.054653 kubelet[2710]: W0909 05:36:13.054585 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.054653 kubelet[2710]: E0909 05:36:13.054602 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.055580 kubelet[2710]: E0909 05:36:13.054882 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.055580 kubelet[2710]: W0909 05:36:13.054904 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.055580 kubelet[2710]: E0909 05:36:13.054919 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.055580 kubelet[2710]: E0909 05:36:13.055567 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.055580 kubelet[2710]: W0909 05:36:13.055582 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.055740 kubelet[2710]: E0909 05:36:13.055598 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.056388 kubelet[2710]: E0909 05:36:13.056365 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.056388 kubelet[2710]: W0909 05:36:13.056387 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.056467 kubelet[2710]: E0909 05:36:13.056402 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.056890 kubelet[2710]: E0909 05:36:13.056865 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.056890 kubelet[2710]: W0909 05:36:13.056888 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.056980 kubelet[2710]: E0909 05:36:13.056903 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.057704 kubelet[2710]: E0909 05:36:13.057668 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.057704 kubelet[2710]: W0909 05:36:13.057704 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.057798 kubelet[2710]: E0909 05:36:13.057719 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.058036 kubelet[2710]: E0909 05:36:13.058017 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.058075 kubelet[2710]: W0909 05:36:13.058037 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.058075 kubelet[2710]: E0909 05:36:13.058053 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:13.059074 kubelet[2710]: E0909 05:36:13.059052 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:13.059074 kubelet[2710]: W0909 05:36:13.059073 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:13.059156 kubelet[2710]: E0909 05:36:13.059089 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 9 05:36:13.059430 kubelet[2710]: E0909 05:36:13.059410 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:13.059467 kubelet[2710]: W0909 05:36:13.059432 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:13.059492 kubelet[2710]: E0909 05:36:13.059474 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:13.059817 kubelet[2710]: E0909 05:36:13.059799 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:13.059891 kubelet[2710]: W0909 05:36:13.059877 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:13.059920 kubelet[2710]: E0909 05:36:13.059898 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:13.060545 kubelet[2710]: E0909 05:36:13.060473 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:13.060545 kubelet[2710]: W0909 05:36:13.060489 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:13.060545 kubelet[2710]: E0909 05:36:13.060510 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:13.060867 kubelet[2710]: E0909 05:36:13.060849 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:13.060867 kubelet[2710]: W0909 05:36:13.060866 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:13.060929 kubelet[2710]: E0909 05:36:13.060879 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 9 05:36:13.060929 kubelet[2710]: I0909 05:36:13.060910 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a830bdbb-cfd6-4f41-8465-5085b9d24e9d-kubelet-dir\") pod \"csi-node-driver-c78xj\" (UID: \"a830bdbb-cfd6-4f41-8465-5085b9d24e9d\") " pod="calico-system/csi-node-driver-c78xj"
Sep 9 05:36:13.065387 kubelet[2710]: I0909 05:36:13.065209 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a830bdbb-cfd6-4f41-8465-5085b9d24e9d-registration-dir\") pod \"csi-node-driver-c78xj\" (UID: \"a830bdbb-cfd6-4f41-8465-5085b9d24e9d\") " pod="calico-system/csi-node-driver-c78xj"
Sep 9 05:36:13.066735 kubelet[2710]: I0909 05:36:13.066096 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a830bdbb-cfd6-4f41-8465-5085b9d24e9d-socket-dir\") pod \"csi-node-driver-c78xj\" (UID: \"a830bdbb-cfd6-4f41-8465-5085b9d24e9d\") " pod="calico-system/csi-node-driver-c78xj"
Sep 9 05:36:13.066735 kubelet[2710]: I0909 05:36:13.066633 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbj5\" (UniqueName: \"kubernetes.io/projected/a830bdbb-cfd6-4f41-8465-5085b9d24e9d-kube-api-access-jxbj5\") pod \"csi-node-driver-c78xj\" (UID: \"a830bdbb-cfd6-4f41-8465-5085b9d24e9d\") " pod="calico-system/csi-node-driver-c78xj"
Sep 9 05:36:13.067717 kubelet[2710]: I0909 05:36:13.067698 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a830bdbb-cfd6-4f41-8465-5085b9d24e9d-varrun\") pod \"csi-node-driver-c78xj\" (UID: \"a830bdbb-cfd6-4f41-8465-5085b9d24e9d\") " pod="calico-system/csi-node-driver-c78xj"
Sep 9 05:36:13.092151 containerd[1574]: time="2025-09-09T05:36:13.092100876Z" level=info msg="connecting to shim efb57ebe6bc0b235d2555711a315651fe3c3afd67f351fe7edae439c550534d7" address="unix:///run/containerd/s/a04c6f6a0e7466d224a655e436255807052ef0c7e54e30beb71af2aad72d67b5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:13.144760 systemd[1]: Started cri-containerd-efb57ebe6bc0b235d2555711a315651fe3c3afd67f351fe7edae439c550534d7.scope - libcontainer container efb57ebe6bc0b235d2555711a315651fe3c3afd67f351fe7edae439c550534d7.
Error: unexpected end of JSON input" Sep 9 05:36:13.208292 kubelet[2710]: E0909 05:36:13.208265 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:13.209749 kubelet[2710]: W0909 05:36:13.208944 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:13.209749 kubelet[2710]: E0909 05:36:13.208975 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:13.233845 containerd[1574]: time="2025-09-09T05:36:13.233801229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qm6gs,Uid:3acab5fa-ecad-4661-93d8-b9a0b7747af5,Namespace:calico-system,Attempt:0,} returns sandbox id \"efb57ebe6bc0b235d2555711a315651fe3c3afd67f351fe7edae439c550534d7\"" Sep 9 05:36:14.150763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751409342.mount: Deactivated successfully. 
Sep 9 05:36:14.298740 kubelet[2710]: E0909 05:36:14.296993 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c78xj" podUID="a830bdbb-cfd6-4f41-8465-5085b9d24e9d" Sep 9 05:36:15.384922 containerd[1574]: time="2025-09-09T05:36:15.384864940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:15.386824 containerd[1574]: time="2025-09-09T05:36:15.386789127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 05:36:15.387610 containerd[1574]: time="2025-09-09T05:36:15.387583300Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:15.391092 containerd[1574]: time="2025-09-09T05:36:15.391039045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:15.391935 containerd[1574]: time="2025-09-09T05:36:15.391678665Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.539805654s" Sep 9 05:36:15.391935 containerd[1574]: time="2025-09-09T05:36:15.391842920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 05:36:15.394729 containerd[1574]: time="2025-09-09T05:36:15.394370947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 05:36:15.418695 containerd[1574]: time="2025-09-09T05:36:15.417486385Z" level=info msg="CreateContainer within sandbox \"4b5acfc72cab231c531aeed9b697f8a3dea9869511abb828baf782ca7da9bf9b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 05:36:15.430557 containerd[1574]: time="2025-09-09T05:36:15.429535952Z" level=info msg="Container ecf324d16f36c4b0e8360eb1f7e1e72939e0022b815305403c461380c8a143ce: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:15.456280 containerd[1574]: time="2025-09-09T05:36:15.456200224Z" level=info msg="CreateContainer within sandbox \"4b5acfc72cab231c531aeed9b697f8a3dea9869511abb828baf782ca7da9bf9b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ecf324d16f36c4b0e8360eb1f7e1e72939e0022b815305403c461380c8a143ce\"" Sep 9 05:36:15.458043 containerd[1574]: time="2025-09-09T05:36:15.457986569Z" level=info msg="StartContainer for \"ecf324d16f36c4b0e8360eb1f7e1e72939e0022b815305403c461380c8a143ce\"" Sep 9 05:36:15.462048 containerd[1574]: time="2025-09-09T05:36:15.461945067Z" level=info msg="connecting to shim ecf324d16f36c4b0e8360eb1f7e1e72939e0022b815305403c461380c8a143ce" address="unix:///run/containerd/s/41adf2afe67a8c4d8027e617c35f94d4411dfbee65f3114644bf9ce48195e887" protocol=ttrpc version=3 Sep 9 05:36:15.509986 systemd[1]: Started cri-containerd-ecf324d16f36c4b0e8360eb1f7e1e72939e0022b815305403c461380c8a143ce.scope - libcontainer container ecf324d16f36c4b0e8360eb1f7e1e72939e0022b815305403c461380c8a143ce. 
Sep 9 05:36:15.585603 containerd[1574]: time="2025-09-09T05:36:15.584766362Z" level=info msg="StartContainer for \"ecf324d16f36c4b0e8360eb1f7e1e72939e0022b815305403c461380c8a143ce\" returns successfully" Sep 9 05:36:16.296777 kubelet[2710]: E0909 05:36:16.296378 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c78xj" podUID="a830bdbb-cfd6-4f41-8465-5085b9d24e9d" Sep 9 05:36:16.442675 kubelet[2710]: E0909 05:36:16.442623 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:16.460421 kubelet[2710]: I0909 05:36:16.460332 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69c8f97b8b-7wmmb" podStartSLOduration=1.9138952740000001 podStartE2EDuration="4.460297626s" podCreationTimestamp="2025-09-09 05:36:12 +0000 UTC" firstStartedPulling="2025-09-09 05:36:12.846791863 +0000 UTC m=+22.701625858" lastFinishedPulling="2025-09-09 05:36:15.393194216 +0000 UTC m=+25.248028210" observedRunningTime="2025-09-09 05:36:16.459608188 +0000 UTC m=+26.314442210" watchObservedRunningTime="2025-09-09 05:36:16.460297626 +0000 UTC m=+26.315131638" Sep 9 05:36:16.490057 kubelet[2710]: E0909 05:36:16.489988 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.490462 kubelet[2710]: W0909 05:36:16.490027 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.490462 kubelet[2710]: E0909 05:36:16.490370 2710 plugins.go:691] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.490963 kubelet[2710]: E0909 05:36:16.490901 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.490963 kubelet[2710]: W0909 05:36:16.490914 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.490963 kubelet[2710]: E0909 05:36:16.490928 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.491301 kubelet[2710]: E0909 05:36:16.491275 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.491419 kubelet[2710]: W0909 05:36:16.491287 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.491419 kubelet[2710]: E0909 05:36:16.491374 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.491741 kubelet[2710]: E0909 05:36:16.491708 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.491741 kubelet[2710]: W0909 05:36:16.491719 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.491876 kubelet[2710]: E0909 05:36:16.491818 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.492072 kubelet[2710]: E0909 05:36:16.492055 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.492190 kubelet[2710]: W0909 05:36:16.492126 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.492190 kubelet[2710]: E0909 05:36:16.492145 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.492443 kubelet[2710]: E0909 05:36:16.492429 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.492550 kubelet[2710]: W0909 05:36:16.492490 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.492550 kubelet[2710]: E0909 05:36:16.492502 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.492923 kubelet[2710]: E0909 05:36:16.492869 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.492923 kubelet[2710]: W0909 05:36:16.492879 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.492923 kubelet[2710]: E0909 05:36:16.492888 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.493282 kubelet[2710]: E0909 05:36:16.493212 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.493282 kubelet[2710]: W0909 05:36:16.493222 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.493282 kubelet[2710]: E0909 05:36:16.493232 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.493654 kubelet[2710]: E0909 05:36:16.493598 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.493654 kubelet[2710]: W0909 05:36:16.493607 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.493654 kubelet[2710]: E0909 05:36:16.493619 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.493948 kubelet[2710]: E0909 05:36:16.493897 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.493948 kubelet[2710]: W0909 05:36:16.493908 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.493948 kubelet[2710]: E0909 05:36:16.493916 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.494309 kubelet[2710]: E0909 05:36:16.494239 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.494309 kubelet[2710]: W0909 05:36:16.494254 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.494309 kubelet[2710]: E0909 05:36:16.494266 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.494716 kubelet[2710]: E0909 05:36:16.494647 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.494716 kubelet[2710]: W0909 05:36:16.494659 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.494716 kubelet[2710]: E0909 05:36:16.494669 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.495035 kubelet[2710]: E0909 05:36:16.494982 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.495035 kubelet[2710]: W0909 05:36:16.494992 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.495035 kubelet[2710]: E0909 05:36:16.495001 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.495353 kubelet[2710]: E0909 05:36:16.495295 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.495353 kubelet[2710]: W0909 05:36:16.495304 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.495353 kubelet[2710]: E0909 05:36:16.495314 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.495778 kubelet[2710]: E0909 05:36:16.495669 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.495778 kubelet[2710]: W0909 05:36:16.495681 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.495778 kubelet[2710]: E0909 05:36:16.495694 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.500514 kubelet[2710]: E0909 05:36:16.500355 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.500514 kubelet[2710]: W0909 05:36:16.500388 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.500514 kubelet[2710]: E0909 05:36:16.500412 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.501386 kubelet[2710]: E0909 05:36:16.501274 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.501386 kubelet[2710]: W0909 05:36:16.501308 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.501386 kubelet[2710]: E0909 05:36:16.501336 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.501685 kubelet[2710]: E0909 05:36:16.501561 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.501685 kubelet[2710]: W0909 05:36:16.501580 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.501685 kubelet[2710]: E0909 05:36:16.501599 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.502111 kubelet[2710]: E0909 05:36:16.502005 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.502111 kubelet[2710]: W0909 05:36:16.502021 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.502111 kubelet[2710]: E0909 05:36:16.502049 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.502741 kubelet[2710]: E0909 05:36:16.502636 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.502741 kubelet[2710]: W0909 05:36:16.502658 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.502741 kubelet[2710]: E0909 05:36:16.502686 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.503323 kubelet[2710]: E0909 05:36:16.503218 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.503323 kubelet[2710]: W0909 05:36:16.503243 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.503323 kubelet[2710]: E0909 05:36:16.503293 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.503854 kubelet[2710]: E0909 05:36:16.503774 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.503854 kubelet[2710]: W0909 05:36:16.503792 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.503854 kubelet[2710]: E0909 05:36:16.503825 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.504320 kubelet[2710]: E0909 05:36:16.504257 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.504320 kubelet[2710]: W0909 05:36:16.504272 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.504320 kubelet[2710]: E0909 05:36:16.504312 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.504914 kubelet[2710]: E0909 05:36:16.504847 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.504914 kubelet[2710]: W0909 05:36:16.504866 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.505156 kubelet[2710]: E0909 05:36:16.504891 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.505240 kubelet[2710]: E0909 05:36:16.505208 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.505289 kubelet[2710]: W0909 05:36:16.505240 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.505289 kubelet[2710]: E0909 05:36:16.505260 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.505663 kubelet[2710]: E0909 05:36:16.505544 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.505663 kubelet[2710]: W0909 05:36:16.505561 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.505663 kubelet[2710]: E0909 05:36:16.505586 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.506074 kubelet[2710]: E0909 05:36:16.506043 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.506318 kubelet[2710]: W0909 05:36:16.506247 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.506318 kubelet[2710]: E0909 05:36:16.506282 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.506671 kubelet[2710]: E0909 05:36:16.506644 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.506671 kubelet[2710]: W0909 05:36:16.506666 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.506671 kubelet[2710]: E0909 05:36:16.506692 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.506975 kubelet[2710]: E0909 05:36:16.506914 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.506975 kubelet[2710]: W0909 05:36:16.506926 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.506975 kubelet[2710]: E0909 05:36:16.506948 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.507146 kubelet[2710]: E0909 05:36:16.507128 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.507146 kubelet[2710]: W0909 05:36:16.507145 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.507312 kubelet[2710]: E0909 05:36:16.507177 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.507392 kubelet[2710]: E0909 05:36:16.507333 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.507392 kubelet[2710]: W0909 05:36:16.507342 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.507392 kubelet[2710]: E0909 05:36:16.507370 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.507687 kubelet[2710]: E0909 05:36:16.507666 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.507687 kubelet[2710]: W0909 05:36:16.507686 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.507787 kubelet[2710]: E0909 05:36:16.507701 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:16.508342 kubelet[2710]: E0909 05:36:16.508318 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:16.508342 kubelet[2710]: W0909 05:36:16.508338 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:16.508475 kubelet[2710]: E0909 05:36:16.508355 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:16.871170 containerd[1574]: time="2025-09-09T05:36:16.871091248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:16.872668 containerd[1574]: time="2025-09-09T05:36:16.872600775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 05:36:16.873816 containerd[1574]: time="2025-09-09T05:36:16.873731671Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:16.877473 containerd[1574]: time="2025-09-09T05:36:16.877099915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:16.877961 containerd[1574]: time="2025-09-09T05:36:16.877883531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.482752952s" Sep 9 05:36:16.877961 containerd[1574]: time="2025-09-09T05:36:16.877927493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 05:36:16.881770 containerd[1574]: time="2025-09-09T05:36:16.881723861Z" level=info msg="CreateContainer within sandbox \"efb57ebe6bc0b235d2555711a315651fe3c3afd67f351fe7edae439c550534d7\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 05:36:16.896451 containerd[1574]: time="2025-09-09T05:36:16.896333738Z" level=info msg="Container d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:16.910632 containerd[1574]: time="2025-09-09T05:36:16.910488804Z" level=info msg="CreateContainer within sandbox \"efb57ebe6bc0b235d2555711a315651fe3c3afd67f351fe7edae439c550534d7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b\"" Sep 9 05:36:16.911708 containerd[1574]: time="2025-09-09T05:36:16.911665731Z" level=info msg="StartContainer for \"d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b\"" Sep 9 05:36:16.915670 containerd[1574]: time="2025-09-09T05:36:16.915446696Z" level=info msg="connecting to shim d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b" address="unix:///run/containerd/s/a04c6f6a0e7466d224a655e436255807052ef0c7e54e30beb71af2aad72d67b5" protocol=ttrpc version=3 Sep 9 05:36:16.965791 systemd[1]: Started cri-containerd-d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b.scope - libcontainer container d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b. Sep 9 05:36:17.025866 containerd[1574]: time="2025-09-09T05:36:17.025737661Z" level=info msg="StartContainer for \"d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b\" returns successfully" Sep 9 05:36:17.044699 systemd[1]: cri-containerd-d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b.scope: Deactivated successfully. 
Sep 9 05:36:17.059094 containerd[1574]: time="2025-09-09T05:36:17.058998687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b\" id:\"d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b\" pid:3391 exited_at:{seconds:1757396177 nanos:48537191}" Sep 9 05:36:17.059513 containerd[1574]: time="2025-09-09T05:36:17.059044165Z" level=info msg="received exit event container_id:\"d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b\" id:\"d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b\" pid:3391 exited_at:{seconds:1757396177 nanos:48537191}" Sep 9 05:36:17.109352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d025f54e560a76d6152ca76dbcd376075f4be9a88e4e0f9b1cf000f39d2f4b5b-rootfs.mount: Deactivated successfully. Sep 9 05:36:17.446897 kubelet[2710]: I0909 05:36:17.446818 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:36:17.448808 kubelet[2710]: E0909 05:36:17.448649 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:17.449831 containerd[1574]: time="2025-09-09T05:36:17.449781404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 05:36:18.298307 kubelet[2710]: E0909 05:36:18.298229 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c78xj" podUID="a830bdbb-cfd6-4f41-8465-5085b9d24e9d" Sep 9 05:36:18.617462 kubelet[2710]: I0909 05:36:18.617079 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:36:18.619092 kubelet[2710]: E0909 05:36:18.619027 2710 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:19.456608 kubelet[2710]: E0909 05:36:19.455500 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:20.297673 kubelet[2710]: E0909 05:36:20.297594 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c78xj" podUID="a830bdbb-cfd6-4f41-8465-5085b9d24e9d" Sep 9 05:36:20.883305 containerd[1574]: time="2025-09-09T05:36:20.883232606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:20.884267 containerd[1574]: time="2025-09-09T05:36:20.884202114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 05:36:20.884850 containerd[1574]: time="2025-09-09T05:36:20.884807276Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:20.886896 containerd[1574]: time="2025-09-09T05:36:20.886856087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:20.887853 containerd[1574]: time="2025-09-09T05:36:20.887737897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.437904911s" Sep 9 05:36:20.887853 containerd[1574]: time="2025-09-09T05:36:20.887768381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 05:36:20.892211 containerd[1574]: time="2025-09-09T05:36:20.892000380Z" level=info msg="CreateContainer within sandbox \"efb57ebe6bc0b235d2555711a315651fe3c3afd67f351fe7edae439c550534d7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 05:36:20.903850 containerd[1574]: time="2025-09-09T05:36:20.903797674Z" level=info msg="Container dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:20.913413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076790645.mount: Deactivated successfully. 
Sep 9 05:36:20.921365 containerd[1574]: time="2025-09-09T05:36:20.921304781Z" level=info msg="CreateContainer within sandbox \"efb57ebe6bc0b235d2555711a315651fe3c3afd67f351fe7edae439c550534d7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb\"" Sep 9 05:36:20.923606 containerd[1574]: time="2025-09-09T05:36:20.922584716Z" level=info msg="StartContainer for \"dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb\"" Sep 9 05:36:20.926651 containerd[1574]: time="2025-09-09T05:36:20.926486244Z" level=info msg="connecting to shim dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb" address="unix:///run/containerd/s/a04c6f6a0e7466d224a655e436255807052ef0c7e54e30beb71af2aad72d67b5" protocol=ttrpc version=3 Sep 9 05:36:20.976907 systemd[1]: Started cri-containerd-dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb.scope - libcontainer container dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb. Sep 9 05:36:21.058858 containerd[1574]: time="2025-09-09T05:36:21.058795920Z" level=info msg="StartContainer for \"dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb\" returns successfully" Sep 9 05:36:21.698793 systemd[1]: cri-containerd-dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb.scope: Deactivated successfully. Sep 9 05:36:21.699124 systemd[1]: cri-containerd-dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb.scope: Consumed 621ms CPU time, 163.7M memory peak, 6.4M read from disk, 171.3M written to disk. 
Sep 9 05:36:21.701417 containerd[1574]: time="2025-09-09T05:36:21.701344676Z" level=info msg="received exit event container_id:\"dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb\" id:\"dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb\" pid:3452 exited_at:{seconds:1757396181 nanos:700930920}" Sep 9 05:36:21.702587 containerd[1574]: time="2025-09-09T05:36:21.702532970Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb\" id:\"dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb\" pid:3452 exited_at:{seconds:1757396181 nanos:700930920}" Sep 9 05:36:21.757055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd47504ce1a9011e21d4b576beb540933b852ecaa1fe6ff30454afa345a8f2cb-rootfs.mount: Deactivated successfully. Sep 9 05:36:21.790757 kubelet[2710]: I0909 05:36:21.790698 2710 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 05:36:21.848843 kubelet[2710]: I0909 05:36:21.848781 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f-config-volume\") pod \"coredns-7c65d6cfc9-fmvjh\" (UID: \"d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f\") " pod="kube-system/coredns-7c65d6cfc9-fmvjh" Sep 9 05:36:21.848843 kubelet[2710]: I0909 05:36:21.848823 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b161cf8-ae97-452d-ac9c-603e0edd6644-config-volume\") pod \"coredns-7c65d6cfc9-xfkng\" (UID: \"5b161cf8-ae97-452d-ac9c-603e0edd6644\") " pod="kube-system/coredns-7c65d6cfc9-xfkng" Sep 9 05:36:21.849071 kubelet[2710]: I0909 05:36:21.848856 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmdxn\" (UniqueName: 
\"kubernetes.io/projected/d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f-kube-api-access-wmdxn\") pod \"coredns-7c65d6cfc9-fmvjh\" (UID: \"d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f\") " pod="kube-system/coredns-7c65d6cfc9-fmvjh" Sep 9 05:36:21.849071 kubelet[2710]: I0909 05:36:21.848875 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/52e91309-f573-4708-b86d-2973239db1d7-goldmane-key-pair\") pod \"goldmane-7988f88666-68l7r\" (UID: \"52e91309-f573-4708-b86d-2973239db1d7\") " pod="calico-system/goldmane-7988f88666-68l7r" Sep 9 05:36:21.849071 kubelet[2710]: I0909 05:36:21.848894 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52e91309-f573-4708-b86d-2973239db1d7-config\") pod \"goldmane-7988f88666-68l7r\" (UID: \"52e91309-f573-4708-b86d-2973239db1d7\") " pod="calico-system/goldmane-7988f88666-68l7r" Sep 9 05:36:21.849071 kubelet[2710]: I0909 05:36:21.848910 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52e91309-f573-4708-b86d-2973239db1d7-goldmane-ca-bundle\") pod \"goldmane-7988f88666-68l7r\" (UID: \"52e91309-f573-4708-b86d-2973239db1d7\") " pod="calico-system/goldmane-7988f88666-68l7r" Sep 9 05:36:21.849071 kubelet[2710]: I0909 05:36:21.848926 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpg5x\" (UniqueName: \"kubernetes.io/projected/52e91309-f573-4708-b86d-2973239db1d7-kube-api-access-rpg5x\") pod \"goldmane-7988f88666-68l7r\" (UID: \"52e91309-f573-4708-b86d-2973239db1d7\") " pod="calico-system/goldmane-7988f88666-68l7r" Sep 9 05:36:21.850953 kubelet[2710]: I0909 05:36:21.848948 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-txvqv\" (UniqueName: \"kubernetes.io/projected/5b161cf8-ae97-452d-ac9c-603e0edd6644-kube-api-access-txvqv\") pod \"coredns-7c65d6cfc9-xfkng\" (UID: \"5b161cf8-ae97-452d-ac9c-603e0edd6644\") " pod="kube-system/coredns-7c65d6cfc9-xfkng" Sep 9 05:36:21.859237 systemd[1]: Created slice kubepods-burstable-podd1a049d3_cfe6_4f6b_9ab6_0ed390e82b0f.slice - libcontainer container kubepods-burstable-podd1a049d3_cfe6_4f6b_9ab6_0ed390e82b0f.slice. Sep 9 05:36:21.878996 systemd[1]: Created slice kubepods-burstable-pod5b161cf8_ae97_452d_ac9c_603e0edd6644.slice - libcontainer container kubepods-burstable-pod5b161cf8_ae97_452d_ac9c_603e0edd6644.slice. Sep 9 05:36:21.892002 systemd[1]: Created slice kubepods-besteffort-pod52e91309_f573_4708_b86d_2973239db1d7.slice - libcontainer container kubepods-besteffort-pod52e91309_f573_4708_b86d_2973239db1d7.slice. Sep 9 05:36:21.907803 systemd[1]: Created slice kubepods-besteffort-pod14bedfcf_cf8c_4398_9812_3a0692728e52.slice - libcontainer container kubepods-besteffort-pod14bedfcf_cf8c_4398_9812_3a0692728e52.slice. Sep 9 05:36:21.925311 systemd[1]: Created slice kubepods-besteffort-pod2bf3b363_0425_4282_9c8a_2f3d556d118b.slice - libcontainer container kubepods-besteffort-pod2bf3b363_0425_4282_9c8a_2f3d556d118b.slice. Sep 9 05:36:21.937405 systemd[1]: Created slice kubepods-besteffort-pod07ad84ef_593a_4953_8004_2160c7e58b20.slice - libcontainer container kubepods-besteffort-pod07ad84ef_593a_4953_8004_2160c7e58b20.slice. 
Sep 9 05:36:21.951247 kubelet[2710]: I0909 05:36:21.950223 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms85w\" (UniqueName: \"kubernetes.io/projected/14bedfcf-cf8c-4398-9812-3a0692728e52-kube-api-access-ms85w\") pod \"calico-apiserver-5cb8568b76-29tg6\" (UID: \"14bedfcf-cf8c-4398-9812-3a0692728e52\") " pod="calico-apiserver/calico-apiserver-5cb8568b76-29tg6" Sep 9 05:36:21.951247 kubelet[2710]: I0909 05:36:21.950334 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbfj2\" (UniqueName: \"kubernetes.io/projected/07ad84ef-593a-4953-8004-2160c7e58b20-kube-api-access-pbfj2\") pod \"whisker-5d794b6796-l5trr\" (UID: \"07ad84ef-593a-4953-8004-2160c7e58b20\") " pod="calico-system/whisker-5d794b6796-l5trr" Sep 9 05:36:21.951247 kubelet[2710]: I0909 05:36:21.950381 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/14bedfcf-cf8c-4398-9812-3a0692728e52-calico-apiserver-certs\") pod \"calico-apiserver-5cb8568b76-29tg6\" (UID: \"14bedfcf-cf8c-4398-9812-3a0692728e52\") " pod="calico-apiserver/calico-apiserver-5cb8568b76-29tg6" Sep 9 05:36:21.951247 kubelet[2710]: I0909 05:36:21.950414 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/51653419-cbd4-45ba-b0da-614b9e41a0f4-calico-apiserver-certs\") pod \"calico-apiserver-5cb8568b76-bcp7l\" (UID: \"51653419-cbd4-45ba-b0da-614b9e41a0f4\") " pod="calico-apiserver/calico-apiserver-5cb8568b76-bcp7l" Sep 9 05:36:21.951247 kubelet[2710]: I0909 05:36:21.950501 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8652c\" (UniqueName: 
\"kubernetes.io/projected/2bf3b363-0425-4282-9c8a-2f3d556d118b-kube-api-access-8652c\") pod \"calico-kube-controllers-66974546f4-nfjvv\" (UID: \"2bf3b363-0425-4282-9c8a-2f3d556d118b\") " pod="calico-system/calico-kube-controllers-66974546f4-nfjvv" Sep 9 05:36:21.951612 kubelet[2710]: I0909 05:36:21.950582 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bf3b363-0425-4282-9c8a-2f3d556d118b-tigera-ca-bundle\") pod \"calico-kube-controllers-66974546f4-nfjvv\" (UID: \"2bf3b363-0425-4282-9c8a-2f3d556d118b\") " pod="calico-system/calico-kube-controllers-66974546f4-nfjvv" Sep 9 05:36:21.951612 kubelet[2710]: I0909 05:36:21.950681 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07ad84ef-593a-4953-8004-2160c7e58b20-whisker-backend-key-pair\") pod \"whisker-5d794b6796-l5trr\" (UID: \"07ad84ef-593a-4953-8004-2160c7e58b20\") " pod="calico-system/whisker-5d794b6796-l5trr" Sep 9 05:36:21.951612 kubelet[2710]: I0909 05:36:21.950735 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76q2s\" (UniqueName: \"kubernetes.io/projected/51653419-cbd4-45ba-b0da-614b9e41a0f4-kube-api-access-76q2s\") pod \"calico-apiserver-5cb8568b76-bcp7l\" (UID: \"51653419-cbd4-45ba-b0da-614b9e41a0f4\") " pod="calico-apiserver/calico-apiserver-5cb8568b76-bcp7l" Sep 9 05:36:21.951612 kubelet[2710]: I0909 05:36:21.950792 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ad84ef-593a-4953-8004-2160c7e58b20-whisker-ca-bundle\") pod \"whisker-5d794b6796-l5trr\" (UID: \"07ad84ef-593a-4953-8004-2160c7e58b20\") " pod="calico-system/whisker-5d794b6796-l5trr" Sep 9 05:36:21.952421 systemd[1]: Created slice 
kubepods-besteffort-pod51653419_cbd4_45ba_b0da_614b9e41a0f4.slice - libcontainer container kubepods-besteffort-pod51653419_cbd4_45ba_b0da_614b9e41a0f4.slice. Sep 9 05:36:22.176837 kubelet[2710]: E0909 05:36:22.176784 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:22.186308 containerd[1574]: time="2025-09-09T05:36:22.186206886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fmvjh,Uid:d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:22.187832 kubelet[2710]: E0909 05:36:22.187759 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:22.189149 containerd[1574]: time="2025-09-09T05:36:22.188704385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xfkng,Uid:5b161cf8-ae97-452d-ac9c-603e0edd6644,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:22.218915 containerd[1574]: time="2025-09-09T05:36:22.218433168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb8568b76-29tg6,Uid:14bedfcf-cf8c-4398-9812-3a0692728e52,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:36:22.238030 containerd[1574]: time="2025-09-09T05:36:22.236829361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-68l7r,Uid:52e91309-f573-4708-b86d-2973239db1d7,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:22.241071 containerd[1574]: time="2025-09-09T05:36:22.241014915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66974546f4-nfjvv,Uid:2bf3b363-0425-4282-9c8a-2f3d556d118b,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:22.249589 containerd[1574]: time="2025-09-09T05:36:22.249114825Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5d794b6796-l5trr,Uid:07ad84ef-593a-4953-8004-2160c7e58b20,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:22.312055 systemd[1]: Created slice kubepods-besteffort-poda830bdbb_cfd6_4f41_8465_5085b9d24e9d.slice - libcontainer container kubepods-besteffort-poda830bdbb_cfd6_4f41_8465_5085b9d24e9d.slice. Sep 9 05:36:22.356883 containerd[1574]: time="2025-09-09T05:36:22.356831369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c78xj,Uid:a830bdbb-cfd6-4f41-8465-5085b9d24e9d,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:22.357926 containerd[1574]: time="2025-09-09T05:36:22.357878582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb8568b76-bcp7l,Uid:51653419-cbd4-45ba-b0da-614b9e41a0f4,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:36:22.499731 containerd[1574]: time="2025-09-09T05:36:22.499067249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 05:36:22.650734 containerd[1574]: time="2025-09-09T05:36:22.650669257Z" level=error msg="Failed to destroy network for sandbox \"3c9d1de8330479e9e4c79a0ed4ac34e850bb3b0ef71fcbf524e181a501833eac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.653860 containerd[1574]: time="2025-09-09T05:36:22.653787355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xfkng,Uid:5b161cf8-ae97-452d-ac9c-603e0edd6644,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c9d1de8330479e9e4c79a0ed4ac34e850bb3b0ef71fcbf524e181a501833eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.654481 kubelet[2710]: E0909 05:36:22.654424 2710 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c9d1de8330479e9e4c79a0ed4ac34e850bb3b0ef71fcbf524e181a501833eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.657085 kubelet[2710]: E0909 05:36:22.654504 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c9d1de8330479e9e4c79a0ed4ac34e850bb3b0ef71fcbf524e181a501833eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xfkng" Sep 9 05:36:22.657085 kubelet[2710]: E0909 05:36:22.654545 2710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c9d1de8330479e9e4c79a0ed4ac34e850bb3b0ef71fcbf524e181a501833eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xfkng" Sep 9 05:36:22.657085 kubelet[2710]: E0909 05:36:22.654595 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-xfkng_kube-system(5b161cf8-ae97-452d-ac9c-603e0edd6644)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-xfkng_kube-system(5b161cf8-ae97-452d-ac9c-603e0edd6644)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c9d1de8330479e9e4c79a0ed4ac34e850bb3b0ef71fcbf524e181a501833eac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xfkng" podUID="5b161cf8-ae97-452d-ac9c-603e0edd6644" Sep 9 05:36:22.664835 containerd[1574]: time="2025-09-09T05:36:22.664780247Z" level=error msg="Failed to destroy network for sandbox \"24fa7d45d394f0f229aff21a854a7836050a982af1d6696bf6fd4c43273d3c85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.671324 containerd[1574]: time="2025-09-09T05:36:22.669175196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb8568b76-29tg6,Uid:14bedfcf-cf8c-4398-9812-3a0692728e52,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24fa7d45d394f0f229aff21a854a7836050a982af1d6696bf6fd4c43273d3c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.673990 kubelet[2710]: E0909 05:36:22.669971 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24fa7d45d394f0f229aff21a854a7836050a982af1d6696bf6fd4c43273d3c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.673990 kubelet[2710]: E0909 05:36:22.670031 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24fa7d45d394f0f229aff21a854a7836050a982af1d6696bf6fd4c43273d3c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cb8568b76-29tg6" Sep 9 05:36:22.673990 kubelet[2710]: E0909 05:36:22.670054 2710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24fa7d45d394f0f229aff21a854a7836050a982af1d6696bf6fd4c43273d3c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cb8568b76-29tg6" Sep 9 05:36:22.674236 kubelet[2710]: E0909 05:36:22.670100 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cb8568b76-29tg6_calico-apiserver(14bedfcf-cf8c-4398-9812-3a0692728e52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cb8568b76-29tg6_calico-apiserver(14bedfcf-cf8c-4398-9812-3a0692728e52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24fa7d45d394f0f229aff21a854a7836050a982af1d6696bf6fd4c43273d3c85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cb8568b76-29tg6" podUID="14bedfcf-cf8c-4398-9812-3a0692728e52" Sep 9 05:36:22.681615 containerd[1574]: time="2025-09-09T05:36:22.681553389Z" level=error msg="Failed to destroy network for sandbox \"098c94c4cf0001bb0c5d98d48b7fa9a78544a52dbdfa1c0fbf312c439dbf190a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.691060 containerd[1574]: time="2025-09-09T05:36:22.690947328Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-c78xj,Uid:a830bdbb-cfd6-4f41-8465-5085b9d24e9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"098c94c4cf0001bb0c5d98d48b7fa9a78544a52dbdfa1c0fbf312c439dbf190a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.691333 kubelet[2710]: E0909 05:36:22.691294 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"098c94c4cf0001bb0c5d98d48b7fa9a78544a52dbdfa1c0fbf312c439dbf190a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.691405 kubelet[2710]: E0909 05:36:22.691389 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"098c94c4cf0001bb0c5d98d48b7fa9a78544a52dbdfa1c0fbf312c439dbf190a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c78xj" Sep 9 05:36:22.691457 kubelet[2710]: E0909 05:36:22.691414 2710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"098c94c4cf0001bb0c5d98d48b7fa9a78544a52dbdfa1c0fbf312c439dbf190a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c78xj" Sep 9 05:36:22.691498 kubelet[2710]: E0909 05:36:22.691476 2710 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c78xj_calico-system(a830bdbb-cfd6-4f41-8465-5085b9d24e9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c78xj_calico-system(a830bdbb-cfd6-4f41-8465-5085b9d24e9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"098c94c4cf0001bb0c5d98d48b7fa9a78544a52dbdfa1c0fbf312c439dbf190a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c78xj" podUID="a830bdbb-cfd6-4f41-8465-5085b9d24e9d" Sep 9 05:36:22.713603 containerd[1574]: time="2025-09-09T05:36:22.713509519Z" level=error msg="Failed to destroy network for sandbox \"03ecd8468e7517bf319f44712c7ae5541227e950d2b7aaee600593af7a302e59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.715246 containerd[1574]: time="2025-09-09T05:36:22.715168809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fmvjh,Uid:d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ecd8468e7517bf319f44712c7ae5541227e950d2b7aaee600593af7a302e59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.716717 kubelet[2710]: E0909 05:36:22.716151 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ecd8468e7517bf319f44712c7ae5541227e950d2b7aaee600593af7a302e59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.716717 kubelet[2710]: E0909 05:36:22.716247 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ecd8468e7517bf319f44712c7ae5541227e950d2b7aaee600593af7a302e59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fmvjh" Sep 9 05:36:22.716717 kubelet[2710]: E0909 05:36:22.716285 2710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ecd8468e7517bf319f44712c7ae5541227e950d2b7aaee600593af7a302e59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fmvjh" Sep 9 05:36:22.716958 kubelet[2710]: E0909 05:36:22.716338 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-fmvjh_kube-system(d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-fmvjh_kube-system(d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03ecd8468e7517bf319f44712c7ae5541227e950d2b7aaee600593af7a302e59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fmvjh" podUID="d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f" Sep 9 05:36:22.724330 containerd[1574]: time="2025-09-09T05:36:22.724278289Z" level=error msg="Failed to destroy network for 
sandbox \"5f5c16167d6c4100c8b80d17ffca8162ab570e43e7773e3a19fadda2c631edcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.728496 containerd[1574]: time="2025-09-09T05:36:22.728398286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-68l7r,Uid:52e91309-f573-4708-b86d-2973239db1d7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f5c16167d6c4100c8b80d17ffca8162ab570e43e7773e3a19fadda2c631edcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.730753 containerd[1574]: time="2025-09-09T05:36:22.729585537Z" level=error msg="Failed to destroy network for sandbox \"55ba0ee9b2a3ad050410312bec25fb5ba61517df673890397513cfbe40e19382\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.730890 kubelet[2710]: E0909 05:36:22.730231 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f5c16167d6c4100c8b80d17ffca8162ab570e43e7773e3a19fadda2c631edcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.730890 kubelet[2710]: E0909 05:36:22.730380 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f5c16167d6c4100c8b80d17ffca8162ab570e43e7773e3a19fadda2c631edcc\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-68l7r" Sep 9 05:36:22.730890 kubelet[2710]: E0909 05:36:22.730403 2710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f5c16167d6c4100c8b80d17ffca8162ab570e43e7773e3a19fadda2c631edcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-68l7r" Sep 9 05:36:22.730987 kubelet[2710]: E0909 05:36:22.730472 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-68l7r_calico-system(52e91309-f573-4708-b86d-2973239db1d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-68l7r_calico-system(52e91309-f573-4708-b86d-2973239db1d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f5c16167d6c4100c8b80d17ffca8162ab570e43e7773e3a19fadda2c631edcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-68l7r" podUID="52e91309-f573-4708-b86d-2973239db1d7" Sep 9 05:36:22.731942 containerd[1574]: time="2025-09-09T05:36:22.731051002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66974546f4-nfjvv,Uid:2bf3b363-0425-4282-9c8a-2f3d556d118b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ba0ee9b2a3ad050410312bec25fb5ba61517df673890397513cfbe40e19382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.732460 kubelet[2710]: E0909 05:36:22.731837 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ba0ee9b2a3ad050410312bec25fb5ba61517df673890397513cfbe40e19382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.732460 kubelet[2710]: E0909 05:36:22.731887 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ba0ee9b2a3ad050410312bec25fb5ba61517df673890397513cfbe40e19382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66974546f4-nfjvv" Sep 9 05:36:22.732460 kubelet[2710]: E0909 05:36:22.731926 2710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ba0ee9b2a3ad050410312bec25fb5ba61517df673890397513cfbe40e19382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66974546f4-nfjvv" Sep 9 05:36:22.733881 kubelet[2710]: E0909 05:36:22.731983 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66974546f4-nfjvv_calico-system(2bf3b363-0425-4282-9c8a-2f3d556d118b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66974546f4-nfjvv_calico-system(2bf3b363-0425-4282-9c8a-2f3d556d118b)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"55ba0ee9b2a3ad050410312bec25fb5ba61517df673890397513cfbe40e19382\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66974546f4-nfjvv" podUID="2bf3b363-0425-4282-9c8a-2f3d556d118b" Sep 9 05:36:22.735682 containerd[1574]: time="2025-09-09T05:36:22.735583398Z" level=error msg="Failed to destroy network for sandbox \"c2d857f2133cd40f5e780d6423de4b21ad0ff0cf7a63f243b783c0a1e30fdd4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.738069 containerd[1574]: time="2025-09-09T05:36:22.736634382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d794b6796-l5trr,Uid:07ad84ef-593a-4953-8004-2160c7e58b20,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d857f2133cd40f5e780d6423de4b21ad0ff0cf7a63f243b783c0a1e30fdd4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.738918 kubelet[2710]: E0909 05:36:22.738866 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d857f2133cd40f5e780d6423de4b21ad0ff0cf7a63f243b783c0a1e30fdd4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.739768 kubelet[2710]: E0909 05:36:22.738943 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c2d857f2133cd40f5e780d6423de4b21ad0ff0cf7a63f243b783c0a1e30fdd4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d794b6796-l5trr" Sep 9 05:36:22.739768 kubelet[2710]: E0909 05:36:22.738993 2710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d857f2133cd40f5e780d6423de4b21ad0ff0cf7a63f243b783c0a1e30fdd4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d794b6796-l5trr" Sep 9 05:36:22.739768 kubelet[2710]: E0909 05:36:22.739043 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d794b6796-l5trr_calico-system(07ad84ef-593a-4953-8004-2160c7e58b20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d794b6796-l5trr_calico-system(07ad84ef-593a-4953-8004-2160c7e58b20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2d857f2133cd40f5e780d6423de4b21ad0ff0cf7a63f243b783c0a1e30fdd4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d794b6796-l5trr" podUID="07ad84ef-593a-4953-8004-2160c7e58b20" Sep 9 05:36:22.754678 containerd[1574]: time="2025-09-09T05:36:22.753343254Z" level=error msg="Failed to destroy network for sandbox \"ef37b0c916acd60adc7bf8b35fdc0f8e366a0ccc67d9fe6c273b4a0364f04403\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.756370 
containerd[1574]: time="2025-09-09T05:36:22.756309970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb8568b76-bcp7l,Uid:51653419-cbd4-45ba-b0da-614b9e41a0f4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef37b0c916acd60adc7bf8b35fdc0f8e366a0ccc67d9fe6c273b4a0364f04403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.756798 kubelet[2710]: E0909 05:36:22.756732 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef37b0c916acd60adc7bf8b35fdc0f8e366a0ccc67d9fe6c273b4a0364f04403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:22.756912 kubelet[2710]: E0909 05:36:22.756813 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef37b0c916acd60adc7bf8b35fdc0f8e366a0ccc67d9fe6c273b4a0364f04403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cb8568b76-bcp7l" Sep 9 05:36:22.756912 kubelet[2710]: E0909 05:36:22.756842 2710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef37b0c916acd60adc7bf8b35fdc0f8e366a0ccc67d9fe6c273b4a0364f04403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5cb8568b76-bcp7l" Sep 9 05:36:22.757207 kubelet[2710]: E0909 05:36:22.756908 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cb8568b76-bcp7l_calico-apiserver(51653419-cbd4-45ba-b0da-614b9e41a0f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cb8568b76-bcp7l_calico-apiserver(51653419-cbd4-45ba-b0da-614b9e41a0f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef37b0c916acd60adc7bf8b35fdc0f8e366a0ccc67d9fe6c273b4a0364f04403\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cb8568b76-bcp7l" podUID="51653419-cbd4-45ba-b0da-614b9e41a0f4" Sep 9 05:36:29.125186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734728674.mount: Deactivated successfully. 
Sep 9 05:36:29.179899 containerd[1574]: time="2025-09-09T05:36:29.179709830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:29.181434 containerd[1574]: time="2025-09-09T05:36:29.181382173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 05:36:29.181745 containerd[1574]: time="2025-09-09T05:36:29.181601970Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:29.194997 containerd[1574]: time="2025-09-09T05:36:29.194211155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:29.194997 containerd[1574]: time="2025-09-09T05:36:29.194834641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.695724925s" Sep 9 05:36:29.194997 containerd[1574]: time="2025-09-09T05:36:29.194871899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 05:36:29.234595 containerd[1574]: time="2025-09-09T05:36:29.234504053Z" level=info msg="CreateContainer within sandbox \"efb57ebe6bc0b235d2555711a315651fe3c3afd67f351fe7edae439c550534d7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 05:36:29.245125 containerd[1574]: time="2025-09-09T05:36:29.245064355Z" level=info msg="Container 
59508c86b637ab39405e6f43aa931e814c7549a724ceda478573a08c509af2d3: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:29.278362 containerd[1574]: time="2025-09-09T05:36:29.278318886Z" level=info msg="CreateContainer within sandbox \"efb57ebe6bc0b235d2555711a315651fe3c3afd67f351fe7edae439c550534d7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"59508c86b637ab39405e6f43aa931e814c7549a724ceda478573a08c509af2d3\"" Sep 9 05:36:29.279845 containerd[1574]: time="2025-09-09T05:36:29.279788387Z" level=info msg="StartContainer for \"59508c86b637ab39405e6f43aa931e814c7549a724ceda478573a08c509af2d3\"" Sep 9 05:36:29.285219 containerd[1574]: time="2025-09-09T05:36:29.285159324Z" level=info msg="connecting to shim 59508c86b637ab39405e6f43aa931e814c7549a724ceda478573a08c509af2d3" address="unix:///run/containerd/s/a04c6f6a0e7466d224a655e436255807052ef0c7e54e30beb71af2aad72d67b5" protocol=ttrpc version=3 Sep 9 05:36:29.402992 systemd[1]: Started cri-containerd-59508c86b637ab39405e6f43aa931e814c7549a724ceda478573a08c509af2d3.scope - libcontainer container 59508c86b637ab39405e6f43aa931e814c7549a724ceda478573a08c509af2d3. 
Sep 9 05:36:29.485714 containerd[1574]: time="2025-09-09T05:36:29.485675119Z" level=info msg="StartContainer for \"59508c86b637ab39405e6f43aa931e814c7549a724ceda478573a08c509af2d3\" returns successfully" Sep 9 05:36:29.569857 kubelet[2710]: I0909 05:36:29.569417 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qm6gs" podStartSLOduration=1.609698677 podStartE2EDuration="17.56938875s" podCreationTimestamp="2025-09-09 05:36:12 +0000 UTC" firstStartedPulling="2025-09-09 05:36:13.236567761 +0000 UTC m=+23.091401753" lastFinishedPulling="2025-09-09 05:36:29.196257835 +0000 UTC m=+39.051091826" observedRunningTime="2025-09-09 05:36:29.568150403 +0000 UTC m=+39.422984432" watchObservedRunningTime="2025-09-09 05:36:29.56938875 +0000 UTC m=+39.424222770" Sep 9 05:36:29.614834 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 05:36:29.615179 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 9 05:36:29.911866 kubelet[2710]: I0909 05:36:29.911248 2710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbfj2\" (UniqueName: \"kubernetes.io/projected/07ad84ef-593a-4953-8004-2160c7e58b20-kube-api-access-pbfj2\") pod \"07ad84ef-593a-4953-8004-2160c7e58b20\" (UID: \"07ad84ef-593a-4953-8004-2160c7e58b20\") " Sep 9 05:36:29.912386 kubelet[2710]: I0909 05:36:29.912320 2710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ad84ef-593a-4953-8004-2160c7e58b20-whisker-ca-bundle\") pod \"07ad84ef-593a-4953-8004-2160c7e58b20\" (UID: \"07ad84ef-593a-4953-8004-2160c7e58b20\") " Sep 9 05:36:29.913085 kubelet[2710]: I0909 05:36:29.912972 2710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07ad84ef-593a-4953-8004-2160c7e58b20-whisker-backend-key-pair\") pod \"07ad84ef-593a-4953-8004-2160c7e58b20\" (UID: \"07ad84ef-593a-4953-8004-2160c7e58b20\") " Sep 9 05:36:29.916096 kubelet[2710]: I0909 05:36:29.912911 2710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ad84ef-593a-4953-8004-2160c7e58b20-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "07ad84ef-593a-4953-8004-2160c7e58b20" (UID: "07ad84ef-593a-4953-8004-2160c7e58b20"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 05:36:29.918377 kubelet[2710]: I0909 05:36:29.918265 2710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ad84ef-593a-4953-8004-2160c7e58b20-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "07ad84ef-593a-4953-8004-2160c7e58b20" (UID: "07ad84ef-593a-4953-8004-2160c7e58b20"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 05:36:29.920795 kubelet[2710]: I0909 05:36:29.920744 2710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ad84ef-593a-4953-8004-2160c7e58b20-kube-api-access-pbfj2" (OuterVolumeSpecName: "kube-api-access-pbfj2") pod "07ad84ef-593a-4953-8004-2160c7e58b20" (UID: "07ad84ef-593a-4953-8004-2160c7e58b20"). InnerVolumeSpecName "kube-api-access-pbfj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 05:36:30.017576 kubelet[2710]: I0909 05:36:30.017209 2710 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07ad84ef-593a-4953-8004-2160c7e58b20-whisker-backend-key-pair\") on node \"ci-4452.0.0-n-58b1c71666\" DevicePath \"\"" Sep 9 05:36:30.018139 kubelet[2710]: I0909 05:36:30.017929 2710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbfj2\" (UniqueName: \"kubernetes.io/projected/07ad84ef-593a-4953-8004-2160c7e58b20-kube-api-access-pbfj2\") on node \"ci-4452.0.0-n-58b1c71666\" DevicePath \"\"" Sep 9 05:36:30.018139 kubelet[2710]: I0909 05:36:30.017962 2710 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ad84ef-593a-4953-8004-2160c7e58b20-whisker-ca-bundle\") on node \"ci-4452.0.0-n-58b1c71666\" DevicePath \"\"" Sep 9 05:36:30.127969 systemd[1]: var-lib-kubelet-pods-07ad84ef\x2d593a\x2d4953\x2d8004\x2d2160c7e58b20-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpbfj2.mount: Deactivated successfully. Sep 9 05:36:30.128153 systemd[1]: var-lib-kubelet-pods-07ad84ef\x2d593a\x2d4953\x2d8004\x2d2160c7e58b20-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 9 05:36:30.308748 systemd[1]: Removed slice kubepods-besteffort-pod07ad84ef_593a_4953_8004_2160c7e58b20.slice - libcontainer container kubepods-besteffort-pod07ad84ef_593a_4953_8004_2160c7e58b20.slice. Sep 9 05:36:30.528456 kubelet[2710]: I0909 05:36:30.528407 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:36:30.649299 systemd[1]: Created slice kubepods-besteffort-pod3f8deda9_77b9_46ec_b649_b8fb0c5c18f7.slice - libcontainer container kubepods-besteffort-pod3f8deda9_77b9_46ec_b649_b8fb0c5c18f7.slice. Sep 9 05:36:30.722671 kubelet[2710]: I0909 05:36:30.722512 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f8deda9-77b9-46ec-b649-b8fb0c5c18f7-whisker-ca-bundle\") pod \"whisker-745564d87f-mpb96\" (UID: \"3f8deda9-77b9-46ec-b649-b8fb0c5c18f7\") " pod="calico-system/whisker-745564d87f-mpb96" Sep 9 05:36:30.722671 kubelet[2710]: I0909 05:36:30.722680 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f8deda9-77b9-46ec-b649-b8fb0c5c18f7-whisker-backend-key-pair\") pod \"whisker-745564d87f-mpb96\" (UID: \"3f8deda9-77b9-46ec-b649-b8fb0c5c18f7\") " pod="calico-system/whisker-745564d87f-mpb96" Sep 9 05:36:30.723293 kubelet[2710]: I0909 05:36:30.722742 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjgg\" (UniqueName: \"kubernetes.io/projected/3f8deda9-77b9-46ec-b649-b8fb0c5c18f7-kube-api-access-hjjgg\") pod \"whisker-745564d87f-mpb96\" (UID: \"3f8deda9-77b9-46ec-b649-b8fb0c5c18f7\") " pod="calico-system/whisker-745564d87f-mpb96" Sep 9 05:36:30.958295 containerd[1574]: time="2025-09-09T05:36:30.957554255Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-745564d87f-mpb96,Uid:3f8deda9-77b9-46ec-b649-b8fb0c5c18f7,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:31.302027 systemd-networkd[1466]: calif868c11ba16: Link UP Sep 9 05:36:31.307992 systemd-networkd[1466]: calif868c11ba16: Gained carrier Sep 9 05:36:31.354737 containerd[1574]: 2025-09-09 05:36:30.991 [INFO][3775] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 05:36:31.354737 containerd[1574]: 2025-09-09 05:36:31.018 [INFO][3775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0 whisker-745564d87f- calico-system 3f8deda9-77b9-46ec-b649-b8fb0c5c18f7 947 0 2025-09-09 05:36:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:745564d87f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4452.0.0-n-58b1c71666 whisker-745564d87f-mpb96 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif868c11ba16 [] [] }} ContainerID="37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" Namespace="calico-system" Pod="whisker-745564d87f-mpb96" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-" Sep 9 05:36:31.354737 containerd[1574]: 2025-09-09 05:36:31.018 [INFO][3775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" Namespace="calico-system" Pod="whisker-745564d87f-mpb96" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0" Sep 9 05:36:31.354737 containerd[1574]: 2025-09-09 05:36:31.195 [INFO][3787] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" 
HandleID="k8s-pod-network.37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" Workload="ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0" Sep 9 05:36:31.355851 containerd[1574]: 2025-09-09 05:36:31.196 [INFO][3787] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" HandleID="k8s-pod-network.37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" Workload="ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001026e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-58b1c71666", "pod":"whisker-745564d87f-mpb96", "timestamp":"2025-09-09 05:36:31.195224237 +0000 UTC"}, Hostname:"ci-4452.0.0-n-58b1c71666", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:31.355851 containerd[1574]: 2025-09-09 05:36:31.197 [INFO][3787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:31.355851 containerd[1574]: 2025-09-09 05:36:31.197 [INFO][3787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:36:31.355851 containerd[1574]: 2025-09-09 05:36:31.197 [INFO][3787] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-58b1c71666' Sep 9 05:36:31.355851 containerd[1574]: 2025-09-09 05:36:31.226 [INFO][3787] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:31.355851 containerd[1574]: 2025-09-09 05:36:31.243 [INFO][3787] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:31.355851 containerd[1574]: 2025-09-09 05:36:31.251 [INFO][3787] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:31.355851 containerd[1574]: 2025-09-09 05:36:31.254 [INFO][3787] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:31.355851 containerd[1574]: 2025-09-09 05:36:31.258 [INFO][3787] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:31.356744 containerd[1574]: 2025-09-09 05:36:31.258 [INFO][3787] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:31.356744 containerd[1574]: 2025-09-09 05:36:31.260 [INFO][3787] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9 Sep 9 05:36:31.356744 containerd[1574]: 2025-09-09 05:36:31.266 [INFO][3787] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:31.356744 containerd[1574]: 2025-09-09 05:36:31.277 [INFO][3787] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.94.65/26] block=192.168.94.64/26 handle="k8s-pod-network.37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:31.356744 containerd[1574]: 2025-09-09 05:36:31.277 [INFO][3787] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.65/26] handle="k8s-pod-network.37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:31.356744 containerd[1574]: 2025-09-09 05:36:31.278 [INFO][3787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:31.356744 containerd[1574]: 2025-09-09 05:36:31.278 [INFO][3787] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.65/26] IPv6=[] ContainerID="37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" HandleID="k8s-pod-network.37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" Workload="ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0" Sep 9 05:36:31.358019 containerd[1574]: 2025-09-09 05:36:31.284 [INFO][3775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" Namespace="calico-system" Pod="whisker-745564d87f-mpb96" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0", GenerateName:"whisker-745564d87f-", Namespace:"calico-system", SelfLink:"", UID:"3f8deda9-77b9-46ec-b649-b8fb0c5c18f7", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"745564d87f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"", Pod:"whisker-745564d87f-mpb96", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif868c11ba16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:31.358019 containerd[1574]: 2025-09-09 05:36:31.284 [INFO][3775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.65/32] ContainerID="37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" Namespace="calico-system" Pod="whisker-745564d87f-mpb96" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0" Sep 9 05:36:31.358160 containerd[1574]: 2025-09-09 05:36:31.284 [INFO][3775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif868c11ba16 ContainerID="37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" Namespace="calico-system" Pod="whisker-745564d87f-mpb96" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0" Sep 9 05:36:31.358160 containerd[1574]: 2025-09-09 05:36:31.316 [INFO][3775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" Namespace="calico-system" Pod="whisker-745564d87f-mpb96" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0" Sep 9 05:36:31.359027 containerd[1574]: 2025-09-09 05:36:31.318 [INFO][3775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" Namespace="calico-system" Pod="whisker-745564d87f-mpb96" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0", GenerateName:"whisker-745564d87f-", Namespace:"calico-system", SelfLink:"", UID:"3f8deda9-77b9-46ec-b649-b8fb0c5c18f7", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"745564d87f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9", Pod:"whisker-745564d87f-mpb96", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif868c11ba16", MAC:"82:34:23:93:01:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:31.359112 containerd[1574]: 2025-09-09 05:36:31.347 [INFO][3775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" Namespace="calico-system" 
Pod="whisker-745564d87f-mpb96" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-whisker--745564d87f--mpb96-eth0" Sep 9 05:36:31.472615 containerd[1574]: time="2025-09-09T05:36:31.469823945Z" level=info msg="connecting to shim 37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9" address="unix:///run/containerd/s/c3984719621c867457ba43d7bb4f632ed1e2296e008ad09331777b3bbb392923" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:31.597878 systemd[1]: Started cri-containerd-37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9.scope - libcontainer container 37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9. Sep 9 05:36:31.763553 containerd[1574]: time="2025-09-09T05:36:31.763264229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745564d87f-mpb96,Uid:3f8deda9-77b9-46ec-b649-b8fb0c5c18f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9\"" Sep 9 05:36:31.771160 containerd[1574]: time="2025-09-09T05:36:31.771106520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 05:36:32.304931 kubelet[2710]: I0909 05:36:32.304812 2710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ad84ef-593a-4953-8004-2160c7e58b20" path="/var/lib/kubelet/pods/07ad84ef-593a-4953-8004-2160c7e58b20/volumes" Sep 9 05:36:32.402672 systemd-networkd[1466]: vxlan.calico: Link UP Sep 9 05:36:32.403064 systemd-networkd[1466]: vxlan.calico: Gained carrier Sep 9 05:36:33.063732 systemd-networkd[1466]: calif868c11ba16: Gained IPv6LL Sep 9 05:36:33.199212 kubelet[2710]: I0909 05:36:33.199143 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:36:33.356548 containerd[1574]: time="2025-09-09T05:36:33.356374655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:33.358648 containerd[1574]: 
time="2025-09-09T05:36:33.358499522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 05:36:33.359250 containerd[1574]: time="2025-09-09T05:36:33.359218101Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:33.362470 containerd[1574]: time="2025-09-09T05:36:33.362090766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:33.363824 containerd[1574]: time="2025-09-09T05:36:33.363763244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.592598462s" Sep 9 05:36:33.363824 containerd[1574]: time="2025-09-09T05:36:33.363818266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 05:36:33.368275 containerd[1574]: time="2025-09-09T05:36:33.368222391Z" level=info msg="CreateContainer within sandbox \"37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 05:36:33.398343 containerd[1574]: time="2025-09-09T05:36:33.397393544Z" level=info msg="Container fb21e20ba6f3b3bf2c257d4f927b1bdbec5f40583aded1b63e423c5bc867b224: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:33.413971 containerd[1574]: time="2025-09-09T05:36:33.413912185Z" level=info msg="CreateContainer within sandbox 
\"37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fb21e20ba6f3b3bf2c257d4f927b1bdbec5f40583aded1b63e423c5bc867b224\"" Sep 9 05:36:33.415824 containerd[1574]: time="2025-09-09T05:36:33.415775965Z" level=info msg="StartContainer for \"fb21e20ba6f3b3bf2c257d4f927b1bdbec5f40583aded1b63e423c5bc867b224\"" Sep 9 05:36:33.418853 containerd[1574]: time="2025-09-09T05:36:33.418803009Z" level=info msg="connecting to shim fb21e20ba6f3b3bf2c257d4f927b1bdbec5f40583aded1b63e423c5bc867b224" address="unix:///run/containerd/s/c3984719621c867457ba43d7bb4f632ed1e2296e008ad09331777b3bbb392923" protocol=ttrpc version=3 Sep 9 05:36:33.432363 containerd[1574]: time="2025-09-09T05:36:33.432280542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59508c86b637ab39405e6f43aa931e814c7549a724ceda478573a08c509af2d3\" id:\"7a5e931187abd2919a268fac4f149698b2c59738752af73b01da2130daa052b1\" pid:4061 exited_at:{seconds:1757396193 nanos:431931794}" Sep 9 05:36:33.458809 systemd[1]: Started cri-containerd-fb21e20ba6f3b3bf2c257d4f927b1bdbec5f40583aded1b63e423c5bc867b224.scope - libcontainer container fb21e20ba6f3b3bf2c257d4f927b1bdbec5f40583aded1b63e423c5bc867b224. 
Sep 9 05:36:33.568335 containerd[1574]: time="2025-09-09T05:36:33.568207815Z" level=info msg="StartContainer for \"fb21e20ba6f3b3bf2c257d4f927b1bdbec5f40583aded1b63e423c5bc867b224\" returns successfully" Sep 9 05:36:33.572343 containerd[1574]: time="2025-09-09T05:36:33.572271558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 05:36:33.604185 containerd[1574]: time="2025-09-09T05:36:33.604119932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59508c86b637ab39405e6f43aa931e814c7549a724ceda478573a08c509af2d3\" id:\"7ec863389eece70cfb2a8539a3aa6f21675f94018912ccfc7163a540387f3e6e\" pid:4105 exited_at:{seconds:1757396193 nanos:602983667}" Sep 9 05:36:33.767845 systemd-networkd[1466]: vxlan.calico: Gained IPv6LL Sep 9 05:36:34.299849 kubelet[2710]: E0909 05:36:34.299813 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:34.301846 containerd[1574]: time="2025-09-09T05:36:34.301053329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb8568b76-bcp7l,Uid:51653419-cbd4-45ba-b0da-614b9e41a0f4,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:36:34.301846 containerd[1574]: time="2025-09-09T05:36:34.301689151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fmvjh,Uid:d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:34.509728 systemd-networkd[1466]: cali52de7c3ef4c: Link UP Sep 9 05:36:34.513220 systemd-networkd[1466]: cali52de7c3ef4c: Gained carrier Sep 9 05:36:34.538943 containerd[1574]: 2025-09-09 05:36:34.383 [INFO][4130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0 calico-apiserver-5cb8568b76- calico-apiserver 
51653419-cbd4-45ba-b0da-614b9e41a0f4 880 0 2025-09-09 05:36:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cb8568b76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452.0.0-n-58b1c71666 calico-apiserver-5cb8568b76-bcp7l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali52de7c3ef4c [] [] }} ContainerID="b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-bcp7l" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-" Sep 9 05:36:34.538943 containerd[1574]: 2025-09-09 05:36:34.387 [INFO][4130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-bcp7l" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0" Sep 9 05:36:34.538943 containerd[1574]: 2025-09-09 05:36:34.435 [INFO][4153] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" HandleID="k8s-pod-network.b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" Workload="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0" Sep 9 05:36:34.540603 containerd[1574]: 2025-09-09 05:36:34.435 [INFO][4153] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" HandleID="k8s-pod-network.b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" Workload="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f190), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452.0.0-n-58b1c71666", "pod":"calico-apiserver-5cb8568b76-bcp7l", "timestamp":"2025-09-09 05:36:34.435579682 +0000 UTC"}, Hostname:"ci-4452.0.0-n-58b1c71666", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:34.540603 containerd[1574]: 2025-09-09 05:36:34.436 [INFO][4153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:34.540603 containerd[1574]: 2025-09-09 05:36:34.436 [INFO][4153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:36:34.540603 containerd[1574]: 2025-09-09 05:36:34.436 [INFO][4153] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-58b1c71666' Sep 9 05:36:34.540603 containerd[1574]: 2025-09-09 05:36:34.448 [INFO][4153] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.540603 containerd[1574]: 2025-09-09 05:36:34.458 [INFO][4153] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.540603 containerd[1574]: 2025-09-09 05:36:34.465 [INFO][4153] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.540603 containerd[1574]: 2025-09-09 05:36:34.469 [INFO][4153] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.540603 containerd[1574]: 2025-09-09 05:36:34.473 [INFO][4153] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.541170 containerd[1574]: 2025-09-09 05:36:34.473 [INFO][4153] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 
handle="k8s-pod-network.b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.541170 containerd[1574]: 2025-09-09 05:36:34.476 [INFO][4153] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8 Sep 9 05:36:34.541170 containerd[1574]: 2025-09-09 05:36:34.483 [INFO][4153] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.541170 containerd[1574]: 2025-09-09 05:36:34.495 [INFO][4153] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.66/26] block=192.168.94.64/26 handle="k8s-pod-network.b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.541170 containerd[1574]: 2025-09-09 05:36:34.496 [INFO][4153] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.66/26] handle="k8s-pod-network.b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.541170 containerd[1574]: 2025-09-09 05:36:34.496 [INFO][4153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:36:34.541170 containerd[1574]: 2025-09-09 05:36:34.496 [INFO][4153] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.66/26] IPv6=[] ContainerID="b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" HandleID="k8s-pod-network.b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" Workload="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0" Sep 9 05:36:34.541886 containerd[1574]: 2025-09-09 05:36:34.502 [INFO][4130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-bcp7l" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0", GenerateName:"calico-apiserver-5cb8568b76-", Namespace:"calico-apiserver", SelfLink:"", UID:"51653419-cbd4-45ba-b0da-614b9e41a0f4", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cb8568b76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"", Pod:"calico-apiserver-5cb8568b76-bcp7l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52de7c3ef4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:34.542214 containerd[1574]: 2025-09-09 05:36:34.502 [INFO][4130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.66/32] ContainerID="b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-bcp7l" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0" Sep 9 05:36:34.542214 containerd[1574]: 2025-09-09 05:36:34.502 [INFO][4130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52de7c3ef4c ContainerID="b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-bcp7l" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0" Sep 9 05:36:34.542214 containerd[1574]: 2025-09-09 05:36:34.512 [INFO][4130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-bcp7l" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0" Sep 9 05:36:34.542387 containerd[1574]: 2025-09-09 05:36:34.513 [INFO][4130] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-bcp7l" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0", GenerateName:"calico-apiserver-5cb8568b76-", Namespace:"calico-apiserver", SelfLink:"", UID:"51653419-cbd4-45ba-b0da-614b9e41a0f4", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cb8568b76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8", Pod:"calico-apiserver-5cb8568b76-bcp7l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52de7c3ef4c", MAC:"c6:b8:39:21:b4:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:34.542697 containerd[1574]: 2025-09-09 05:36:34.533 [INFO][4130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-bcp7l" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--bcp7l-eth0" Sep 9 05:36:34.596647 containerd[1574]: time="2025-09-09T05:36:34.596051087Z" level=info 
msg="connecting to shim b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8" address="unix:///run/containerd/s/52e8b468fbbeefc8bae7bbf81ed3a7d724e2c70a56b5eada8650e98f1bddeb73" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:34.658163 systemd-networkd[1466]: calife4fc18f28f: Link UP Sep 9 05:36:34.662722 systemd-networkd[1466]: calife4fc18f28f: Gained carrier Sep 9 05:36:34.674279 systemd[1]: Started cri-containerd-b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8.scope - libcontainer container b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8. Sep 9 05:36:34.700269 containerd[1574]: 2025-09-09 05:36:34.390 [INFO][4140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0 coredns-7c65d6cfc9- kube-system d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f 872 0 2025-09-09 05:35:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4452.0.0-n-58b1c71666 coredns-7c65d6cfc9-fmvjh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calife4fc18f28f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fmvjh" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-" Sep 9 05:36:34.700269 containerd[1574]: 2025-09-09 05:36:34.390 [INFO][4140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fmvjh" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0" Sep 9 05:36:34.700269 containerd[1574]: 2025-09-09 05:36:34.445 [INFO][4158] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" HandleID="k8s-pod-network.44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" Workload="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0" Sep 9 05:36:34.701125 containerd[1574]: 2025-09-09 05:36:34.446 [INFO][4158] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" HandleID="k8s-pod-network.44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" Workload="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5920), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452.0.0-n-58b1c71666", "pod":"coredns-7c65d6cfc9-fmvjh", "timestamp":"2025-09-09 05:36:34.445477513 +0000 UTC"}, Hostname:"ci-4452.0.0-n-58b1c71666", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:34.701125 containerd[1574]: 2025-09-09 05:36:34.447 [INFO][4158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:34.701125 containerd[1574]: 2025-09-09 05:36:34.496 [INFO][4158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:36:34.701125 containerd[1574]: 2025-09-09 05:36:34.497 [INFO][4158] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-58b1c71666' Sep 9 05:36:34.701125 containerd[1574]: 2025-09-09 05:36:34.549 [INFO][4158] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.701125 containerd[1574]: 2025-09-09 05:36:34.573 [INFO][4158] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.701125 containerd[1574]: 2025-09-09 05:36:34.586 [INFO][4158] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.701125 containerd[1574]: 2025-09-09 05:36:34.593 [INFO][4158] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.701125 containerd[1574]: 2025-09-09 05:36:34.600 [INFO][4158] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.701938 containerd[1574]: 2025-09-09 05:36:34.600 [INFO][4158] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.701938 containerd[1574]: 2025-09-09 05:36:34.605 [INFO][4158] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600 Sep 9 05:36:34.701938 containerd[1574]: 2025-09-09 05:36:34.617 [INFO][4158] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.701938 containerd[1574]: 2025-09-09 05:36:34.640 [INFO][4158] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.94.67/26] block=192.168.94.64/26 handle="k8s-pod-network.44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.701938 containerd[1574]: 2025-09-09 05:36:34.640 [INFO][4158] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.67/26] handle="k8s-pod-network.44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:34.701938 containerd[1574]: 2025-09-09 05:36:34.640 [INFO][4158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:34.701938 containerd[1574]: 2025-09-09 05:36:34.640 [INFO][4158] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.67/26] IPv6=[] ContainerID="44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" HandleID="k8s-pod-network.44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" Workload="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0" Sep 9 05:36:34.704081 containerd[1574]: 2025-09-09 05:36:34.649 [INFO][4140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fmvjh" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"", Pod:"coredns-7c65d6cfc9-fmvjh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife4fc18f28f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:34.704081 containerd[1574]: 2025-09-09 05:36:34.650 [INFO][4140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.67/32] ContainerID="44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fmvjh" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0" Sep 9 05:36:34.704081 containerd[1574]: 2025-09-09 05:36:34.650 [INFO][4140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife4fc18f28f ContainerID="44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fmvjh" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0" Sep 9 05:36:34.704081 containerd[1574]: 2025-09-09 05:36:34.667 [INFO][4140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fmvjh" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0" Sep 9 05:36:34.704081 containerd[1574]: 2025-09-09 05:36:34.671 [INFO][4140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fmvjh" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600", Pod:"coredns-7c65d6cfc9-fmvjh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife4fc18f28f", MAC:"f6:4d:6c:81:d3:91", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:34.704081 containerd[1574]: 2025-09-09 05:36:34.696 [INFO][4140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fmvjh" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--fmvjh-eth0" Sep 9 05:36:34.820178 containerd[1574]: time="2025-09-09T05:36:34.819584688Z" level=info msg="connecting to shim 44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600" address="unix:///run/containerd/s/61d43a74a5dd6bb67ce3b9a88f9b5110d4843251e3f08e050998f065ce50fa2f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:34.827402 containerd[1574]: time="2025-09-09T05:36:34.827287781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb8568b76-bcp7l,Uid:51653419-cbd4-45ba-b0da-614b9e41a0f4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8\"" Sep 9 05:36:34.891638 systemd[1]: Started cri-containerd-44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600.scope - libcontainer container 44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600. 
Sep 9 05:36:35.010766 containerd[1574]: time="2025-09-09T05:36:35.010594365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fmvjh,Uid:d1a049d3-cfe6-4f6b-9ab6-0ed390e82b0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600\"" Sep 9 05:36:35.013108 kubelet[2710]: E0909 05:36:35.013067 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:35.020488 containerd[1574]: time="2025-09-09T05:36:35.020415410Z" level=info msg="CreateContainer within sandbox \"44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:36:35.063508 containerd[1574]: time="2025-09-09T05:36:35.062874595Z" level=info msg="Container d999d63d16aef27f6109a9cb044f51e5171d76b5cfa339f0127d324de1e31b9c: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:35.110823 containerd[1574]: time="2025-09-09T05:36:35.110754451Z" level=info msg="CreateContainer within sandbox \"44a212182f66d6bafc022b7d1d8dafcc60b9f36d382f2297ec08a502dd415600\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d999d63d16aef27f6109a9cb044f51e5171d76b5cfa339f0127d324de1e31b9c\"" Sep 9 05:36:35.113496 containerd[1574]: time="2025-09-09T05:36:35.112731385Z" level=info msg="StartContainer for \"d999d63d16aef27f6109a9cb044f51e5171d76b5cfa339f0127d324de1e31b9c\"" Sep 9 05:36:35.116135 containerd[1574]: time="2025-09-09T05:36:35.116089917Z" level=info msg="connecting to shim d999d63d16aef27f6109a9cb044f51e5171d76b5cfa339f0127d324de1e31b9c" address="unix:///run/containerd/s/61d43a74a5dd6bb67ce3b9a88f9b5110d4843251e3f08e050998f065ce50fa2f" protocol=ttrpc version=3 Sep 9 05:36:35.150815 systemd[1]: Started cri-containerd-d999d63d16aef27f6109a9cb044f51e5171d76b5cfa339f0127d324de1e31b9c.scope - 
libcontainer container d999d63d16aef27f6109a9cb044f51e5171d76b5cfa339f0127d324de1e31b9c. Sep 9 05:36:35.248098 containerd[1574]: time="2025-09-09T05:36:35.247790698Z" level=info msg="StartContainer for \"d999d63d16aef27f6109a9cb044f51e5171d76b5cfa339f0127d324de1e31b9c\" returns successfully" Sep 9 05:36:35.298355 containerd[1574]: time="2025-09-09T05:36:35.298300769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66974546f4-nfjvv,Uid:2bf3b363-0425-4282-9c8a-2f3d556d118b,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:35.300331 containerd[1574]: time="2025-09-09T05:36:35.300281975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c78xj,Uid:a830bdbb-cfd6-4f41-8465-5085b9d24e9d,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:35.301265 containerd[1574]: time="2025-09-09T05:36:35.301146076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-68l7r,Uid:52e91309-f573-4708-b86d-2973239db1d7,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:35.576408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677915126.mount: Deactivated successfully. 
Sep 9 05:36:35.598367 kubelet[2710]: E0909 05:36:35.597339 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:35.662671 kubelet[2710]: I0909 05:36:35.662127 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fmvjh" podStartSLOduration=39.661910227999996 podStartE2EDuration="39.661910228s" podCreationTimestamp="2025-09-09 05:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:35.631913765 +0000 UTC m=+45.486747781" watchObservedRunningTime="2025-09-09 05:36:35.661910228 +0000 UTC m=+45.516744257" Sep 9 05:36:35.778205 systemd-networkd[1466]: calie9720dd6802: Link UP Sep 9 05:36:35.781978 systemd-networkd[1466]: calie9720dd6802: Gained carrier Sep 9 05:36:35.815851 systemd-networkd[1466]: cali52de7c3ef4c: Gained IPv6LL Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.493 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0 calico-kube-controllers-66974546f4- calico-system 2bf3b363-0425-4282-9c8a-2f3d556d118b 878 0 2025-09-09 05:36:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66974546f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4452.0.0-n-58b1c71666 calico-kube-controllers-66974546f4-nfjvv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie9720dd6802 [] [] }} ContainerID="59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" Namespace="calico-system" 
Pod="calico-kube-controllers-66974546f4-nfjvv" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.493 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" Namespace="calico-system" Pod="calico-kube-controllers-66974546f4-nfjvv" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.612 [INFO][4355] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" HandleID="k8s-pod-network.59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" Workload="ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.612 [INFO][4355] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" HandleID="k8s-pod-network.59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" Workload="ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123cf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-58b1c71666", "pod":"calico-kube-controllers-66974546f4-nfjvv", "timestamp":"2025-09-09 05:36:35.612086286 +0000 UTC"}, Hostname:"ci-4452.0.0-n-58b1c71666", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.612 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.612 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.612 [INFO][4355] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-58b1c71666' Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.640 [INFO][4355] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.673 [INFO][4355] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.704 [INFO][4355] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.717 [INFO][4355] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.726 [INFO][4355] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.726 [INFO][4355] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.730 [INFO][4355] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60 Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.740 [INFO][4355] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" 
host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.758 [INFO][4355] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.68/26] block=192.168.94.64/26 handle="k8s-pod-network.59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.758 [INFO][4355] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.68/26] handle="k8s-pod-network.59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.758 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:35.845512 containerd[1574]: 2025-09-09 05:36:35.758 [INFO][4355] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.68/26] IPv6=[] ContainerID="59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" HandleID="k8s-pod-network.59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" Workload="ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0" Sep 9 05:36:35.848957 containerd[1574]: 2025-09-09 05:36:35.770 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" Namespace="calico-system" Pod="calico-kube-controllers-66974546f4-nfjvv" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0", GenerateName:"calico-kube-controllers-66974546f4-", Namespace:"calico-system", SelfLink:"", UID:"2bf3b363-0425-4282-9c8a-2f3d556d118b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 13, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66974546f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"", Pod:"calico-kube-controllers-66974546f4-nfjvv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9720dd6802", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:35.848957 containerd[1574]: 2025-09-09 05:36:35.771 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.68/32] ContainerID="59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" Namespace="calico-system" Pod="calico-kube-controllers-66974546f4-nfjvv" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0" Sep 9 05:36:35.848957 containerd[1574]: 2025-09-09 05:36:35.771 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9720dd6802 ContainerID="59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" Namespace="calico-system" Pod="calico-kube-controllers-66974546f4-nfjvv" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0" Sep 9 05:36:35.848957 containerd[1574]: 2025-09-09 05:36:35.783 [INFO][4310] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" Namespace="calico-system" Pod="calico-kube-controllers-66974546f4-nfjvv" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0" Sep 9 05:36:35.848957 containerd[1574]: 2025-09-09 05:36:35.788 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" Namespace="calico-system" Pod="calico-kube-controllers-66974546f4-nfjvv" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0", GenerateName:"calico-kube-controllers-66974546f4-", Namespace:"calico-system", SelfLink:"", UID:"2bf3b363-0425-4282-9c8a-2f3d556d118b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66974546f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60", Pod:"calico-kube-controllers-66974546f4-nfjvv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9720dd6802", MAC:"ea:c9:8c:2b:f1:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:35.848957 containerd[1574]: 2025-09-09 05:36:35.824 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" Namespace="calico-system" Pod="calico-kube-controllers-66974546f4-nfjvv" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--kube--controllers--66974546f4--nfjvv-eth0" Sep 9 05:36:35.921649 systemd-networkd[1466]: cali4d9c8c2ada2: Link UP Sep 9 05:36:35.926056 systemd-networkd[1466]: cali4d9c8c2ada2: Gained carrier Sep 9 05:36:35.952085 containerd[1574]: time="2025-09-09T05:36:35.952021379Z" level=info msg="connecting to shim 59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60" address="unix:///run/containerd/s/314db72165d987e692299e97434e0cbe0b1fcf780bdbe69868f0059f88155171" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.491 [INFO][4319] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0 csi-node-driver- calico-system a830bdbb-cfd6-4f41-8465-5085b9d24e9d 765 0 2025-09-09 05:36:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4452.0.0-n-58b1c71666 csi-node-driver-c78xj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] 
cali4d9c8c2ada2 [] [] }} ContainerID="be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" Namespace="calico-system" Pod="csi-node-driver-c78xj" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.491 [INFO][4319] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" Namespace="calico-system" Pod="csi-node-driver-c78xj" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.623 [INFO][4353] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" HandleID="k8s-pod-network.be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" Workload="ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.624 [INFO][4353] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" HandleID="k8s-pod-network.be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" Workload="ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307ef0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-58b1c71666", "pod":"csi-node-driver-c78xj", "timestamp":"2025-09-09 05:36:35.623837534 +0000 UTC"}, Hostname:"ci-4452.0.0-n-58b1c71666", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.624 [INFO][4353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.758 [INFO][4353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.758 [INFO][4353] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-58b1c71666' Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.795 [INFO][4353] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.828 [INFO][4353] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.844 [INFO][4353] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.851 [INFO][4353] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.859 [INFO][4353] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.859 [INFO][4353] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.861 [INFO][4353] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1 Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.870 [INFO][4353] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" 
host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.886 [INFO][4353] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.69/26] block=192.168.94.64/26 handle="k8s-pod-network.be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.887 [INFO][4353] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.69/26] handle="k8s-pod-network.be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.888 [INFO][4353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:35.986392 containerd[1574]: 2025-09-09 05:36:35.888 [INFO][4353] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.69/26] IPv6=[] ContainerID="be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" HandleID="k8s-pod-network.be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" Workload="ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0" Sep 9 05:36:35.987060 containerd[1574]: 2025-09-09 05:36:35.895 [INFO][4319] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" Namespace="calico-system" Pod="csi-node-driver-c78xj" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a830bdbb-cfd6-4f41-8465-5085b9d24e9d", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"", Pod:"csi-node-driver-c78xj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d9c8c2ada2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:35.987060 containerd[1574]: 2025-09-09 05:36:35.896 [INFO][4319] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.69/32] ContainerID="be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" Namespace="calico-system" Pod="csi-node-driver-c78xj" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0" Sep 9 05:36:35.987060 containerd[1574]: 2025-09-09 05:36:35.896 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d9c8c2ada2 ContainerID="be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" Namespace="calico-system" Pod="csi-node-driver-c78xj" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0" Sep 9 05:36:35.987060 containerd[1574]: 2025-09-09 05:36:35.931 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" Namespace="calico-system" 
Pod="csi-node-driver-c78xj" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0" Sep 9 05:36:35.987060 containerd[1574]: 2025-09-09 05:36:35.933 [INFO][4319] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" Namespace="calico-system" Pod="csi-node-driver-c78xj" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a830bdbb-cfd6-4f41-8465-5085b9d24e9d", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1", Pod:"csi-node-driver-c78xj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d9c8c2ada2", MAC:"46:f0:df:bc:16:bd", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:35.987060 containerd[1574]: 2025-09-09 05:36:35.962 [INFO][4319] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" Namespace="calico-system" Pod="csi-node-driver-c78xj" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-csi--node--driver--c78xj-eth0" Sep 9 05:36:36.014596 systemd-networkd[1466]: calife4fc18f28f: Gained IPv6LL Sep 9 05:36:36.019008 systemd[1]: Started cri-containerd-59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60.scope - libcontainer container 59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60. Sep 9 05:36:36.058729 containerd[1574]: time="2025-09-09T05:36:36.057955743Z" level=info msg="connecting to shim be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1" address="unix:///run/containerd/s/6f7bfdb9e8c0fdc108ff1dcd59fcf128947bc93866a5b5d31d535b81585223d1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:36.088923 systemd-networkd[1466]: cali998e2749f1e: Link UP Sep 9 05:36:36.100710 systemd-networkd[1466]: cali998e2749f1e: Gained carrier Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.505 [INFO][4317] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0 goldmane-7988f88666- calico-system 52e91309-f573-4708-b86d-2973239db1d7 881 0 2025-09-09 05:36:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4452.0.0-n-58b1c71666 goldmane-7988f88666-68l7r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali998e2749f1e [] [] }} 
ContainerID="ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" Namespace="calico-system" Pod="goldmane-7988f88666-68l7r" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.508 [INFO][4317] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" Namespace="calico-system" Pod="goldmane-7988f88666-68l7r" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.744 [INFO][4362] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" HandleID="k8s-pod-network.ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" Workload="ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.746 [INFO][4362] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" HandleID="k8s-pod-network.ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" Workload="ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047fd90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-58b1c71666", "pod":"goldmane-7988f88666-68l7r", "timestamp":"2025-09-09 05:36:35.744613419 +0000 UTC"}, Hostname:"ci-4452.0.0-n-58b1c71666", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.746 [INFO][4362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.888 [INFO][4362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.888 [INFO][4362] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-58b1c71666' Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.920 [INFO][4362] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.945 [INFO][4362] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.984 [INFO][4362] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.992 [INFO][4362] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.998 [INFO][4362] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:35.998 [INFO][4362] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:36.004 [INFO][4362] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6 Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:36.049 [INFO][4362] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" 
host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:36.067 [INFO][4362] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.70/26] block=192.168.94.64/26 handle="k8s-pod-network.ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:36.067 [INFO][4362] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.70/26] handle="k8s-pod-network.ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:36.068 [INFO][4362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:36.155880 containerd[1574]: 2025-09-09 05:36:36.068 [INFO][4362] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.70/26] IPv6=[] ContainerID="ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" HandleID="k8s-pod-network.ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" Workload="ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0" Sep 9 05:36:36.156877 containerd[1574]: 2025-09-09 05:36:36.076 [INFO][4317] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" Namespace="calico-system" Pod="goldmane-7988f88666-68l7r" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"52e91309-f573-4708-b86d-2973239db1d7", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"", Pod:"goldmane-7988f88666-68l7r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali998e2749f1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:36.156877 containerd[1574]: 2025-09-09 05:36:36.079 [INFO][4317] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.70/32] ContainerID="ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" Namespace="calico-system" Pod="goldmane-7988f88666-68l7r" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0" Sep 9 05:36:36.156877 containerd[1574]: 2025-09-09 05:36:36.079 [INFO][4317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali998e2749f1e ContainerID="ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" Namespace="calico-system" Pod="goldmane-7988f88666-68l7r" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0" Sep 9 05:36:36.156877 containerd[1574]: 2025-09-09 05:36:36.103 [INFO][4317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" Namespace="calico-system" Pod="goldmane-7988f88666-68l7r" 
WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0" Sep 9 05:36:36.156877 containerd[1574]: 2025-09-09 05:36:36.107 [INFO][4317] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" Namespace="calico-system" Pod="goldmane-7988f88666-68l7r" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"52e91309-f573-4708-b86d-2973239db1d7", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6", Pod:"goldmane-7988f88666-68l7r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali998e2749f1e", MAC:"ba:21:7d:99:d9:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:36.156877 
containerd[1574]: 2025-09-09 05:36:36.136 [INFO][4317] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" Namespace="calico-system" Pod="goldmane-7988f88666-68l7r" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-goldmane--7988f88666--68l7r-eth0" Sep 9 05:36:36.158676 systemd[1]: Started cri-containerd-be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1.scope - libcontainer container be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1. Sep 9 05:36:36.217576 containerd[1574]: time="2025-09-09T05:36:36.217505694Z" level=info msg="connecting to shim ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6" address="unix:///run/containerd/s/79bc1dc2a8de380401a07702669121e4b2e977c0695e233c594292780d3878cc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:36.269993 containerd[1574]: time="2025-09-09T05:36:36.269932674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c78xj,Uid:a830bdbb-cfd6-4f41-8465-5085b9d24e9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1\"" Sep 9 05:36:36.298700 kubelet[2710]: E0909 05:36:36.298322 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:36.298764 systemd[1]: Started cri-containerd-ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6.scope - libcontainer container ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6. 
Sep 9 05:36:36.303541 containerd[1574]: time="2025-09-09T05:36:36.303367907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xfkng,Uid:5b161cf8-ae97-452d-ac9c-603e0edd6644,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:36.419410 containerd[1574]: time="2025-09-09T05:36:36.419043747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66974546f4-nfjvv,Uid:2bf3b363-0425-4282-9c8a-2f3d556d118b,Namespace:calico-system,Attempt:0,} returns sandbox id \"59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60\"" Sep 9 05:36:36.490561 containerd[1574]: time="2025-09-09T05:36:36.490488390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-68l7r,Uid:52e91309-f573-4708-b86d-2973239db1d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6\"" Sep 9 05:36:36.621341 systemd-networkd[1466]: calie106b5b31ed: Link UP Sep 9 05:36:36.621944 systemd-networkd[1466]: calie106b5b31ed: Gained carrier Sep 9 05:36:36.641633 kubelet[2710]: E0909 05:36:36.641599 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.479 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0 coredns-7c65d6cfc9- kube-system 5b161cf8-ae97-452d-ac9c-603e0edd6644 877 0 2025-09-09 05:35:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4452.0.0-n-58b1c71666 coredns-7c65d6cfc9-xfkng eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie106b5b31ed [{dns UDP 53 0 } {dns-tcp 
TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xfkng" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.479 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xfkng" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.542 [INFO][4560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" HandleID="k8s-pod-network.50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" Workload="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.542 [INFO][4560] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" HandleID="k8s-pod-network.50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" Workload="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5960), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452.0.0-n-58b1c71666", "pod":"coredns-7c65d6cfc9-xfkng", "timestamp":"2025-09-09 05:36:36.542033195 +0000 UTC"}, Hostname:"ci-4452.0.0-n-58b1c71666", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.542 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.542 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.542 [INFO][4560] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-58b1c71666' Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.554 [INFO][4560] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.563 [INFO][4560] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.574 [INFO][4560] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.578 [INFO][4560] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.581 [INFO][4560] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.581 [INFO][4560] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.584 [INFO][4560] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433 Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.594 [INFO][4560] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" 
host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.606 [INFO][4560] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.71/26] block=192.168.94.64/26 handle="k8s-pod-network.50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.606 [INFO][4560] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.71/26] handle="k8s-pod-network.50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.606 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:36.663716 containerd[1574]: 2025-09-09 05:36:36.606 [INFO][4560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.71/26] IPv6=[] ContainerID="50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" HandleID="k8s-pod-network.50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" Workload="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0" Sep 9 05:36:36.664954 containerd[1574]: 2025-09-09 05:36:36.611 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xfkng" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5b161cf8-ae97-452d-ac9c-603e0edd6644", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"", Pod:"coredns-7c65d6cfc9-xfkng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie106b5b31ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:36.664954 containerd[1574]: 2025-09-09 05:36:36.612 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.71/32] ContainerID="50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xfkng" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0" Sep 9 05:36:36.664954 containerd[1574]: 2025-09-09 05:36:36.612 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie106b5b31ed ContainerID="50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xfkng" 
WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0" Sep 9 05:36:36.664954 containerd[1574]: 2025-09-09 05:36:36.622 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xfkng" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0" Sep 9 05:36:36.664954 containerd[1574]: 2025-09-09 05:36:36.622 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xfkng" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5b161cf8-ae97-452d-ac9c-603e0edd6644", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433", Pod:"coredns-7c65d6cfc9-xfkng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.71/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie106b5b31ed", MAC:"66:dc:83:00:71:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:36.664954 containerd[1574]: 2025-09-09 05:36:36.652 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xfkng" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-coredns--7c65d6cfc9--xfkng-eth0" Sep 9 05:36:36.733166 containerd[1574]: time="2025-09-09T05:36:36.732955735Z" level=info msg="connecting to shim 50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433" address="unix:///run/containerd/s/153d20177f43af198b173e39c4dcdf342a1d033f9a45b47fc3bc68cb38997caa" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:36.789832 systemd[1]: Started cri-containerd-50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433.scope - libcontainer container 50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433. 
Sep 9 05:36:36.916352 containerd[1574]: time="2025-09-09T05:36:36.916253882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xfkng,Uid:5b161cf8-ae97-452d-ac9c-603e0edd6644,Namespace:kube-system,Attempt:0,} returns sandbox id \"50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433\"" Sep 9 05:36:36.921156 kubelet[2710]: E0909 05:36:36.920658 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:36.930212 containerd[1574]: time="2025-09-09T05:36:36.930141625Z" level=info msg="CreateContainer within sandbox \"50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:36:36.969875 containerd[1574]: time="2025-09-09T05:36:36.969824796Z" level=info msg="Container 6b3521030c0daa6384a51793bead937c161791f9132c44266da2d53e971beb1e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:36.977665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2653217563.mount: Deactivated successfully. 
Sep 9 05:36:36.985564 containerd[1574]: time="2025-09-09T05:36:36.985076664Z" level=info msg="CreateContainer within sandbox \"50002530a85a357cfb69320f7cb84b28c9ed22a54d17c923665989d26318a433\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b3521030c0daa6384a51793bead937c161791f9132c44266da2d53e971beb1e\"" Sep 9 05:36:36.987347 containerd[1574]: time="2025-09-09T05:36:36.987312535Z" level=info msg="StartContainer for \"6b3521030c0daa6384a51793bead937c161791f9132c44266da2d53e971beb1e\"" Sep 9 05:36:37.012666 containerd[1574]: time="2025-09-09T05:36:37.012505852Z" level=info msg="connecting to shim 6b3521030c0daa6384a51793bead937c161791f9132c44266da2d53e971beb1e" address="unix:///run/containerd/s/153d20177f43af198b173e39c4dcdf342a1d033f9a45b47fc3bc68cb38997caa" protocol=ttrpc version=3 Sep 9 05:36:37.057101 systemd[1]: Started cri-containerd-6b3521030c0daa6384a51793bead937c161791f9132c44266da2d53e971beb1e.scope - libcontainer container 6b3521030c0daa6384a51793bead937c161791f9132c44266da2d53e971beb1e. 
Sep 9 05:36:37.127648 containerd[1574]: time="2025-09-09T05:36:37.127591186Z" level=info msg="StartContainer for \"6b3521030c0daa6384a51793bead937c161791f9132c44266da2d53e971beb1e\" returns successfully" Sep 9 05:36:37.297702 containerd[1574]: time="2025-09-09T05:36:37.297557148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb8568b76-29tg6,Uid:14bedfcf-cf8c-4398-9812-3a0692728e52,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:36:37.348826 containerd[1574]: time="2025-09-09T05:36:37.348337183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 05:36:37.353364 containerd[1574]: time="2025-09-09T05:36:37.353293518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:37.356756 containerd[1574]: time="2025-09-09T05:36:37.356658853Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:37.359567 containerd[1574]: time="2025-09-09T05:36:37.359449188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.787102994s" Sep 9 05:36:37.359567 containerd[1574]: time="2025-09-09T05:36:37.359495399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 05:36:37.360082 containerd[1574]: time="2025-09-09T05:36:37.360046172Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:37.366450 containerd[1574]: time="2025-09-09T05:36:37.366391420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:36:37.393974 containerd[1574]: time="2025-09-09T05:36:37.393880402Z" level=info msg="CreateContainer within sandbox \"37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 05:36:37.405470 containerd[1574]: time="2025-09-09T05:36:37.405418934Z" level=info msg="Container 126f6098c910948bc03d267a767337085a0b42593872e0791497037f717ce2ba: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:37.427597 containerd[1574]: time="2025-09-09T05:36:37.427467926Z" level=info msg="CreateContainer within sandbox \"37d8113a30a40b4012cbd13f6d5e64d3637682a916a2db5e68d1bc28f23dadd9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"126f6098c910948bc03d267a767337085a0b42593872e0791497037f717ce2ba\"" Sep 9 05:36:37.430420 containerd[1574]: time="2025-09-09T05:36:37.430261156Z" level=info msg="StartContainer for \"126f6098c910948bc03d267a767337085a0b42593872e0791497037f717ce2ba\"" Sep 9 05:36:37.441487 containerd[1574]: time="2025-09-09T05:36:37.441399282Z" level=info msg="connecting to shim 126f6098c910948bc03d267a767337085a0b42593872e0791497037f717ce2ba" address="unix:///run/containerd/s/c3984719621c867457ba43d7bb4f632ed1e2296e008ad09331777b3bbb392923" protocol=ttrpc version=3 Sep 9 05:36:37.478949 systemd[1]: Started cri-containerd-126f6098c910948bc03d267a767337085a0b42593872e0791497037f717ce2ba.scope - libcontainer container 126f6098c910948bc03d267a767337085a0b42593872e0791497037f717ce2ba. 
Sep 9 05:36:37.545923 systemd-networkd[1466]: cali4d9c8c2ada2: Gained IPv6LL Sep 9 05:36:37.561762 systemd-networkd[1466]: calic9d7786eceb: Link UP Sep 9 05:36:37.576678 systemd-networkd[1466]: calic9d7786eceb: Gained carrier Sep 9 05:36:37.581613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2948633824.mount: Deactivated successfully. Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.369 [INFO][4653] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0 calico-apiserver-5cb8568b76- calico-apiserver 14bedfcf-cf8c-4398-9812-3a0692728e52 879 0 2025-09-09 05:36:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cb8568b76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452.0.0-n-58b1c71666 calico-apiserver-5cb8568b76-29tg6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic9d7786eceb [] [] }} ContainerID="da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-29tg6" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.369 [INFO][4653] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-29tg6" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.414 [INFO][4670] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" HandleID="k8s-pod-network.da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" Workload="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.414 [INFO][4670] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" HandleID="k8s-pod-network.da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" Workload="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452.0.0-n-58b1c71666", "pod":"calico-apiserver-5cb8568b76-29tg6", "timestamp":"2025-09-09 05:36:37.414034782 +0000 UTC"}, Hostname:"ci-4452.0.0-n-58b1c71666", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.414 [INFO][4670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.414 [INFO][4670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.414 [INFO][4670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-58b1c71666' Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.436 [INFO][4670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.454 [INFO][4670] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.461 [INFO][4670] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.498 [INFO][4670] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.502 [INFO][4670] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.502 [INFO][4670] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.508 [INFO][4670] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1 Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.513 [INFO][4670] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.536 [INFO][4670] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.94.72/26] block=192.168.94.64/26 handle="k8s-pod-network.da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.536 [INFO][4670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.72/26] handle="k8s-pod-network.da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" host="ci-4452.0.0-n-58b1c71666" Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.536 [INFO][4670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:37.610017 containerd[1574]: 2025-09-09 05:36:37.536 [INFO][4670] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.72/26] IPv6=[] ContainerID="da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" HandleID="k8s-pod-network.da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" Workload="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0" Sep 9 05:36:37.615631 containerd[1574]: 2025-09-09 05:36:37.551 [INFO][4653] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-29tg6" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0", GenerateName:"calico-apiserver-5cb8568b76-", Namespace:"calico-apiserver", SelfLink:"", UID:"14bedfcf-cf8c-4398-9812-3a0692728e52", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5cb8568b76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"", Pod:"calico-apiserver-5cb8568b76-29tg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9d7786eceb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:37.615631 containerd[1574]: 2025-09-09 05:36:37.551 [INFO][4653] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.72/32] ContainerID="da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-29tg6" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0" Sep 9 05:36:37.615631 containerd[1574]: 2025-09-09 05:36:37.551 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9d7786eceb ContainerID="da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-29tg6" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0" Sep 9 05:36:37.615631 containerd[1574]: 2025-09-09 05:36:37.557 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-29tg6" 
WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0" Sep 9 05:36:37.615631 containerd[1574]: 2025-09-09 05:36:37.558 [INFO][4653] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-29tg6" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0", GenerateName:"calico-apiserver-5cb8568b76-", Namespace:"calico-apiserver", SelfLink:"", UID:"14bedfcf-cf8c-4398-9812-3a0692728e52", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cb8568b76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-58b1c71666", ContainerID:"da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1", Pod:"calico-apiserver-5cb8568b76-29tg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9d7786eceb", MAC:"56:dc:42:38:e6:55", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:37.615631 containerd[1574]: 2025-09-09 05:36:37.596 [INFO][4653] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8568b76-29tg6" WorkloadEndpoint="ci--4452.0.0--n--58b1c71666-k8s-calico--apiserver--5cb8568b76--29tg6-eth0" Sep 9 05:36:37.657193 kubelet[2710]: E0909 05:36:37.656611 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:37.671938 systemd-networkd[1466]: calie9720dd6802: Gained IPv6LL Sep 9 05:36:37.675215 kubelet[2710]: E0909 05:36:37.675174 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:37.693743 kubelet[2710]: I0909 05:36:37.693121 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xfkng" podStartSLOduration=41.69309443 podStartE2EDuration="41.69309443s" podCreationTimestamp="2025-09-09 05:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:37.692488862 +0000 UTC m=+47.547322891" watchObservedRunningTime="2025-09-09 05:36:37.69309443 +0000 UTC m=+47.547928480" Sep 9 05:36:37.701396 containerd[1574]: time="2025-09-09T05:36:37.700061833Z" level=info msg="connecting to shim da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1" address="unix:///run/containerd/s/c342fd6e6a1393857161c56ae67e5c5457599bd700e182471660f93a7c09adaa" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:37.738927 containerd[1574]: 
time="2025-09-09T05:36:37.738761289Z" level=info msg="StartContainer for \"126f6098c910948bc03d267a767337085a0b42593872e0791497037f717ce2ba\" returns successfully" Sep 9 05:36:37.783182 systemd[1]: Started cri-containerd-da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1.scope - libcontainer container da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1. Sep 9 05:36:37.931669 systemd-networkd[1466]: cali998e2749f1e: Gained IPv6LL Sep 9 05:36:37.961986 containerd[1574]: time="2025-09-09T05:36:37.961940499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb8568b76-29tg6,Uid:14bedfcf-cf8c-4398-9812-3a0692728e52,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1\"" Sep 9 05:36:38.503686 systemd-networkd[1466]: calie106b5b31ed: Gained IPv6LL Sep 9 05:36:38.694066 kubelet[2710]: E0909 05:36:38.692442 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:38.730377 kubelet[2710]: I0909 05:36:38.729799 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-745564d87f-mpb96" podStartSLOduration=3.136508081 podStartE2EDuration="8.729686865s" podCreationTimestamp="2025-09-09 05:36:30 +0000 UTC" firstStartedPulling="2025-09-09 05:36:31.770275509 +0000 UTC m=+41.625109505" lastFinishedPulling="2025-09-09 05:36:37.363454256 +0000 UTC m=+47.218288289" observedRunningTime="2025-09-09 05:36:38.723307425 +0000 UTC m=+48.578141451" watchObservedRunningTime="2025-09-09 05:36:38.729686865 +0000 UTC m=+48.584520898" Sep 9 05:36:39.401008 systemd-networkd[1466]: calic9d7786eceb: Gained IPv6LL Sep 9 05:36:39.698345 kubelet[2710]: E0909 05:36:39.697771 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:41.048926 containerd[1574]: time="2025-09-09T05:36:41.048832190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:41.051751 containerd[1574]: time="2025-09-09T05:36:41.051657039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 05:36:41.053191 containerd[1574]: time="2025-09-09T05:36:41.052954173Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:41.061398 containerd[1574]: time="2025-09-09T05:36:41.060002362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:41.061398 containerd[1574]: time="2025-09-09T05:36:41.060482724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.694027235s" Sep 9 05:36:41.061895 containerd[1574]: time="2025-09-09T05:36:41.061797326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 05:36:41.065137 containerd[1574]: time="2025-09-09T05:36:41.065050505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 05:36:41.068414 containerd[1574]: time="2025-09-09T05:36:41.067653374Z" level=info msg="CreateContainer within sandbox 
\"b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:36:41.095300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800224237.mount: Deactivated successfully. Sep 9 05:36:41.122072 containerd[1574]: time="2025-09-09T05:36:41.121988867Z" level=info msg="Container 86882c34d9b45c4dff3a2f2fce38883851cd91746839d6b12e1d73122280b14b: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:41.131620 containerd[1574]: time="2025-09-09T05:36:41.131567290Z" level=info msg="CreateContainer within sandbox \"b9c2fb6056589c3c58a1cacbd48e39051ab4b0814a82bff338d2f8fbb44de4d8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"86882c34d9b45c4dff3a2f2fce38883851cd91746839d6b12e1d73122280b14b\"" Sep 9 05:36:41.133260 containerd[1574]: time="2025-09-09T05:36:41.133178794Z" level=info msg="StartContainer for \"86882c34d9b45c4dff3a2f2fce38883851cd91746839d6b12e1d73122280b14b\"" Sep 9 05:36:41.134867 containerd[1574]: time="2025-09-09T05:36:41.134652120Z" level=info msg="connecting to shim 86882c34d9b45c4dff3a2f2fce38883851cd91746839d6b12e1d73122280b14b" address="unix:///run/containerd/s/52e8b468fbbeefc8bae7bbf81ed3a7d724e2c70a56b5eada8650e98f1bddeb73" protocol=ttrpc version=3 Sep 9 05:36:41.208135 systemd[1]: Started cri-containerd-86882c34d9b45c4dff3a2f2fce38883851cd91746839d6b12e1d73122280b14b.scope - libcontainer container 86882c34d9b45c4dff3a2f2fce38883851cd91746839d6b12e1d73122280b14b. 
Sep 9 05:36:41.438552 containerd[1574]: time="2025-09-09T05:36:41.437763075Z" level=info msg="StartContainer for \"86882c34d9b45c4dff3a2f2fce38883851cd91746839d6b12e1d73122280b14b\" returns successfully" Sep 9 05:36:42.734211 containerd[1574]: time="2025-09-09T05:36:42.733545431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:42.735651 containerd[1574]: time="2025-09-09T05:36:42.735603563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 05:36:42.736568 containerd[1574]: time="2025-09-09T05:36:42.736504144Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:42.745553 containerd[1574]: time="2025-09-09T05:36:42.745259084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:42.752438 containerd[1574]: time="2025-09-09T05:36:42.752366190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.687234379s" Sep 9 05:36:42.753621 containerd[1574]: time="2025-09-09T05:36:42.753568145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 05:36:42.763020 containerd[1574]: time="2025-09-09T05:36:42.762775775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 
05:36:42.767783 kubelet[2710]: I0909 05:36:42.767075 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:36:42.769597 containerd[1574]: time="2025-09-09T05:36:42.769038899Z" level=info msg="CreateContainer within sandbox \"be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 05:36:42.789750 containerd[1574]: time="2025-09-09T05:36:42.789703977Z" level=info msg="Container 8ede9626430ec87f5e60c552fc9a86b429f16c6923f0b38ae8682045236b2d43: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:42.800402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851690939.mount: Deactivated successfully. Sep 9 05:36:42.847828 containerd[1574]: time="2025-09-09T05:36:42.847754026Z" level=info msg="CreateContainer within sandbox \"be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8ede9626430ec87f5e60c552fc9a86b429f16c6923f0b38ae8682045236b2d43\"" Sep 9 05:36:42.850867 containerd[1574]: time="2025-09-09T05:36:42.850810289Z" level=info msg="StartContainer for \"8ede9626430ec87f5e60c552fc9a86b429f16c6923f0b38ae8682045236b2d43\"" Sep 9 05:36:42.858304 containerd[1574]: time="2025-09-09T05:36:42.856784650Z" level=info msg="connecting to shim 8ede9626430ec87f5e60c552fc9a86b429f16c6923f0b38ae8682045236b2d43" address="unix:///run/containerd/s/6f7bfdb9e8c0fdc108ff1dcd59fcf128947bc93866a5b5d31d535b81585223d1" protocol=ttrpc version=3 Sep 9 05:36:42.914089 systemd[1]: Started cri-containerd-8ede9626430ec87f5e60c552fc9a86b429f16c6923f0b38ae8682045236b2d43.scope - libcontainer container 8ede9626430ec87f5e60c552fc9a86b429f16c6923f0b38ae8682045236b2d43. 
Sep 9 05:36:43.063669 containerd[1574]: time="2025-09-09T05:36:43.062465642Z" level=info msg="StartContainer for \"8ede9626430ec87f5e60c552fc9a86b429f16c6923f0b38ae8682045236b2d43\" returns successfully" Sep 9 05:36:46.381100 containerd[1574]: time="2025-09-09T05:36:46.379842072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:46.387452 containerd[1574]: time="2025-09-09T05:36:46.387347910Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:46.393546 containerd[1574]: time="2025-09-09T05:36:46.392685544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 05:36:46.394406 containerd[1574]: time="2025-09-09T05:36:46.394351403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:46.396214 containerd[1574]: time="2025-09-09T05:36:46.396143116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.633319365s" Sep 9 05:36:46.396214 containerd[1574]: time="2025-09-09T05:36:46.396210350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 05:36:46.400756 containerd[1574]: time="2025-09-09T05:36:46.400699957Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 05:36:46.513484 containerd[1574]: time="2025-09-09T05:36:46.512083249Z" level=info msg="CreateContainer within sandbox \"59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 05:36:46.575799 containerd[1574]: time="2025-09-09T05:36:46.575721274Z" level=info msg="Container bc29a6231d361c031a6ffc96615b8cfafe11e8cc35f9a00afc8fee494714bcc2: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:46.608806 containerd[1574]: time="2025-09-09T05:36:46.607119710Z" level=info msg="CreateContainer within sandbox \"59e3a2c0ce286c8f36781b815ae0e60ed4be6fcc14dbd991c443a060a8e1ac60\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bc29a6231d361c031a6ffc96615b8cfafe11e8cc35f9a00afc8fee494714bcc2\"" Sep 9 05:36:46.610431 containerd[1574]: time="2025-09-09T05:36:46.610373500Z" level=info msg="StartContainer for \"bc29a6231d361c031a6ffc96615b8cfafe11e8cc35f9a00afc8fee494714bcc2\"" Sep 9 05:36:46.615346 containerd[1574]: time="2025-09-09T05:36:46.615161083Z" level=info msg="connecting to shim bc29a6231d361c031a6ffc96615b8cfafe11e8cc35f9a00afc8fee494714bcc2" address="unix:///run/containerd/s/314db72165d987e692299e97434e0cbe0b1fcf780bdbe69868f0059f88155171" protocol=ttrpc version=3 Sep 9 05:36:46.728646 systemd[1]: Started cri-containerd-bc29a6231d361c031a6ffc96615b8cfafe11e8cc35f9a00afc8fee494714bcc2.scope - libcontainer container bc29a6231d361c031a6ffc96615b8cfafe11e8cc35f9a00afc8fee494714bcc2. 
Sep 9 05:36:46.893440 containerd[1574]: time="2025-09-09T05:36:46.893395054Z" level=info msg="StartContainer for \"bc29a6231d361c031a6ffc96615b8cfafe11e8cc35f9a00afc8fee494714bcc2\" returns successfully" Sep 9 05:36:48.069818 kubelet[2710]: I0909 05:36:48.058894 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cb8568b76-bcp7l" podStartSLOduration=33.810202901 podStartE2EDuration="40.038330157s" podCreationTimestamp="2025-09-09 05:36:08 +0000 UTC" firstStartedPulling="2025-09-09 05:36:34.835769354 +0000 UTC m=+44.690603364" lastFinishedPulling="2025-09-09 05:36:41.063896611 +0000 UTC m=+50.918730620" observedRunningTime="2025-09-09 05:36:41.781790522 +0000 UTC m=+51.636624555" watchObservedRunningTime="2025-09-09 05:36:48.038330157 +0000 UTC m=+57.893164173" Sep 9 05:36:48.073757 kubelet[2710]: I0909 05:36:48.072894 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66974546f4-nfjvv" podStartSLOduration=25.100210541 podStartE2EDuration="35.072870128s" podCreationTimestamp="2025-09-09 05:36:13 +0000 UTC" firstStartedPulling="2025-09-09 05:36:36.426943891 +0000 UTC m=+46.281777892" lastFinishedPulling="2025-09-09 05:36:46.399603462 +0000 UTC m=+56.254437479" observedRunningTime="2025-09-09 05:36:48.024369047 +0000 UTC m=+57.879203070" watchObservedRunningTime="2025-09-09 05:36:48.072870128 +0000 UTC m=+57.927704141" Sep 9 05:36:48.224767 containerd[1574]: time="2025-09-09T05:36:48.224701048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc29a6231d361c031a6ffc96615b8cfafe11e8cc35f9a00afc8fee494714bcc2\" id:\"66b3d1924964db3abaf1d5e1394b1fe49ed3d80c4f90521f992b1ab032f2daff\" pid:4933 exited_at:{seconds:1757396208 nanos:222620911}" Sep 9 05:36:48.566380 systemd[1]: Started sshd@7-143.198.157.2:22-139.178.89.65:51502.service - OpenSSH per-connection server daemon (139.178.89.65:51502). 
Sep 9 05:36:48.914947 sshd[4943]: Accepted publickey for core from 139.178.89.65 port 51502 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk Sep 9 05:36:48.924509 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:36:48.956401 systemd-logind[1552]: New session 8 of user core. Sep 9 05:36:48.962191 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 05:36:50.030658 sshd[4946]: Connection closed by 139.178.89.65 port 51502 Sep 9 05:36:50.030995 sshd-session[4943]: pam_unix(sshd:session): session closed for user core Sep 9 05:36:50.056641 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:36:50.057390 systemd[1]: sshd@7-143.198.157.2:22-139.178.89.65:51502.service: Deactivated successfully. Sep 9 05:36:50.065365 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 05:36:50.072990 systemd-logind[1552]: Removed session 8. Sep 9 05:36:51.644811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844275999.mount: Deactivated successfully. 
Sep 9 05:36:52.648807 containerd[1574]: time="2025-09-09T05:36:52.647921630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc29a6231d361c031a6ffc96615b8cfafe11e8cc35f9a00afc8fee494714bcc2\" id:\"811c742a514252d75fd05ef369e02133ae28c26bfa9f51b78b2790c8892a5c09\" pid:4981 exited_at:{seconds:1757396212 nanos:647331596}" Sep 9 05:36:53.084412 containerd[1574]: time="2025-09-09T05:36:53.083918342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:53.090613 containerd[1574]: time="2025-09-09T05:36:53.090496269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 05:36:53.112262 containerd[1574]: time="2025-09-09T05:36:53.112186403Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:53.119690 containerd[1574]: time="2025-09-09T05:36:53.119619591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.718727803s" Sep 9 05:36:53.119690 containerd[1574]: time="2025-09-09T05:36:53.119665273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 9 05:36:53.123425 containerd[1574]: time="2025-09-09T05:36:53.123369119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:36:53.133662 containerd[1574]: time="2025-09-09T05:36:53.133613504Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:53.135490 containerd[1574]: time="2025-09-09T05:36:53.135439805Z" level=info msg="CreateContainer within sandbox \"ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 05:36:53.147895 containerd[1574]: time="2025-09-09T05:36:53.147824790Z" level=info msg="Container 41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:53.165117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280846668.mount: Deactivated successfully. Sep 9 05:36:53.170134 containerd[1574]: time="2025-09-09T05:36:53.170043610Z" level=info msg="CreateContainer within sandbox \"ad36ff8219bfb494d39df5a774990f4c8ea808b307f4109adc1ca158a08a4ad6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0\"" Sep 9 05:36:53.173600 containerd[1574]: time="2025-09-09T05:36:53.172671798Z" level=info msg="StartContainer for \"41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0\"" Sep 9 05:36:53.175439 containerd[1574]: time="2025-09-09T05:36:53.174104934Z" level=info msg="connecting to shim 41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0" address="unix:///run/containerd/s/79bc1dc2a8de380401a07702669121e4b2e977c0695e233c594292780d3878cc" protocol=ttrpc version=3 Sep 9 05:36:53.217000 systemd[1]: Started cri-containerd-41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0.scope - libcontainer container 41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0. 
Sep 9 05:36:53.336799 containerd[1574]: time="2025-09-09T05:36:53.336659682Z" level=info msg="StartContainer for \"41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0\" returns successfully" Sep 9 05:36:53.530574 containerd[1574]: time="2025-09-09T05:36:53.530349132Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:53.532702 containerd[1574]: time="2025-09-09T05:36:53.531323147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 05:36:53.533986 containerd[1574]: time="2025-09-09T05:36:53.533938203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 410.514835ms" Sep 9 05:36:53.533986 containerd[1574]: time="2025-09-09T05:36:53.533985359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 05:36:53.536696 containerd[1574]: time="2025-09-09T05:36:53.536658106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 05:36:53.544147 containerd[1574]: time="2025-09-09T05:36:53.544065624Z" level=info msg="CreateContainer within sandbox \"da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:36:53.599823 containerd[1574]: time="2025-09-09T05:36:53.597843713Z" level=info msg="Container e77863331ad986d2c34152e8417fb59b1fe103a1373e125268111caa808884db: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:53.616687 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4193231266.mount: Deactivated successfully. Sep 9 05:36:53.627943 containerd[1574]: time="2025-09-09T05:36:53.627866613Z" level=info msg="CreateContainer within sandbox \"da57be279ceae8decc13f15e49c2e9dbd1670c70e78d50c2ef6ce45cc3820df1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e77863331ad986d2c34152e8417fb59b1fe103a1373e125268111caa808884db\"" Sep 9 05:36:53.632148 containerd[1574]: time="2025-09-09T05:36:53.632009636Z" level=info msg="StartContainer for \"e77863331ad986d2c34152e8417fb59b1fe103a1373e125268111caa808884db\"" Sep 9 05:36:53.637460 containerd[1574]: time="2025-09-09T05:36:53.637412651Z" level=info msg="connecting to shim e77863331ad986d2c34152e8417fb59b1fe103a1373e125268111caa808884db" address="unix:///run/containerd/s/c342fd6e6a1393857161c56ae67e5c5457599bd700e182471660f93a7c09adaa" protocol=ttrpc version=3 Sep 9 05:36:53.691412 systemd[1]: Started cri-containerd-e77863331ad986d2c34152e8417fb59b1fe103a1373e125268111caa808884db.scope - libcontainer container e77863331ad986d2c34152e8417fb59b1fe103a1373e125268111caa808884db. 
Sep 9 05:36:53.802315 containerd[1574]: time="2025-09-09T05:36:53.802090534Z" level=info msg="StartContainer for \"e77863331ad986d2c34152e8417fb59b1fe103a1373e125268111caa808884db\" returns successfully" Sep 9 05:36:53.965135 kubelet[2710]: I0909 05:36:53.964826 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cb8568b76-29tg6" podStartSLOduration=30.392568989 podStartE2EDuration="45.964800883s" podCreationTimestamp="2025-09-09 05:36:08 +0000 UTC" firstStartedPulling="2025-09-09 05:36:37.964140333 +0000 UTC m=+47.818974325" lastFinishedPulling="2025-09-09 05:36:53.536372212 +0000 UTC m=+63.391206219" observedRunningTime="2025-09-09 05:36:53.941112722 +0000 UTC m=+63.795946734" watchObservedRunningTime="2025-09-09 05:36:53.964800883 +0000 UTC m=+63.819634893" Sep 9 05:36:53.966619 kubelet[2710]: I0909 05:36:53.966553 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-68l7r" podStartSLOduration=25.338636794 podStartE2EDuration="41.966322134s" podCreationTimestamp="2025-09-09 05:36:12 +0000 UTC" firstStartedPulling="2025-09-09 05:36:36.494423853 +0000 UTC m=+46.349257845" lastFinishedPulling="2025-09-09 05:36:53.122109176 +0000 UTC m=+62.976943185" observedRunningTime="2025-09-09 05:36:53.965110521 +0000 UTC m=+63.819944537" watchObservedRunningTime="2025-09-09 05:36:53.966322134 +0000 UTC m=+63.821156160" Sep 9 05:36:54.284246 containerd[1574]: time="2025-09-09T05:36:54.283908872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0\" id:\"6a6242fce16682951a270d9d6b2a79041854b85c391bfde3748f4ad79fbaf16c\" pid:5074 exit_status:1 exited_at:{seconds:1757396214 nanos:279743044}" Sep 9 05:36:55.053994 systemd[1]: Started sshd@8-143.198.157.2:22-139.178.89.65:48944.service - OpenSSH per-connection server daemon (139.178.89.65:48944). 
Sep 9 05:36:55.389836 sshd[5113]: Accepted publickey for core from 139.178.89.65 port 48944 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk Sep 9 05:36:55.406063 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:36:55.423847 systemd-logind[1552]: New session 9 of user core. Sep 9 05:36:55.432026 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 05:36:55.507445 containerd[1574]: time="2025-09-09T05:36:55.507375857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0\" id:\"eadd505412034608c85d99c923f9a6a6e03dd51496aa116db4e06c239ed7f481\" pid:5106 exit_status:1 exited_at:{seconds:1757396215 nanos:471237652}" Sep 9 05:36:56.398411 sshd[5123]: Connection closed by 139.178.89.65 port 48944 Sep 9 05:36:56.399140 sshd-session[5113]: pam_unix(sshd:session): session closed for user core Sep 9 05:36:56.418931 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. Sep 9 05:36:56.421046 systemd[1]: sshd@8-143.198.157.2:22-139.178.89.65:48944.service: Deactivated successfully. Sep 9 05:36:56.427880 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 05:36:56.432610 systemd-logind[1552]: Removed session 9. 
Sep 9 05:36:56.444994 containerd[1574]: time="2025-09-09T05:36:56.444263721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:56.446084 containerd[1574]: time="2025-09-09T05:36:56.446044412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 9 05:36:56.448695 containerd[1574]: time="2025-09-09T05:36:56.448645387Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:56.458949 containerd[1574]: time="2025-09-09T05:36:56.458883515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:56.460487 containerd[1574]: time="2025-09-09T05:36:56.460413175Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.923706516s"
Sep 9 05:36:56.460487 containerd[1574]: time="2025-09-09T05:36:56.460483016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 9 05:36:56.465642 containerd[1574]: time="2025-09-09T05:36:56.465578280Z" level=info msg="CreateContainer within sandbox \"be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 05:36:56.501568 containerd[1574]: time="2025-09-09T05:36:56.497737507Z" level=info msg="Container f868e55c597c9c25b50e161c9e5663778942a790e68fba0861496f74c6af5538: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:36:56.522283 containerd[1574]: time="2025-09-09T05:36:56.522205898Z" level=info msg="CreateContainer within sandbox \"be57abc7e721c6919ee4370fc20d996273df81d5c7b439e33e0744e6970de0b1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f868e55c597c9c25b50e161c9e5663778942a790e68fba0861496f74c6af5538\""
Sep 9 05:36:56.523763 containerd[1574]: time="2025-09-09T05:36:56.523729625Z" level=info msg="StartContainer for \"f868e55c597c9c25b50e161c9e5663778942a790e68fba0861496f74c6af5538\""
Sep 9 05:36:56.525570 containerd[1574]: time="2025-09-09T05:36:56.525437672Z" level=info msg="connecting to shim f868e55c597c9c25b50e161c9e5663778942a790e68fba0861496f74c6af5538" address="unix:///run/containerd/s/6f7bfdb9e8c0fdc108ff1dcd59fcf128947bc93866a5b5d31d535b81585223d1" protocol=ttrpc version=3
Sep 9 05:36:56.587804 systemd[1]: Started cri-containerd-f868e55c597c9c25b50e161c9e5663778942a790e68fba0861496f74c6af5538.scope - libcontainer container f868e55c597c9c25b50e161c9e5663778942a790e68fba0861496f74c6af5538.
Sep 9 05:36:56.652986 containerd[1574]: time="2025-09-09T05:36:56.652826108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0\" id:\"9b979241de96d608a9b12e0ff4ee17b98314cf34c47806a73420edb902b7f190\" pid:5144 exit_status:1 exited_at:{seconds:1757396216 nanos:652333592}"
Sep 9 05:36:56.771300 containerd[1574]: time="2025-09-09T05:36:56.771191768Z" level=info msg="StartContainer for \"f868e55c597c9c25b50e161c9e5663778942a790e68fba0861496f74c6af5538\" returns successfully"
Sep 9 05:36:57.287546 kubelet[2710]: I0909 05:36:57.287435 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c78xj" podStartSLOduration=25.101345642 podStartE2EDuration="45.287404107s" podCreationTimestamp="2025-09-09 05:36:12 +0000 UTC" firstStartedPulling="2025-09-09 05:36:36.276408531 +0000 UTC m=+46.131242523" lastFinishedPulling="2025-09-09 05:36:56.462466996 +0000 UTC m=+66.317300988" observedRunningTime="2025-09-09 05:36:56.982710734 +0000 UTC m=+66.837544759" watchObservedRunningTime="2025-09-09 05:36:57.287404107 +0000 UTC m=+67.142238127"
Sep 9 05:36:57.910728 kubelet[2710]: I0909 05:36:57.905438 2710 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 05:36:57.913679 kubelet[2710]: I0909 05:36:57.913545 2710 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 05:37:01.417113 systemd[1]: Started sshd@9-143.198.157.2:22-139.178.89.65:57448.service - OpenSSH per-connection server daemon (139.178.89.65:57448).
Sep 9 05:37:01.717602 sshd[5199]: Accepted publickey for core from 139.178.89.65 port 57448 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:01.721683 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:01.751217 systemd-logind[1552]: New session 10 of user core.
Sep 9 05:37:01.755085 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 05:37:02.536009 sshd[5203]: Connection closed by 139.178.89.65 port 57448
Sep 9 05:37:02.536782 sshd-session[5199]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:02.554151 systemd[1]: sshd@9-143.198.157.2:22-139.178.89.65:57448.service: Deactivated successfully.
Sep 9 05:37:02.561780 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 05:37:02.566838 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit.
Sep 9 05:37:02.574453 systemd[1]: Started sshd@10-143.198.157.2:22-139.178.89.65:57456.service - OpenSSH per-connection server daemon (139.178.89.65:57456).
Sep 9 05:37:02.582179 systemd-logind[1552]: Removed session 10.
Sep 9 05:37:02.685586 sshd[5216]: Accepted publickey for core from 139.178.89.65 port 57456 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:02.687535 sshd-session[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:02.698241 systemd-logind[1552]: New session 11 of user core.
Sep 9 05:37:02.704447 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 05:37:02.997493 sshd[5219]: Connection closed by 139.178.89.65 port 57456
Sep 9 05:37:02.999046 sshd-session[5216]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:03.014339 systemd[1]: sshd@10-143.198.157.2:22-139.178.89.65:57456.service: Deactivated successfully.
Sep 9 05:37:03.019800 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 05:37:03.026593 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit.
Sep 9 05:37:03.032994 systemd[1]: Started sshd@11-143.198.157.2:22-139.178.89.65:57466.service - OpenSSH per-connection server daemon (139.178.89.65:57466).
Sep 9 05:37:03.038085 systemd-logind[1552]: Removed session 11.
Sep 9 05:37:03.123672 sshd[5229]: Accepted publickey for core from 139.178.89.65 port 57466 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:03.126938 sshd-session[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:03.146652 systemd-logind[1552]: New session 12 of user core.
Sep 9 05:37:03.159834 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 05:37:03.557341 sshd[5232]: Connection closed by 139.178.89.65 port 57466
Sep 9 05:37:03.561714 sshd-session[5229]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:03.571066 systemd[1]: sshd@11-143.198.157.2:22-139.178.89.65:57466.service: Deactivated successfully.
Sep 9 05:37:03.575333 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 05:37:03.577276 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit.
Sep 9 05:37:03.583045 systemd-logind[1552]: Removed session 12.
Sep 9 05:37:03.993908 containerd[1574]: time="2025-09-09T05:37:03.993837891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59508c86b637ab39405e6f43aa931e814c7549a724ceda478573a08c509af2d3\" id:\"65ecf6e5f525ff66278f6a392125828ee06759d4417eef4831000dbf48028f59\" pid:5251 exited_at:{seconds:1757396223 nanos:993061610}"
Sep 9 05:37:08.420328 containerd[1574]: time="2025-09-09T05:37:08.420274403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0\" id:\"3fb00afb4fbadc899280e5b1355e354509396d6e9e7ac1bbace48f9a294516bb\" pid:5283 exited_at:{seconds:1757396228 nanos:419624324}"
Sep 9 05:37:08.574908 systemd[1]: Started sshd@12-143.198.157.2:22-139.178.89.65:57470.service - OpenSSH per-connection server daemon (139.178.89.65:57470).
Sep 9 05:37:08.717095 sshd[5295]: Accepted publickey for core from 139.178.89.65 port 57470 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:08.722625 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:08.730715 systemd-logind[1552]: New session 13 of user core.
Sep 9 05:37:08.735110 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 05:37:09.435548 sshd[5298]: Connection closed by 139.178.89.65 port 57470
Sep 9 05:37:09.436257 sshd-session[5295]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:09.451461 systemd[1]: sshd@12-143.198.157.2:22-139.178.89.65:57470.service: Deactivated successfully.
Sep 9 05:37:09.452232 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit.
Sep 9 05:37:09.456108 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 05:37:09.464744 systemd-logind[1552]: Removed session 13.
Sep 9 05:37:14.458381 systemd[1]: Started sshd@13-143.198.157.2:22-139.178.89.65:56224.service - OpenSSH per-connection server daemon (139.178.89.65:56224).
Sep 9 05:37:14.604033 sshd[5317]: Accepted publickey for core from 139.178.89.65 port 56224 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:14.606000 sshd-session[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:14.615578 systemd-logind[1552]: New session 14 of user core.
Sep 9 05:37:14.625955 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 05:37:14.973293 sshd[5320]: Connection closed by 139.178.89.65 port 56224
Sep 9 05:37:14.975764 sshd-session[5317]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:14.984314 systemd[1]: sshd@13-143.198.157.2:22-139.178.89.65:56224.service: Deactivated successfully.
Sep 9 05:37:14.992064 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 05:37:14.996812 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit.
Sep 9 05:37:15.001355 systemd-logind[1552]: Removed session 14.
Sep 9 05:37:15.382325 kubelet[2710]: E0909 05:37:15.381073 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:37:16.070656 kubelet[2710]: I0909 05:37:16.070598 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 05:37:18.307253 kubelet[2710]: E0909 05:37:18.306703 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:37:19.996135 systemd[1]: Started sshd@14-143.198.157.2:22-139.178.89.65:46524.service - OpenSSH per-connection server daemon (139.178.89.65:46524).
Sep 9 05:37:20.139138 sshd[5336]: Accepted publickey for core from 139.178.89.65 port 46524 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:20.140137 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:20.149635 systemd-logind[1552]: New session 15 of user core.
Sep 9 05:37:20.157766 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 05:37:20.383290 sshd[5339]: Connection closed by 139.178.89.65 port 46524
Sep 9 05:37:20.382540 sshd-session[5336]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:20.389930 systemd[1]: sshd@14-143.198.157.2:22-139.178.89.65:46524.service: Deactivated successfully.
Sep 9 05:37:20.395391 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 05:37:20.397751 systemd-logind[1552]: Session 15 logged out. Waiting for processes to exit.
Sep 9 05:37:20.401356 systemd-logind[1552]: Removed session 15.
Sep 9 05:37:22.508604 containerd[1574]: time="2025-09-09T05:37:22.508430050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc29a6231d361c031a6ffc96615b8cfafe11e8cc35f9a00afc8fee494714bcc2\" id:\"dad79879d2dc4639bbab36d3c8c2d84f71e68792b619253d41104ee49a50ccc1\" pid:5374 exited_at:{seconds:1757396242 nanos:450099854}"
Sep 9 05:37:22.681212 containerd[1574]: time="2025-09-09T05:37:22.680883026Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41dc95f7e7ac66fa478610e3798df2c389dff04c6d4a8aefeebb5cb4af2671b0\" id:\"92e0e41b9c5f5542a73042f1168a071808305842ff5232cd4e540d4d6ddaec10\" pid:5376 exited_at:{seconds:1757396242 nanos:680467492}"
Sep 9 05:37:25.403985 systemd[1]: Started sshd@15-143.198.157.2:22-139.178.89.65:46538.service - OpenSSH per-connection server daemon (139.178.89.65:46538).
Sep 9 05:37:25.554382 sshd[5397]: Accepted publickey for core from 139.178.89.65 port 46538 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:25.558494 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:25.570762 systemd-logind[1552]: New session 16 of user core.
Sep 9 05:37:25.577587 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 05:37:26.077251 sshd[5400]: Connection closed by 139.178.89.65 port 46538
Sep 9 05:37:26.077777 sshd-session[5397]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:26.088689 systemd[1]: sshd@15-143.198.157.2:22-139.178.89.65:46538.service: Deactivated successfully.
Sep 9 05:37:26.093012 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 05:37:26.095586 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit.
Sep 9 05:37:26.099365 systemd[1]: Started sshd@16-143.198.157.2:22-139.178.89.65:46548.service - OpenSSH per-connection server daemon (139.178.89.65:46548).
Sep 9 05:37:26.106257 systemd-logind[1552]: Removed session 16.
Sep 9 05:37:26.235555 sshd[5412]: Accepted publickey for core from 139.178.89.65 port 46548 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:26.239155 sshd-session[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:26.249096 systemd-logind[1552]: New session 17 of user core.
Sep 9 05:37:26.256806 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 05:37:26.705156 sshd[5415]: Connection closed by 139.178.89.65 port 46548
Sep 9 05:37:26.714137 sshd-session[5412]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:26.747163 systemd[1]: sshd@16-143.198.157.2:22-139.178.89.65:46548.service: Deactivated successfully.
Sep 9 05:37:26.756873 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 05:37:26.762907 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit.
Sep 9 05:37:26.779345 systemd-logind[1552]: Removed session 17.
Sep 9 05:37:26.781501 systemd[1]: Started sshd@17-143.198.157.2:22-139.178.89.65:46562.service - OpenSSH per-connection server daemon (139.178.89.65:46562).
Sep 9 05:37:27.010591 sshd[5427]: Accepted publickey for core from 139.178.89.65 port 46562 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:27.011355 sshd-session[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:27.025277 systemd-logind[1552]: New session 18 of user core.
Sep 9 05:37:27.028869 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 05:37:27.315486 kubelet[2710]: E0909 05:37:27.315320 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:37:30.560935 kubelet[2710]: E0909 05:37:30.560875 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:37:30.571190 kubelet[2710]: E0909 05:37:30.561565 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:37:30.806203 sshd[5430]: Connection closed by 139.178.89.65 port 46562
Sep 9 05:37:30.811391 sshd-session[5427]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:30.858139 systemd[1]: sshd@17-143.198.157.2:22-139.178.89.65:46562.service: Deactivated successfully.
Sep 9 05:37:30.866373 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 05:37:30.868132 systemd[1]: session-18.scope: Consumed 826ms CPU time, 81.4M memory peak.
Sep 9 05:37:30.890059 systemd[1]: Started sshd@18-143.198.157.2:22-139.178.89.65:51008.service - OpenSSH per-connection server daemon (139.178.89.65:51008).
Sep 9 05:37:30.892715 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit.
Sep 9 05:37:30.908198 systemd-logind[1552]: Removed session 18.
Sep 9 05:37:31.022782 sshd[5449]: Accepted publickey for core from 139.178.89.65 port 51008 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:31.027423 sshd-session[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:31.055228 systemd-logind[1552]: New session 19 of user core.
Sep 9 05:37:31.057778 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 05:37:32.765310 sshd[5453]: Connection closed by 139.178.89.65 port 51008
Sep 9 05:37:32.767155 sshd-session[5449]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:32.790345 systemd[1]: sshd@18-143.198.157.2:22-139.178.89.65:51008.service: Deactivated successfully.
Sep 9 05:37:32.794074 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 05:37:32.794635 systemd[1]: session-19.scope: Consumed 874ms CPU time, 69.5M memory peak.
Sep 9 05:37:32.802066 systemd-logind[1552]: Session 19 logged out. Waiting for processes to exit.
Sep 9 05:37:32.804315 systemd[1]: Started sshd@19-143.198.157.2:22-139.178.89.65:51018.service - OpenSSH per-connection server daemon (139.178.89.65:51018).
Sep 9 05:37:32.818800 systemd-logind[1552]: Removed session 19.
Sep 9 05:37:33.005358 sshd[5465]: Accepted publickey for core from 139.178.89.65 port 51018 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:33.007638 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:33.015374 systemd-logind[1552]: New session 20 of user core.
Sep 9 05:37:33.022051 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 05:37:33.243555 sshd[5468]: Connection closed by 139.178.89.65 port 51018
Sep 9 05:37:33.243395 sshd-session[5465]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:33.259358 systemd[1]: sshd@19-143.198.157.2:22-139.178.89.65:51018.service: Deactivated successfully.
Sep 9 05:37:33.261717 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit.
Sep 9 05:37:33.264431 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 05:37:33.279849 systemd-logind[1552]: Removed session 20.
Sep 9 05:37:33.805799 containerd[1574]: time="2025-09-09T05:37:33.805726361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59508c86b637ab39405e6f43aa931e814c7549a724ceda478573a08c509af2d3\" id:\"d3f590e840d13fd71c0e0411adcf265476c4aa324a3b85857a3e5dd121bcba7b\" pid:5491 exited_at:{seconds:1757396253 nanos:805091955}"
Sep 9 05:37:38.263059 systemd[1]: Started sshd@20-143.198.157.2:22-139.178.89.65:51024.service - OpenSSH per-connection server daemon (139.178.89.65:51024).
Sep 9 05:37:38.367073 sshd[5507]: Accepted publickey for core from 139.178.89.65 port 51024 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:38.370808 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:38.378509 systemd-logind[1552]: New session 21 of user core.
Sep 9 05:37:38.384737 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 05:37:38.605571 sshd[5510]: Connection closed by 139.178.89.65 port 51024
Sep 9 05:37:38.606513 sshd-session[5507]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:38.611757 systemd-logind[1552]: Session 21 logged out. Waiting for processes to exit.
Sep 9 05:37:38.615904 systemd[1]: sshd@20-143.198.157.2:22-139.178.89.65:51024.service: Deactivated successfully.
Sep 9 05:37:38.619842 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 05:37:38.626343 systemd-logind[1552]: Removed session 21.
Sep 9 05:37:42.133309 containerd[1574]: time="2025-09-09T05:37:42.133264195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc29a6231d361c031a6ffc96615b8cfafe11e8cc35f9a00afc8fee494714bcc2\" id:\"502c97ccaef85f25c1f1e56231781768be3802b7d15047cec4d7c164b63bf482\" pid:5535 exited_at:{seconds:1757396262 nanos:132505752}"
Sep 9 05:37:43.623428 systemd[1]: Started sshd@21-143.198.157.2:22-139.178.89.65:56486.service - OpenSSH per-connection server daemon (139.178.89.65:56486).
Sep 9 05:37:43.766410 sshd[5545]: Accepted publickey for core from 139.178.89.65 port 56486 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:43.768406 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:43.777310 systemd-logind[1552]: New session 22 of user core.
Sep 9 05:37:43.783415 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 05:37:44.001890 sshd[5548]: Connection closed by 139.178.89.65 port 56486
Sep 9 05:37:44.003458 sshd-session[5545]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:44.013042 systemd[1]: sshd@21-143.198.157.2:22-139.178.89.65:56486.service: Deactivated successfully.
Sep 9 05:37:44.019297 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 05:37:44.026409 systemd-logind[1552]: Session 22 logged out. Waiting for processes to exit.
Sep 9 05:37:44.031448 systemd-logind[1552]: Removed session 22.
Sep 9 05:37:49.020655 systemd[1]: Started sshd@22-143.198.157.2:22-139.178.89.65:56492.service - OpenSSH per-connection server daemon (139.178.89.65:56492).
Sep 9 05:37:49.164045 sshd[5560]: Accepted publickey for core from 139.178.89.65 port 56492 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:49.167497 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:49.176403 systemd-logind[1552]: New session 23 of user core.
Sep 9 05:37:49.184739 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 05:37:49.407990 kubelet[2710]: E0909 05:37:49.407759 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:37:49.472736 sshd[5563]: Connection closed by 139.178.89.65 port 56492
Sep 9 05:37:49.474544 sshd-session[5560]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:49.485619 systemd-logind[1552]: Session 23 logged out. Waiting for processes to exit.
Sep 9 05:37:49.485907 systemd[1]: sshd@22-143.198.157.2:22-139.178.89.65:56492.service: Deactivated successfully.
Sep 9 05:37:49.491306 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 05:37:49.497215 systemd-logind[1552]: Removed session 23.