Jun 21 05:27:50.878514 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025 Jun 21 05:27:50.878558 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 05:27:50.878573 kernel: BIOS-provided physical RAM map: Jun 21 05:27:50.878606 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 21 05:27:50.878617 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 21 05:27:50.878628 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 21 05:27:50.878641 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jun 21 05:27:50.878659 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jun 21 05:27:50.878680 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 21 05:27:50.878691 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 21 05:27:50.878702 kernel: NX (Execute Disable) protection: active Jun 21 05:27:50.878712 kernel: APIC: Static calls initialized Jun 21 05:27:50.878723 kernel: SMBIOS 2.8 present. Jun 21 05:27:50.878734 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jun 21 05:27:50.878752 kernel: DMI: Memory slots populated: 1/1 Jun 21 05:27:50.878765 kernel: Hypervisor detected: KVM Jun 21 05:27:50.878780 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 21 05:27:50.878792 kernel: kvm-clock: using sched offset of 4707042626 cycles Jun 21 05:27:50.878806 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 21 05:27:50.878819 kernel: tsc: Detected 2494.136 MHz processor Jun 21 05:27:50.878831 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 21 05:27:50.878843 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 21 05:27:50.878856 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jun 21 05:27:50.878873 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 21 05:27:50.878885 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 21 05:27:50.878898 kernel: ACPI: Early table checksum verification disabled Jun 21 05:27:50.878910 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jun 21 05:27:50.878923 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:50.878936 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:50.878949 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:50.878961 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jun 21 05:27:50.878972 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:50.878985 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:50.878996 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:27:50.879006 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 
BXPC 00000001) Jun 21 05:27:50.879017 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Jun 21 05:27:50.879027 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jun 21 05:27:50.879038 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jun 21 05:27:50.879048 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jun 21 05:27:50.879060 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jun 21 05:27:50.879079 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jun 21 05:27:50.879091 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jun 21 05:27:50.879104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 21 05:27:50.879133 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 21 05:27:50.879147 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Jun 21 05:27:50.879164 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Jun 21 05:27:50.879177 kernel: Zone ranges: Jun 21 05:27:50.879190 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 21 05:27:50.879203 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jun 21 05:27:50.879216 kernel: Normal empty Jun 21 05:27:50.879229 kernel: Device empty Jun 21 05:27:50.879242 kernel: Movable zone start for each node Jun 21 05:27:50.879256 kernel: Early memory node ranges Jun 21 05:27:50.879269 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 21 05:27:50.879283 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jun 21 05:27:50.879355 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jun 21 05:27:50.879369 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 21 05:27:50.879393 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 21 05:27:50.879406 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jun 21 05:27:50.879419 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 21 05:27:50.879432 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 21 05:27:50.879450 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 21 05:27:50.879463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 21 05:27:50.879480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 21 05:27:50.879505 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 21 05:27:50.879521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 21 05:27:50.879535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 21 05:27:50.879549 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 21 05:27:50.879562 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 21 05:27:50.879575 kernel: TSC deadline timer available Jun 21 05:27:50.879588 kernel: CPU topo: Max. logical packages: 1 Jun 21 05:27:50.879600 kernel: CPU topo: Max. logical dies: 1 Jun 21 05:27:50.879614 kernel: CPU topo: Max. dies per package: 1 Jun 21 05:27:50.879627 kernel: CPU topo: Max. threads per core: 1 Jun 21 05:27:50.879644 kernel: CPU topo: Num. cores per package: 2 Jun 21 05:27:50.879657 kernel: CPU topo: Num. 
threads per package: 2 Jun 21 05:27:50.879670 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 21 05:27:50.879689 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 21 05:27:50.879702 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jun 21 05:27:50.879715 kernel: Booting paravirtualized kernel on KVM Jun 21 05:27:50.879729 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 21 05:27:50.879743 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 21 05:27:50.879756 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 21 05:27:50.879773 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 21 05:27:50.879786 kernel: pcpu-alloc: [0] 0 1 Jun 21 05:27:50.879799 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 21 05:27:50.879816 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 05:27:50.879835 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 21 05:27:50.879848 kernel: random: crng init done Jun 21 05:27:50.879861 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 21 05:27:50.879875 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 21 05:27:50.879892 kernel: Fallback order for Node 0: 0 Jun 21 05:27:50.879905 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Jun 21 05:27:50.879918 kernel: Policy zone: DMA32 Jun 21 05:27:50.879931 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 21 05:27:50.879945 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 21 05:27:50.879958 kernel: Kernel/User page tables isolation: enabled Jun 21 05:27:50.879971 kernel: ftrace: allocating 40093 entries in 157 pages Jun 21 05:27:50.879984 kernel: ftrace: allocated 157 pages with 5 groups Jun 21 05:27:50.879997 kernel: Dynamic Preempt: voluntary Jun 21 05:27:50.880014 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 21 05:27:50.880028 kernel: rcu: RCU event tracing is enabled. Jun 21 05:27:50.880042 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 21 05:27:50.880055 kernel: Trampoline variant of Tasks RCU enabled. Jun 21 05:27:50.880069 kernel: Rude variant of Tasks RCU enabled. Jun 21 05:27:50.880082 kernel: Tracing variant of Tasks RCU enabled. Jun 21 05:27:50.880096 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 21 05:27:50.880114 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 21 05:27:50.882200 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 05:27:50.882234 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 05:27:50.882248 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
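As a side note on the command line logged above (it appears twice, once from the bootloader and once as re-assembled by the kernel, with dracut's rootflags=rw mount.usrflags=ro prefix duplicated in the second copy): a minimal, illustrative Python sketch of splitting such a string (e.g. the contents of /proc/cmdline) into flags and key=value pairs. The kernel's own parser additionally handles quoting, module-prefixed options, and cumulative parameters such as console=.

```python
# Illustrative sketch only: split a kernel command line into parameters.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        # A plain dict keeps only the last value for duplicated keys
        # (e.g. the two console= entries above); the real parser treats
        # some options cumulatively instead.
        params[key] = value if sep else True
    return params

if __name__ == "__main__":
    example = ("BOOT_IMAGE=/flatcar/vmlinuz-a rootflags=rw mount.usrflags=ro "
               "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
               "flatcar.first_boot=detected flatcar.oem.id=digitalocean")
    print(parse_cmdline(example))
```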
Jun 21 05:27:50.882262 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 21 05:27:50.882275 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 21 05:27:50.882288 kernel: Console: colour VGA+ 80x25 Jun 21 05:27:50.882300 kernel: printk: legacy console [tty0] enabled Jun 21 05:27:50.882312 kernel: printk: legacy console [ttyS0] enabled Jun 21 05:27:50.882324 kernel: ACPI: Core revision 20240827 Jun 21 05:27:50.882337 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 21 05:27:50.882365 kernel: APIC: Switch to symmetric I/O mode setup Jun 21 05:27:50.882378 kernel: x2apic enabled Jun 21 05:27:50.882392 kernel: APIC: Switched APIC routing to: physical x2apic Jun 21 05:27:50.882409 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 21 05:27:50.882451 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns Jun 21 05:27:50.882466 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494136) Jun 21 05:27:50.882485 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 21 05:27:50.882499 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 21 05:27:50.882514 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 21 05:27:50.882531 kernel: Spectre V2 : Mitigation: Retpolines Jun 21 05:27:50.882546 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 21 05:27:50.882560 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jun 21 05:27:50.882575 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 21 05:27:50.882589 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 21 05:27:50.882604 kernel: MDS: Mitigation: Clear CPU buffers Jun 21 05:27:50.882618 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 21 05:27:50.882635 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 21 05:27:50.882649 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 21 05:27:50.882663 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 21 05:27:50.882678 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 21 05:27:50.882692 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 21 05:27:50.882707 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jun 21 05:27:50.882720 kernel: Freeing SMP alternatives memory: 32K Jun 21 05:27:50.882734 kernel: pid_max: default: 32768 minimum: 301 Jun 21 05:27:50.882749 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 21 05:27:50.882768 kernel: landlock: Up and running. Jun 21 05:27:50.882782 kernel: SELinux: Initializing. Jun 21 05:27:50.882796 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 21 05:27:50.882810 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 21 05:27:50.882824 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jun 21 05:27:50.882838 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jun 21 05:27:50.882852 kernel: signal: max sigframe size: 1776 Jun 21 05:27:50.882866 kernel: rcu: Hierarchical SRCU implementation. Jun 21 05:27:50.882882 kernel: rcu: Max phase no-delay instances is 400. 
Jun 21 05:27:50.882901 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 21 05:27:50.882916 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 21 05:27:50.882930 kernel: smp: Bringing up secondary CPUs ... Jun 21 05:27:50.882944 kernel: smpboot: x86: Booting SMP configuration: Jun 21 05:27:50.882963 kernel: .... node #0, CPUs: #1 Jun 21 05:27:50.882977 kernel: smp: Brought up 1 node, 2 CPUs Jun 21 05:27:50.882991 kernel: smpboot: Total of 2 processors activated (9976.54 BogoMIPS) Jun 21 05:27:50.883006 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 125140K reserved, 0K cma-reserved) Jun 21 05:27:50.883021 kernel: devtmpfs: initialized Jun 21 05:27:50.883039 kernel: x86/mm: Memory block size: 128MB Jun 21 05:27:50.883060 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 21 05:27:50.883073 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 21 05:27:50.883087 kernel: pinctrl core: initialized pinctrl subsystem Jun 21 05:27:50.883102 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 21 05:27:50.883116 kernel: audit: initializing netlink subsys (disabled) Jun 21 05:27:50.883153 kernel: audit: type=2000 audit(1750483667.634:1): state=initialized audit_enabled=0 res=1 Jun 21 05:27:50.883168 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 21 05:27:50.883183 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 21 05:27:50.883201 kernel: cpuidle: using governor menu Jun 21 05:27:50.883215 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 21 05:27:50.883229 kernel: dca service started, version 1.12.1 Jun 21 05:27:50.883243 kernel: PCI: Using configuration type 1 for base access Jun 21 05:27:50.883258 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
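A side note on the audit line above: it carries the raw epoch 1750483667.634, while the timestamps at the left margin are already wall-clock UTC; the rtc_cmos entry later in this log pins the mapping (1750483670 ↔ 2025-06-21T05:27:50 UTC). A small, purely illustrative conversion:

```python
from datetime import datetime, timezone

# Epoch copied from the audit record above: audit(1750483667.634:1)
stamp = datetime.fromtimestamp(1750483667.634, tz=timezone.utc)
print(stamp.isoformat())  # 2025-06-21T05:27:47.634000+00:00,
                          # about 3 s before the adjacent console timestamps
```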
Jun 21 05:27:50.883277 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 21 05:27:50.883292 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 21 05:27:50.883306 kernel: ACPI: Added _OSI(Module Device) Jun 21 05:27:50.883321 kernel: ACPI: Added _OSI(Processor Device) Jun 21 05:27:50.883339 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 21 05:27:50.883354 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 21 05:27:50.883368 kernel: ACPI: Interpreter enabled Jun 21 05:27:50.883382 kernel: ACPI: PM: (supports S0 S5) Jun 21 05:27:50.883396 kernel: ACPI: Using IOAPIC for interrupt routing Jun 21 05:27:50.883411 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 21 05:27:50.883429 kernel: PCI: Using E820 reservations for host bridge windows Jun 21 05:27:50.883444 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 21 05:27:50.883458 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 21 05:27:50.883758 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 21 05:27:50.883914 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 21 05:27:50.884050 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 21 05:27:50.884074 kernel: acpiphp: Slot [3] registered Jun 21 05:27:50.884089 kernel: acpiphp: Slot [4] registered Jun 21 05:27:50.884103 kernel: acpiphp: Slot [5] registered Jun 21 05:27:50.884117 kernel: acpiphp: Slot [6] registered Jun 21 05:27:50.886206 kernel: acpiphp: Slot [7] registered Jun 21 05:27:50.886221 kernel: acpiphp: Slot [8] registered Jun 21 05:27:50.886234 kernel: acpiphp: Slot [9] registered Jun 21 05:27:50.886246 kernel: acpiphp: Slot [10] registered Jun 21 05:27:50.886259 kernel: acpiphp: Slot [11] registered Jun 21 05:27:50.886272 kernel: acpiphp: Slot [12] registered Jun 21 05:27:50.886281 kernel: acpiphp: Slot [13] registered Jun 21 05:27:50.886289 kernel: acpiphp: Slot [14] registered Jun 21 05:27:50.886298 kernel: acpiphp: Slot [15] registered Jun 21 05:27:50.886307 kernel: acpiphp: Slot [16] registered Jun 21 05:27:50.886319 kernel: acpiphp: Slot [17] registered Jun 21 05:27:50.886328 kernel: acpiphp: Slot [18] registered Jun 21 05:27:50.886337 kernel: acpiphp: Slot [19] registered Jun 21 05:27:50.886346 kernel: acpiphp: Slot [20] registered Jun 21 05:27:50.886356 kernel: acpiphp: Slot [21] registered Jun 21 05:27:50.886365 kernel: acpiphp: Slot [22] registered Jun 21 05:27:50.886373 kernel: acpiphp: Slot [23] registered Jun 21 05:27:50.886382 kernel: acpiphp: Slot [24] registered Jun 21 05:27:50.886391 kernel: acpiphp: Slot [25] registered Jun 21 05:27:50.886403 kernel: acpiphp: Slot [26] registered Jun 21 05:27:50.886412 kernel: acpiphp: Slot [27] registered Jun 21 05:27:50.886421 kernel: acpiphp: Slot [28] registered Jun 21 05:27:50.886429 kernel: acpiphp: Slot [29] registered Jun 21 05:27:50.886439 kernel: acpiphp: Slot [30] registered Jun 21 05:27:50.886447 kernel: acpiphp: Slot [31] registered Jun 21 05:27:50.886456 kernel: PCI host bridge to bus 0000:00 Jun 21 05:27:50.886629 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 21 05:27:50.886730 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 21 05:27:50.886870 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 21 05:27:50.886959 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Jun 21 05:27:50.887088 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 21 05:27:50.887315 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 21 05:27:50.887521 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jun 21 05:27:50.887678 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jun 21 05:27:50.887849 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Jun 21 05:27:50.888034 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Jun 21 05:27:50.888195 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jun 21 05:27:50.888333 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jun 21 05:27:50.888473 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jun 21 05:27:50.888611 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Jun 21 05:27:50.888767 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jun 21 05:27:50.888888 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Jun 21 05:27:50.889052 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jun 21 05:27:50.891295 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 21 05:27:50.891466 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 21 05:27:50.891636 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jun 21 05:27:50.891782 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Jun 21 05:27:50.891931 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Jun 21 05:27:50.892100 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Jun 21 05:27:50.893383 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Jun 21 05:27:50.893538 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 21 05:27:50.893689 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jun 21 05:27:50.893829 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Jun 21 05:27:50.893968 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Jun 21 05:27:50.895189 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Jun 21 05:27:50.895386 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jun 21 05:27:50.895581 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Jun 21 05:27:50.895729 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Jun 21 05:27:50.895868 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Jun 21 05:27:50.896038 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jun 21 05:27:50.896193 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Jun 21 05:27:50.897278 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Jun 21 05:27:50.897437 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Jun 21 05:27:50.897577 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jun 21 05:27:50.897718 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Jun 21 05:27:50.897881 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Jun 21 05:27:50.898031 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Jun 21 05:27:50.899304 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jun 21 05:27:50.899466 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Jun 21 05:27:50.899609 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Jun 21 05:27:50.899748 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Jun 21 05:27:50.899913 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Jun 21 05:27:50.900065 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Jun 21 05:27:50.904042 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Jun 21 05:27:50.904086 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 21 05:27:50.904102 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 21 05:27:50.904117 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 21 05:27:50.904155 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 21 05:27:50.904176 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 21 05:27:50.904190 kernel: iommu: Default domain type: Translated Jun 21 05:27:50.904205 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 21 05:27:50.904218 kernel: PCI: Using ACPI for IRQ routing Jun 21 05:27:50.904241 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 21 05:27:50.904256 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 21 05:27:50.904270 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jun 21 05:27:50.904451 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 21 05:27:50.904650 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 21 05:27:50.904791 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 21 05:27:50.904808 kernel: vgaarb: loaded Jun 21 05:27:50.904823 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 21 05:27:50.904838 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 21 05:27:50.904860 kernel: clocksource: Switched to clocksource kvm-clock Jun 21 05:27:50.904874 kernel: VFS: Disk quotas dquot_6.6.0 Jun 21 05:27:50.904887 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 21 05:27:50.904900 kernel: pnp: PnP ACPI init Jun 21 05:27:50.904913 kernel: pnp: PnP ACPI: found 4 devices Jun 21 05:27:50.904928 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 21 05:27:50.904943 kernel: NET: Registered PF_INET protocol family Jun 21 05:27:50.904958 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 21 05:27:50.904969 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 21 05:27:50.904986 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 21 05:27:50.905007 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 21 05:27:50.905024 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 21 05:27:50.905042 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 21 05:27:50.905056 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 21 05:27:50.905070 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 21 05:27:50.905085 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 21 05:27:50.905099 kernel: NET: Registered PF_XDP protocol family Jun 21 05:27:50.905265 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 21 05:27:50.905395 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Jun 21 05:27:50.905511 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 21 05:27:50.905630 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 21 05:27:50.905759 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 21 05:27:50.905908 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 21 05:27:50.906056 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 21 05:27:50.906076 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 21 05:27:50.906259 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 28116 usecs Jun 21 05:27:50.906281 kernel: PCI: CLS 0 bytes, default 64 Jun 21 05:27:50.906295 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 21 05:27:50.906310 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns Jun 21 05:27:50.906324 kernel: Initialise system trusted keyrings Jun 21 05:27:50.906338 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 21 05:27:50.906352 kernel: Key type asymmetric registered Jun 21 05:27:50.906366 kernel: Asymmetric key parser 'x509' registered Jun 21 05:27:50.906381 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 21 05:27:50.906408 kernel: io scheduler mq-deadline registered Jun 21 05:27:50.906423 kernel: io scheduler kyber registered Jun 21 05:27:50.906437 kernel: io scheduler bfq registered Jun 21 05:27:50.906450 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 21 05:27:50.906465 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 21 05:27:50.906479 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 21 05:27:50.906493 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 21 05:27:50.906507 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 21 05:27:50.906522 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 21 05:27:50.906540 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 21 05:27:50.906554 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 21 05:27:50.906569 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 21 05:27:50.906760 kernel: rtc_cmos 00:03: RTC can wake from S4 Jun 21 05:27:50.906784 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 21 05:27:50.906917 kernel: rtc_cmos 00:03: registered as rtc0 Jun 21 05:27:50.907104 kernel: rtc_cmos 00:03: setting system clock to 2025-06-21T05:27:50 UTC (1750483670) Jun 21 05:27:50.910517 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jun 21 05:27:50.910562 kernel: intel_pstate: CPU model not supported Jun 21 05:27:50.910578 kernel: NET: Registered PF_INET6 protocol family Jun 21 05:27:50.910593 kernel: Segment Routing with IPv6 Jun 21 05:27:50.910606 kernel: In-situ OAM (IOAM) with IPv6 Jun 21 05:27:50.910621 kernel: NET: Registered PF_PACKET protocol family Jun 21 05:27:50.910636 kernel: Key type dns_resolver registered Jun 21 05:27:50.910650 kernel: IPI shorthand broadcast: enabled Jun 21 05:27:50.910665 kernel: sched_clock: Marking stable (3846003754, 91288896)->(3958339561, -21046911) Jun 21 05:27:50.910678 kernel: registered taskstats version 1 Jun 21 05:27:50.910696 kernel: Loading compiled-in X.509 certificates Jun 21 05:27:50.910710 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7' Jun 21 
05:27:50.910725 kernel: Demotion targets for Node 0: null Jun 21 05:27:50.910740 kernel: Key type .fscrypt registered Jun 21 05:27:50.910754 kernel: Key type fscrypt-provisioning registered Jun 21 05:27:50.910771 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 21 05:27:50.910810 kernel: ima: Allocated hash algorithm: sha1 Jun 21 05:27:50.910828 kernel: ima: No architecture policies found Jun 21 05:27:50.910842 kernel: clk: Disabling unused clocks Jun 21 05:27:50.910860 kernel: Warning: unable to open an initial console. Jun 21 05:27:50.910875 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 21 05:27:50.910890 kernel: Write protecting the kernel read-only data: 24576k Jun 21 05:27:50.910905 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 21 05:27:50.910920 kernel: Run /init as init process Jun 21 05:27:50.910934 kernel: with arguments: Jun 21 05:27:50.910948 kernel: /init Jun 21 05:27:50.910963 kernel: with environment: Jun 21 05:27:50.910978 kernel: HOME=/ Jun 21 05:27:50.910996 kernel: TERM=linux Jun 21 05:27:50.911008 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 21 05:27:50.911025 systemd[1]: Successfully made /usr/ read-only. Jun 21 05:27:50.911044 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 05:27:50.911059 systemd[1]: Detected virtualization kvm. Jun 21 05:27:50.911073 systemd[1]: Detected architecture x86-64. Jun 21 05:27:50.911087 systemd[1]: Running in initrd. Jun 21 05:27:50.911104 systemd[1]: No hostname configured, using default hostname. Jun 21 05:27:50.911132 systemd[1]: Hostname set to . Jun 21 05:27:50.911147 systemd[1]: Initializing machine ID from VM UUID. Jun 21 05:27:50.911161 systemd[1]: Queued start job for default target initrd.target. Jun 21 05:27:50.911172 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 05:27:50.911183 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 05:27:50.911200 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 21 05:27:50.911213 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 05:27:50.911234 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 21 05:27:50.911250 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 21 05:27:50.911267 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 21 05:27:50.911287 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 21 05:27:50.911306 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 05:27:50.911323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 05:27:50.911338 systemd[1]: Reached target paths.target - Path Units. Jun 21 05:27:50.911354 systemd[1]: Reached target slices.target - Slice Units. Jun 21 05:27:50.911369 systemd[1]: Reached target swap.target - Swaps. 
Jun 21 05:27:50.911385 systemd[1]: Reached target timers.target - Timer Units. Jun 21 05:27:50.911401 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 05:27:50.911416 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 05:27:50.911435 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 21 05:27:50.911451 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 21 05:27:50.911466 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 05:27:50.911482 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 05:27:50.911498 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 05:27:50.911514 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 05:27:50.911529 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 21 05:27:50.911545 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 05:27:50.911560 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 21 05:27:50.911581 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 21 05:27:50.911596 systemd[1]: Starting systemd-fsck-usr.service... Jun 21 05:27:50.911612 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 05:27:50.911628 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 05:27:50.911643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:27:50.911659 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 21 05:27:50.911722 systemd-journald[212]: Collecting audit messages is disabled. Jun 21 05:27:50.911758 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 05:27:50.911778 systemd[1]: Finished systemd-fsck-usr.service. Jun 21 05:27:50.911795 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 05:27:50.911823 systemd-journald[212]: Journal started Jun 21 05:27:50.911857 systemd-journald[212]: Runtime Journal (/run/log/journal/8c2709627d22460ab86b9fc66fea438d) is 4.9M, max 39.5M, 34.6M free. Jun 21 05:27:50.916148 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 05:27:50.914478 systemd-modules-load[213]: Inserted module 'overlay' Jun 21 05:27:50.948139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:27:50.950346 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 21 05:27:50.951052 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 05:27:50.953145 kernel: Bridge firewalling registered Jun 21 05:27:50.953213 systemd-modules-load[213]: Inserted module 'br_netfilter' Jun 21 05:27:50.954811 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 05:27:50.957732 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 21 05:27:50.961320 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 05:27:50.964200 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jun 21 05:27:50.969337 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 05:27:50.992104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:27:50.992524 systemd-tmpfiles[232]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 21 05:27:50.995210 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:27:51.000973 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 05:27:51.001983 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 05:27:51.004520 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 21 05:27:51.007271 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 05:27:51.040850 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 05:27:51.067021 systemd-resolved[250]: Positive Trust Anchors: Jun 21 05:27:51.067607 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 05:27:51.067648 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 05:27:51.073044 systemd-resolved[250]: Defaulting to hostname 'linux'. Jun 21 05:27:51.075412 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 05:27:51.076038 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 05:27:51.163171 kernel: SCSI subsystem initialized Jun 21 05:27:51.173204 kernel: Loading iSCSI transport class v2.0-870. Jun 21 05:27:51.188183 kernel: iscsi: registered transport (tcp) Jun 21 05:27:51.221422 kernel: iscsi: registered transport (qla4xxx) Jun 21 05:27:51.221522 kernel: QLogic iSCSI HBA Driver Jun 21 05:27:51.250356 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 05:27:51.287730 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 05:27:51.289148 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 05:27:51.361795 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 05:27:51.364574 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 21 05:27:51.432188 kernel: raid6: avx2x4 gen() 12613 MB/s Jun 21 05:27:51.447169 kernel: raid6: avx2x2 gen() 12351 MB/s Jun 21 05:27:51.464228 kernel: raid6: avx2x1 gen() 10623 MB/s Jun 21 05:27:51.464342 kernel: raid6: using algorithm avx2x4 gen() 12613 MB/s Jun 21 05:27:51.482435 kernel: raid6: .... xor() 5529 MB/s, rmw enabled Jun 21 05:27:51.482544 kernel: raid6: using avx2x2 recovery algorithm Jun 21 05:27:51.505182 kernel: xor: automatically using best checksumming function avx Jun 21 05:27:51.683173 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 05:27:51.692884 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 05:27:51.695773 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:27:51.728072 systemd-udevd[459]: Using default interface naming scheme 'v255'. Jun 21 05:27:51.735950 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:27:51.739982 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 05:27:51.767921 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jun 21 05:27:51.799821 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 05:27:51.801959 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 05:27:51.872718 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:27:51.876526 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 05:27:51.965396 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jun 21 05:27:51.973919 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jun 21 05:27:51.986146 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Jun 21 05:27:51.986395 kernel: scsi host0: Virtio SCSI HBA Jun 21 05:27:51.997151 kernel: cryptd: max_cpu_qlen set to 1000 Jun 21 05:27:52.021225 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 21 05:27:52.021294 kernel: GPT:9289727 != 125829119 Jun 21 05:27:52.023263 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 21 05:27:52.029550 kernel: GPT:9289727 != 125829119 Jun 21 05:27:52.029655 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 21 05:27:52.029675 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:27:52.050148 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jun 21 05:27:52.056183 kernel: AES CTR mode by8 optimization enabled Jun 21 05:27:52.069184 kernel: ACPI: bus type USB registered Jun 21 05:27:52.085163 kernel: usbcore: registered new interface driver usbfs Jun 21 05:27:52.092223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:27:52.092468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:27:52.093252 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:27:52.097988 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
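A side note on the GPT warnings above ("GPT:9289727 != 125829119"): the backup GPT header sits where a much smaller disk image would end rather than at the end of the 60 GiB virtual disk, and the table is rewritten shortly afterwards (see the disk-uuid.service messages below). The arithmetic, using only figures from the log (the ~4.4 GiB value is inferred, not stated in the log):

```python
# Figures taken from the virtio_blk and GPT lines above.
disk_sectors  = 125_829_120      # vda size in 512-byte logical blocks
image_sectors = 9_289_727 + 1    # backup GPT header expected at LBA 9289727

print(disk_sectors  * 512 / 2**30)  # 60.0 GiB -> matches "64.4 GB/60.0 GiB"
print(image_sectors * 512 / 2**30)  # ~4.4 GiB -> size the GPT was written for
```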
Jun 21 05:27:52.102186 kernel: usbcore: registered new interface driver hub Jun 21 05:27:52.105806 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jun 21 05:27:52.106242 kernel: usbcore: registered new device driver usb Jun 21 05:27:52.106269 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Jun 21 05:27:52.104361 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:27:52.124146 kernel: libata version 3.00 loaded. Jun 21 05:27:52.129392 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 21 05:27:52.141143 kernel: scsi host1: ata_piix Jun 21 05:27:52.152164 kernel: scsi host2: ata_piix Jun 21 05:27:52.155955 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Jun 21 05:27:52.156032 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Jun 21 05:27:52.223026 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 21 05:27:52.224674 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:27:52.252739 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 05:27:52.266808 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 21 05:27:52.278270 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 21 05:27:52.279638 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 21 05:27:52.282237 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 05:27:52.310925 disk-uuid[610]: Primary Header is updated. Jun 21 05:27:52.310925 disk-uuid[610]: Secondary Entries is updated. Jun 21 05:27:52.310925 disk-uuid[610]: Secondary Header is updated. Jun 21 05:27:52.319193 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:27:52.329157 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:27:52.343665 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jun 21 05:27:52.343934 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jun 21 05:27:52.344983 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jun 21 05:27:52.347408 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jun 21 05:27:52.347745 kernel: hub 1-0:1.0: USB hub found Jun 21 05:27:52.349062 kernel: hub 1-0:1.0: 2 ports detected Jun 21 05:27:52.500599 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 05:27:52.526673 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 05:27:52.527188 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 05:27:52.528028 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 05:27:52.530308 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 05:27:52.571806 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 05:27:53.327747 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:27:53.328230 disk-uuid[611]: The operation has completed successfully. Jun 21 05:27:53.398562 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 05:27:53.398722 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jun 21 05:27:53.439471 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 05:27:53.472467 sh[635]: Success Jun 21 05:27:53.493278 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 05:27:53.493361 kernel: device-mapper: uevent: version 1.0.3 Jun 21 05:27:53.494960 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 05:27:53.505907 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jun 21 05:27:53.568382 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 05:27:53.573245 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 21 05:27:53.585403 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 05:27:53.602166 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 05:27:53.602254 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (647) Jun 21 05:27:53.606156 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9 Jun 21 05:27:53.606255 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:27:53.606276 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 05:27:53.616717 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 05:27:53.617365 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 05:27:53.617822 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 05:27:53.619026 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 05:27:53.623287 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 05:27:53.653150 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (680) Jun 21 05:27:53.655519 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:27:53.655591 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:27:53.655638 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:27:53.665186 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:27:53.666977 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 05:27:53.669328 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 21 05:27:53.774327 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 05:27:53.778407 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 05:27:53.880274 systemd-networkd[816]: lo: Link UP Jun 21 05:27:53.880287 systemd-networkd[816]: lo: Gained carrier Jun 21 05:27:53.884544 systemd-networkd[816]: Enumeration completed Jun 21 05:27:53.885069 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 05:27:53.885831 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 21 05:27:53.885835 systemd-networkd[816]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Jun 21 05:27:53.888536 systemd[1]: Reached target network.target - Network. Jun 21 05:27:53.889015 systemd-networkd[816]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 05:27:53.889021 systemd-networkd[816]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 05:27:53.890006 systemd-networkd[816]: eth0: Link UP Jun 21 05:27:53.890011 systemd-networkd[816]: eth0: Gained carrier Jun 21 05:27:53.890027 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 21 05:27:53.895566 systemd-networkd[816]: eth1: Link UP Jun 21 05:27:53.895996 systemd-networkd[816]: eth1: Gained carrier Jun 21 05:27:53.896018 systemd-networkd[816]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 05:27:53.910096 ignition[725]: Ignition 2.21.0 Jun 21 05:27:53.910138 ignition[725]: Stage: fetch-offline Jun 21 05:27:53.910212 ignition[725]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:27:53.911633 systemd-networkd[816]: eth1: DHCPv4 address 10.124.0.22/20 acquired from 169.254.169.253 Jun 21 05:27:53.910227 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:27:53.913687 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 05:27:53.910427 ignition[725]: parsed url from cmdline: "" Jun 21 05:27:53.910432 ignition[725]: no config URL provided Jun 21 05:27:53.910441 ignition[725]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 05:27:53.916258 systemd-networkd[816]: eth0: DHCPv4 address 143.198.235.111/20, gateway 143.198.224.1 acquired from 169.254.169.253 Jun 21 05:27:53.910453 ignition[725]: no config at "/usr/lib/ignition/user.ign" Jun 21 05:27:53.920220 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
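A side note on the DHCP leases above: both interfaces get their addresses from DigitalOcean's 169.254.169.253 service, and the public lease 143.198.235.111/20 places the gateway 143.198.224.1 on-link in the same /20. The check, with values copied from the log (illustrative only):

```python
import ipaddress

# Lease and gateway exactly as reported by systemd-networkd above.
iface   = ipaddress.ip_interface("143.198.235.111/20")
gateway = ipaddress.ip_address("143.198.224.1")

print(iface.network)             # 143.198.224.0/20
print(gateway in iface.network)  # True: the gateway is on-link
```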
Jun 21 05:27:53.910462 ignition[725]: failed to fetch config: resource requires networking Jun 21 05:27:53.910982 ignition[725]: Ignition finished successfully Jun 21 05:27:53.956450 ignition[825]: Ignition 2.21.0 Jun 21 05:27:53.956468 ignition[825]: Stage: fetch Jun 21 05:27:53.956693 ignition[825]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:27:53.956708 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:27:53.956827 ignition[825]: parsed url from cmdline: "" Jun 21 05:27:53.956831 ignition[825]: no config URL provided Jun 21 05:27:53.956837 ignition[825]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 05:27:53.956845 ignition[825]: no config at "/usr/lib/ignition/user.ign" Jun 21 05:27:53.956884 ignition[825]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jun 21 05:27:53.983678 ignition[825]: GET result: OK Jun 21 05:27:53.984498 ignition[825]: parsing config with SHA512: e57f0c2df27c726697ec3d0ea187b9252a33c7a5865d713cabf6e36ed9893a50bec2c23510e0edfa0028618615bf82c1c713e693e60022dc319081f5c2ff5d81 Jun 21 05:27:53.993330 unknown[825]: fetched base config from "system" Jun 21 05:27:53.993871 ignition[825]: fetch: fetch complete Jun 21 05:27:53.993343 unknown[825]: fetched base config from "system" Jun 21 05:27:53.993879 ignition[825]: fetch: fetch passed Jun 21 05:27:53.993356 unknown[825]: fetched user config from "digitalocean" Jun 21 05:27:53.993949 ignition[825]: Ignition finished successfully Jun 21 05:27:53.997539 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 21 05:27:54.001341 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 05:27:54.040142 ignition[832]: Ignition 2.21.0 Jun 21 05:27:54.040162 ignition[832]: Stage: kargs Jun 21 05:27:54.040379 ignition[832]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:27:54.040394 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:27:54.042690 ignition[832]: kargs: kargs passed Jun 21 05:27:54.042807 ignition[832]: Ignition finished successfully Jun 21 05:27:54.047550 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 05:27:54.049810 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 21 05:27:54.084938 ignition[839]: Ignition 2.21.0 Jun 21 05:27:54.085689 ignition[839]: Stage: disks Jun 21 05:27:54.085912 ignition[839]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:27:54.085925 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:27:54.088088 ignition[839]: disks: disks passed Jun 21 05:27:54.088216 ignition[839]: Ignition finished successfully Jun 21 05:27:54.090532 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 05:27:54.091210 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 05:27:54.091813 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 05:27:54.092506 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 05:27:54.093234 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 05:27:54.094203 systemd[1]: Reached target basic.target - Basic System. Jun 21 05:27:54.096889 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
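A side note on the Ignition fetch stage above: the config is pulled from the droplet-local metadata endpoint and Ignition logs a SHA512 of the parsed result. A rough, illustrative sketch of the equivalent manual fetch (runnable only from inside a droplet; the digest of the raw user data will not necessarily match the digest Ignition logs for the parsed config):

```python
import hashlib
import urllib.request

# Endpoint copied from the Ignition log line above.
URL = "http://169.254.169.254/metadata/v1/user-data"

with urllib.request.urlopen(URL, timeout=5) as resp:
    user_data = resp.read()

print(hashlib.sha512(user_data).hexdigest())
```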
Jun 21 05:27:54.133903 systemd-fsck[848]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 21 05:27:54.137239 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 05:27:54.140276 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 05:27:54.283155 kernel: EXT4-fs (vda9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none. Jun 21 05:27:54.284679 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 05:27:54.286277 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 05:27:54.289855 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 05:27:54.292029 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 05:27:54.296286 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jun 21 05:27:54.305339 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 21 05:27:54.305777 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 05:27:54.305887 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 05:27:54.321470 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (856) Jun 21 05:27:54.325000 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:27:54.325741 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 05:27:54.328793 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:27:54.328876 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:27:54.332700 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 21 05:27:54.354586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 05:27:54.421090 coreos-metadata[859]: Jun 21 05:27:54.420 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:27:54.432158 initrd-setup-root[886]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 05:27:54.436154 coreos-metadata[859]: Jun 21 05:27:54.434 INFO Fetch successful Jun 21 05:27:54.439539 initrd-setup-root[893]: cut: /sysroot/etc/group: No such file or directory Jun 21 05:27:54.444003 coreos-metadata[859]: Jun 21 05:27:54.443 INFO wrote hostname ci-4372.0.0-e-bb84d467cd to /sysroot/etc/hostname Jun 21 05:27:54.445631 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 21 05:27:54.449850 coreos-metadata[858]: Jun 21 05:27:54.449 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:27:54.454109 initrd-setup-root[901]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 05:27:54.459872 initrd-setup-root[908]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 05:27:54.463992 coreos-metadata[858]: Jun 21 05:27:54.463 INFO Fetch successful Jun 21 05:27:54.470591 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jun 21 05:27:54.470766 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jun 21 05:27:54.609680 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 05:27:54.612662 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 05:27:54.615356 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jun 21 05:27:54.642783 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 05:27:54.646272 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:27:54.668817 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 21 05:27:54.690201 ignition[978]: INFO : Ignition 2.21.0 Jun 21 05:27:54.691317 ignition[978]: INFO : Stage: mount Jun 21 05:27:54.691830 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:27:54.693148 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:27:54.695053 ignition[978]: INFO : mount: mount passed Jun 21 05:27:54.695053 ignition[978]: INFO : Ignition finished successfully Jun 21 05:27:54.696566 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 05:27:54.699253 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 05:27:54.721300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 05:27:54.745165 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (989) Jun 21 05:27:54.747164 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:27:54.749530 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:27:54.749646 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:27:54.755967 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 05:27:54.809690 ignition[1006]: INFO : Ignition 2.21.0 Jun 21 05:27:54.811267 ignition[1006]: INFO : Stage: files Jun 21 05:27:54.811267 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:27:54.811267 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:27:54.813855 ignition[1006]: DEBUG : files: compiled without relabeling support, skipping Jun 21 05:27:54.816014 ignition[1006]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 05:27:54.816014 ignition[1006]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 05:27:54.820308 ignition[1006]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 05:27:54.821401 ignition[1006]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 05:27:54.822746 unknown[1006]: wrote ssh authorized keys file for user: core Jun 21 05:27:54.823535 ignition[1006]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 05:27:54.826448 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 21 05:27:54.827230 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jun 21 05:27:54.979702 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 05:27:55.086802 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 21 05:27:55.086802 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 21 05:27:55.089034 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 21 05:27:55.089034 ignition[1006]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 05:27:55.089034 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 05:27:55.089034 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 05:27:55.089034 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 05:27:55.089034 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 05:27:55.089034 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 05:27:55.099417 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 05:27:55.099417 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 05:27:55.099417 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 05:27:55.099417 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 05:27:55.099417 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 05:27:55.099417 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jun 21 05:27:55.120016 systemd-networkd[816]: eth1: Gained IPv6LL Jun 21 05:27:55.247458 systemd-networkd[816]: eth0: Gained IPv6LL Jun 21 05:27:55.915272 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 21 05:27:56.999548 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 05:27:56.999548 ignition[1006]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 21 05:27:57.001476 ignition[1006]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 05:27:57.005041 ignition[1006]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 05:27:57.005041 ignition[1006]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 21 05:27:57.005041 ignition[1006]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 21 05:27:57.007047 ignition[1006]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 21 05:27:57.007047 ignition[1006]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 21 05:27:57.007047 ignition[1006]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 21 
05:27:57.007047 ignition[1006]: INFO : files: files passed Jun 21 05:27:57.007047 ignition[1006]: INFO : Ignition finished successfully Jun 21 05:27:57.009962 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 21 05:27:57.013828 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 21 05:27:57.016401 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 21 05:27:57.037364 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 21 05:27:57.037497 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 21 05:27:57.047795 initrd-setup-root-after-ignition[1036]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 05:27:57.047795 initrd-setup-root-after-ignition[1036]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 21 05:27:57.050696 initrd-setup-root-after-ignition[1040]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 05:27:57.051656 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 05:27:57.053101 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 21 05:27:57.055619 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 21 05:27:57.120326 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 21 05:27:57.120490 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 21 05:27:57.122488 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 21 05:27:57.123057 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 21 05:27:57.124241 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 21 05:27:57.125761 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 21 05:27:57.157426 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 05:27:57.160880 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 21 05:27:57.196110 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 21 05:27:57.197387 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 05:27:57.197974 systemd[1]: Stopped target timers.target - Timer Units. Jun 21 05:27:57.198491 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 21 05:27:57.198657 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 05:27:57.199748 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 21 05:27:57.200321 systemd[1]: Stopped target basic.target - Basic System. Jun 21 05:27:57.201052 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 21 05:27:57.202362 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 05:27:57.203062 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 21 05:27:57.203849 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 21 05:27:57.204810 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 21 05:27:57.205723 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 05:27:57.206978 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jun 21 05:27:57.207854 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 21 05:27:57.208691 systemd[1]: Stopped target swap.target - Swaps. Jun 21 05:27:57.209582 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 21 05:27:57.209828 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 21 05:27:57.210992 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 21 05:27:57.211793 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 05:27:57.212682 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 21 05:27:57.212824 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 05:27:57.213832 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 21 05:27:57.214095 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 21 05:27:57.215206 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 21 05:27:57.215441 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 05:27:57.216349 systemd[1]: ignition-files.service: Deactivated successfully. Jun 21 05:27:57.216480 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 21 05:27:57.217387 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 21 05:27:57.217598 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 21 05:27:57.221302 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 21 05:27:57.221764 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 21 05:27:57.222042 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 05:27:57.227515 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 21 05:27:57.228602 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 21 05:27:57.228892 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:27:57.234197 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 21 05:27:57.234386 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 05:27:57.242765 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 21 05:27:57.243514 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 21 05:27:57.267988 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 21 05:27:57.274065 ignition[1060]: INFO : Ignition 2.21.0 Jun 21 05:27:57.274807 ignition[1060]: INFO : Stage: umount Jun 21 05:27:57.275686 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:27:57.275686 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:27:57.279017 ignition[1060]: INFO : umount: umount passed Jun 21 05:27:57.279017 ignition[1060]: INFO : Ignition finished successfully Jun 21 05:27:57.282084 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 21 05:27:57.282313 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 21 05:27:57.284041 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 21 05:27:57.284259 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 21 05:27:57.284813 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 21 05:27:57.284875 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jun 21 05:27:57.285616 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 21 05:27:57.285692 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 21 05:27:57.286503 systemd[1]: Stopped target network.target - Network. Jun 21 05:27:57.287247 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 21 05:27:57.287327 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 05:27:57.287942 systemd[1]: Stopped target paths.target - Path Units. Jun 21 05:27:57.288573 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 21 05:27:57.292847 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 05:27:57.293446 systemd[1]: Stopped target slices.target - Slice Units. Jun 21 05:27:57.302288 systemd[1]: Stopped target sockets.target - Socket Units. Jun 21 05:27:57.303026 systemd[1]: iscsid.socket: Deactivated successfully. Jun 21 05:27:57.303105 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 05:27:57.324070 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 21 05:27:57.324168 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 05:27:57.325104 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 21 05:27:57.325223 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 21 05:27:57.325846 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 21 05:27:57.326012 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 21 05:27:57.326836 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 21 05:27:57.327650 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 21 05:27:57.336364 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 21 05:27:57.336557 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 21 05:27:57.342162 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 21 05:27:57.342649 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 21 05:27:57.342840 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 21 05:27:57.345520 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 21 05:27:57.346838 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 21 05:27:57.347582 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 21 05:27:57.347645 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 21 05:27:57.349857 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 21 05:27:57.370914 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 21 05:27:57.370990 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 05:27:57.373559 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 05:27:57.373655 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:27:57.377241 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 21 05:27:57.377332 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 21 05:27:57.378543 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 21 05:27:57.378602 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jun 21 05:27:57.380946 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:27:57.384609 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 05:27:57.384697 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:27:57.385474 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 21 05:27:57.385620 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 21 05:27:57.388874 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 21 05:27:57.388987 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 21 05:27:57.391944 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 21 05:27:57.392753 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:27:57.394636 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 21 05:27:57.394725 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 21 05:27:57.396352 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 21 05:27:57.396408 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 05:27:57.396929 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 21 05:27:57.397004 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 21 05:27:57.397778 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 21 05:27:57.397849 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 21 05:27:57.399294 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 21 05:27:57.399376 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 05:27:57.403322 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 21 05:27:57.403831 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 21 05:27:57.403916 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 05:27:57.406557 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 21 05:27:57.406631 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:27:57.407441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:27:57.407498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:27:57.412648 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 21 05:27:57.412735 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 21 05:27:57.412782 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:27:57.413340 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 21 05:27:57.414780 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 21 05:27:57.425024 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 21 05:27:57.425241 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 21 05:27:57.426937 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 21 05:27:57.428921 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 21 05:27:57.465086 systemd[1]: Switching root. 
Jun 21 05:27:57.552028 systemd-journald[212]: Journal stopped Jun 21 05:27:59.024751 systemd-journald[212]: Received SIGTERM from PID 1 (systemd). Jun 21 05:27:59.024865 kernel: SELinux: policy capability network_peer_controls=1 Jun 21 05:27:59.024900 kernel: SELinux: policy capability open_perms=1 Jun 21 05:27:59.024921 kernel: SELinux: policy capability extended_socket_class=1 Jun 21 05:27:59.024941 kernel: SELinux: policy capability always_check_network=0 Jun 21 05:27:59.024970 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 21 05:27:59.024991 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 21 05:27:59.025012 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 21 05:27:59.025034 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 21 05:27:59.025059 kernel: SELinux: policy capability userspace_initial_context=0 Jun 21 05:27:59.025078 kernel: audit: type=1403 audit(1750483677.724:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 21 05:27:59.025099 systemd[1]: Successfully loaded SELinux policy in 39.917ms. Jun 21 05:27:59.025157 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.132ms. Jun 21 05:27:59.025187 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 05:27:59.025211 systemd[1]: Detected virtualization kvm. Jun 21 05:27:59.025230 systemd[1]: Detected architecture x86-64. Jun 21 05:27:59.025252 systemd[1]: Detected first boot. Jun 21 05:27:59.025273 systemd[1]: Hostname set to <ci-4372.0.0-e-bb84d467cd>. Jun 21 05:27:59.025296 systemd[1]: Initializing machine ID from VM UUID. Jun 21 05:27:59.025318 zram_generator::config[1105]: No configuration found. Jun 21 05:27:59.025347 kernel: Guest personality initialized and is inactive Jun 21 05:27:59.025374 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 21 05:27:59.025397 kernel: Initialized host personality Jun 21 05:27:59.025416 kernel: NET: Registered PF_VSOCK protocol family Jun 21 05:27:59.025437 systemd[1]: Populated /etc with preset unit settings. Jun 21 05:27:59.025462 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 21 05:27:59.025486 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 21 05:27:59.025501 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 21 05:27:59.025515 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 21 05:27:59.025530 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 21 05:27:59.025549 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 21 05:27:59.025563 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 21 05:27:59.025578 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 21 05:27:59.025592 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 21 05:27:59.025618 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 21 05:27:59.025639 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 21 05:27:59.025658 systemd[1]: Created slice user.slice - User and Session Slice. 
Jun 21 05:27:59.025678 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 05:27:59.025702 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 05:27:59.025731 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 21 05:27:59.025755 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 21 05:27:59.025780 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 21 05:27:59.025826 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 05:27:59.025846 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 21 05:27:59.025866 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 05:27:59.025892 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 05:27:59.025914 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 21 05:27:59.025936 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 21 05:27:59.025958 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 21 05:27:59.025982 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 21 05:27:59.026006 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 05:27:59.026085 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 05:27:59.026112 systemd[1]: Reached target slices.target - Slice Units. Jun 21 05:27:59.026162 systemd[1]: Reached target swap.target - Swaps. Jun 21 05:27:59.026195 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 21 05:27:59.026219 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 21 05:27:59.026239 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 21 05:27:59.026262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 05:27:59.026282 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 05:27:59.026305 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 05:27:59.026329 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 21 05:27:59.026352 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 21 05:27:59.026375 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 21 05:27:59.026405 systemd[1]: Mounting media.mount - External Media Directory... Jun 21 05:27:59.026426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:27:59.026442 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 21 05:27:59.026457 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 21 05:27:59.026472 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 21 05:27:59.026498 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 21 05:27:59.026520 systemd[1]: Reached target machines.target - Containers. 
Jun 21 05:27:59.026543 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 21 05:27:59.026565 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:27:59.026580 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 05:27:59.026595 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 21 05:27:59.026609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 05:27:59.026623 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 05:27:59.026639 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 05:27:59.026654 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 21 05:27:59.026674 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 05:27:59.026694 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 21 05:27:59.026712 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 21 05:27:59.026726 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 21 05:27:59.026740 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 21 05:27:59.026753 systemd[1]: Stopped systemd-fsck-usr.service. Jun 21 05:27:59.026769 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:27:59.026783 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 05:27:59.026801 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 05:27:59.026820 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 05:27:59.026845 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 21 05:27:59.026868 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 21 05:27:59.026886 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 05:27:59.026905 systemd[1]: verity-setup.service: Deactivated successfully. Jun 21 05:27:59.026919 systemd[1]: Stopped verity-setup.service. Jun 21 05:27:59.026942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:27:59.026963 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 21 05:27:59.026983 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 21 05:27:59.027003 kernel: loop: module loaded Jun 21 05:27:59.027020 systemd[1]: Mounted media.mount - External Media Directory. Jun 21 05:27:59.027034 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 21 05:27:59.027058 kernel: fuse: init (API version 7.41) Jun 21 05:27:59.027077 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 21 05:27:59.027096 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 21 05:27:59.027116 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jun 21 05:27:59.032444 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 05:27:59.032474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 05:27:59.032494 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 05:27:59.032516 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 05:27:59.032553 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 21 05:27:59.032573 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 21 05:27:59.032594 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 05:27:59.032616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 05:27:59.032647 kernel: ACPI: bus type drm_connector registered Jun 21 05:27:59.032682 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 21 05:27:59.032707 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 05:27:59.032729 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 21 05:27:59.032750 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 21 05:27:59.032778 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 05:27:59.032799 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 05:27:59.032820 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 05:27:59.032846 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 05:27:59.032867 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 05:27:59.032889 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 05:27:59.032912 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 21 05:27:59.032934 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 05:27:59.032956 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 21 05:27:59.032978 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 21 05:27:59.033005 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 05:27:59.033027 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 21 05:27:59.033049 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 21 05:27:59.033075 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:27:59.033193 systemd-journald[1182]: Collecting audit messages is disabled. Jun 21 05:27:59.033242 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 21 05:27:59.033263 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 05:27:59.033292 systemd-journald[1182]: Journal started Jun 21 05:27:59.033335 systemd-journald[1182]: Runtime Journal (/run/log/journal/8c2709627d22460ab86b9fc66fea438d) is 4.9M, max 39.5M, 34.6M free. Jun 21 05:27:58.540961 systemd[1]: Queued start job for default target multi-user.target. Jun 21 05:27:58.565115 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Jun 21 05:27:59.043013 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 21 05:27:58.565692 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 21 05:27:59.048924 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 05:27:59.049031 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 05:27:59.052888 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 21 05:27:59.054082 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 21 05:27:59.056956 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 05:27:59.099526 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 21 05:27:59.110483 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 21 05:27:59.169720 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 05:27:59.171775 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 05:27:59.192599 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 21 05:27:59.197268 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:27:59.206218 kernel: loop0: detected capacity change from 0 to 146240 Jun 21 05:27:59.246373 systemd-journald[1182]: Time spent on flushing to /var/log/journal/8c2709627d22460ab86b9fc66fea438d is 60.697ms for 1013 entries. Jun 21 05:27:59.246373 systemd-journald[1182]: System Journal (/var/log/journal/8c2709627d22460ab86b9fc66fea438d) is 8M, max 195.6M, 187.6M free. Jun 21 05:27:59.352839 systemd-journald[1182]: Received client request to flush runtime journal. Jun 21 05:27:59.353076 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 05:27:59.356561 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 05:27:59.374435 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 05:27:59.398712 kernel: loop1: detected capacity change from 0 to 113872 Jun 21 05:27:59.431939 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:27:59.443291 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 05:27:59.448414 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 05:27:59.464172 kernel: loop2: detected capacity change from 0 to 224512 Jun 21 05:27:59.500285 kernel: loop3: detected capacity change from 0 to 8 Jun 21 05:27:59.520907 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jun 21 05:27:59.520946 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jun 21 05:27:59.526429 kernel: loop4: detected capacity change from 0 to 146240 Jun 21 05:27:59.536320 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:27:59.569759 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 05:27:59.580975 kernel: loop5: detected capacity change from 0 to 113872 Jun 21 05:27:59.610194 kernel: loop6: detected capacity change from 0 to 224512 Jun 21 05:27:59.634179 kernel: loop7: detected capacity change from 0 to 8 Jun 21 05:27:59.638415 (sd-merge)[1254]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. 
Jun 21 05:27:59.640172 (sd-merge)[1254]: Merged extensions into '/usr'. Jun 21 05:27:59.651332 systemd[1]: Reload requested from client PID 1211 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 05:27:59.651357 systemd[1]: Reloading... Jun 21 05:27:59.772159 zram_generator::config[1281]: No configuration found. Jun 21 05:28:00.111235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:28:00.145148 ldconfig[1208]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 21 05:28:00.270619 systemd[1]: Reloading finished in 618 ms. Jun 21 05:28:00.306693 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 05:28:00.309363 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 05:28:00.323484 systemd[1]: Starting ensure-sysext.service... Jun 21 05:28:00.326051 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 05:28:00.369969 systemd[1]: Reload requested from client PID 1324 ('systemctl') (unit ensure-sysext.service)... Jun 21 05:28:00.369992 systemd[1]: Reloading... Jun 21 05:28:00.408932 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 05:28:00.408980 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 05:28:00.409378 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 05:28:00.409712 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 05:28:00.410945 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 05:28:00.411333 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Jun 21 05:28:00.411396 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Jun 21 05:28:00.419659 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 05:28:00.419674 systemd-tmpfiles[1325]: Skipping /boot Jun 21 05:28:00.484197 zram_generator::config[1349]: No configuration found. Jun 21 05:28:00.490024 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 05:28:00.490045 systemd-tmpfiles[1325]: Skipping /boot Jun 21 05:28:00.652555 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:28:00.771924 systemd[1]: Reloading finished in 401 ms. Jun 21 05:28:00.787853 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 05:28:00.805897 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 05:28:00.818478 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 05:28:00.824526 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 05:28:00.833515 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 05:28:00.839562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 21 05:28:00.852465 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:28:00.859968 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 05:28:00.865101 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:00.865402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:28:00.870415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 05:28:00.872779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 05:28:00.879548 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 05:28:00.881112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:28:00.881363 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:28:00.881499 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:00.890479 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:00.890712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:28:00.890905 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:28:00.890989 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:28:00.902598 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 05:28:00.904232 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:00.912494 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:00.912862 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:28:00.928669 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 05:28:00.932003 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:28:00.933333 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:28:00.933577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:28:00.935009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jun 21 05:28:00.935804 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 05:28:00.944078 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 05:28:00.952544 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 05:28:00.956435 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 05:28:00.957932 systemd[1]: Finished ensure-sysext.service. Jun 21 05:28:00.959005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 05:28:00.960030 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 05:28:00.973556 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 05:28:00.973835 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 05:28:00.982609 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 21 05:28:00.984221 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 05:28:00.984767 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 05:28:00.985666 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 05:28:00.986220 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 05:28:00.991273 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 05:28:01.009294 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 05:28:01.046072 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 05:28:01.072882 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 05:28:01.078210 augenrules[1443]: No rules Jun 21 05:28:01.079798 systemd-udevd[1402]: Using default interface naming scheme 'v255'. Jun 21 05:28:01.085715 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 05:28:01.087267 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 05:28:01.148304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:28:01.155516 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 05:28:01.181045 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 21 05:28:01.181841 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 05:28:01.217074 systemd-resolved[1401]: Positive Trust Anchors: Jun 21 05:28:01.217095 systemd-resolved[1401]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 05:28:01.217158 systemd-resolved[1401]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 05:28:01.223184 systemd-resolved[1401]: Using system hostname 'ci-4372.0.0-e-bb84d467cd'. Jun 21 05:28:01.256858 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 05:28:01.260100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 05:28:01.260738 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 05:28:01.262298 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 05:28:01.262891 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 05:28:01.264448 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 05:28:01.265283 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 05:28:01.266428 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 05:28:01.268231 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 05:28:01.268795 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 05:28:01.268845 systemd[1]: Reached target paths.target - Path Units. Jun 21 05:28:01.269312 systemd[1]: Reached target timers.target - Timer Units. Jun 21 05:28:01.272477 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 05:28:01.277648 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 05:28:01.287565 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 05:28:01.289546 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 05:28:01.291264 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 05:28:01.302335 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 05:28:01.303748 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 05:28:01.308417 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 05:28:01.320883 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 05:28:01.323014 systemd[1]: Reached target basic.target - Basic System. Jun 21 05:28:01.324630 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 05:28:01.324694 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 05:28:01.330505 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 21 05:28:01.336308 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jun 21 05:28:01.340356 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 05:28:01.344587 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 05:28:01.356659 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 21 05:28:01.357297 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 05:28:01.359895 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 21 05:28:01.369572 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 05:28:01.373098 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 05:28:01.389754 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 05:28:01.410486 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 05:28:01.419519 jq[1482]: false Jun 21 05:28:01.422862 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 05:28:01.427246 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 05:28:01.429406 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 05:28:01.431753 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 05:28:01.435881 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 05:28:01.439266 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 05:28:01.440445 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 05:28:01.440739 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 05:28:01.456025 google_oslogin_nss_cache[1484]: oslogin_cache_refresh[1484]: Refreshing passwd entry cache Jun 21 05:28:01.455866 oslogin_cache_refresh[1484]: Refreshing passwd entry cache Jun 21 05:28:01.491400 oslogin_cache_refresh[1484]: Failure getting users, quitting Jun 21 05:28:01.494402 google_oslogin_nss_cache[1484]: oslogin_cache_refresh[1484]: Failure getting users, quitting Jun 21 05:28:01.494402 google_oslogin_nss_cache[1484]: oslogin_cache_refresh[1484]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 05:28:01.494402 google_oslogin_nss_cache[1484]: oslogin_cache_refresh[1484]: Refreshing group entry cache Jun 21 05:28:01.494402 google_oslogin_nss_cache[1484]: oslogin_cache_refresh[1484]: Failure getting groups, quitting Jun 21 05:28:01.494402 google_oslogin_nss_cache[1484]: oslogin_cache_refresh[1484]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 05:28:01.484177 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 05:28:01.491425 oslogin_cache_refresh[1484]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 05:28:01.484518 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jun 21 05:28:01.491491 oslogin_cache_refresh[1484]: Refreshing group entry cache Jun 21 05:28:01.492081 oslogin_cache_refresh[1484]: Failure getting groups, quitting Jun 21 05:28:01.492091 oslogin_cache_refresh[1484]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 05:28:01.502711 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 05:28:01.505770 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 21 05:28:01.524483 update_engine[1493]: I20250621 05:28:01.524374 1493 main.cc:92] Flatcar Update Engine starting Jun 21 05:28:01.548599 extend-filesystems[1483]: Found /dev/vda6 Jun 21 05:28:01.552404 tar[1499]: linux-amd64/LICENSE Jun 21 05:28:01.552404 tar[1499]: linux-amd64/helm Jun 21 05:28:01.553525 extend-filesystems[1483]: Found /dev/vda9 Jun 21 05:28:01.559350 extend-filesystems[1483]: Checking size of /dev/vda9 Jun 21 05:28:01.568293 jq[1494]: true Jun 21 05:28:01.582132 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 05:28:01.584305 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 05:28:01.585005 systemd-networkd[1454]: lo: Link UP Jun 21 05:28:01.585017 systemd-networkd[1454]: lo: Gained carrier Jun 21 05:28:01.596640 coreos-metadata[1479]: Jun 21 05:28:01.596 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:28:01.598362 systemd-networkd[1454]: Enumeration completed Jun 21 05:28:01.599004 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 05:28:01.600353 coreos-metadata[1479]: Jun 21 05:28:01.598 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) Jun 21 05:28:01.599851 systemd[1]: Reached target network.target - Network. Jun 21 05:28:01.604964 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 05:28:01.613059 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 05:28:01.624375 dbus-daemon[1480]: [system] SELinux support is enabled Jun 21 05:28:01.631238 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 05:28:01.632274 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 05:28:01.638337 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 05:28:01.638416 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 05:28:01.639103 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 05:28:01.641182 jq[1519]: true Jun 21 05:28:01.639189 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 05:28:01.648679 update_engine[1493]: I20250621 05:28:01.648406 1493 update_check_scheduler.cc:74] Next update check in 6m39s Jun 21 05:28:01.662198 extend-filesystems[1483]: Resized partition /dev/vda9 Jun 21 05:28:01.662341 systemd[1]: Started update-engine.service - Update Engine. 
Jun 21 05:28:01.673173 extend-filesystems[1528]: resize2fs 1.47.2 (1-Jan-2025) Jun 21 05:28:01.691577 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jun 21 05:28:01.701942 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 05:28:01.729089 (ntainerd)[1532]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 05:28:01.760260 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 05:28:01.810974 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Jun 21 05:28:01.823770 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jun 21 05:28:01.824279 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 05:28:01.914366 kernel: ISO 9660 Extensions: RRIP_1991A Jun 21 05:28:01.917914 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jun 21 05:28:01.919812 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jun 21 05:28:01.928176 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jun 21 05:28:02.005779 bash[1549]: Updated "/home/core/.ssh/authorized_keys" Jun 21 05:28:02.006218 systemd-logind[1491]: New seat seat0. Jun 21 05:28:02.007341 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 05:28:02.010249 extend-filesystems[1528]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 21 05:28:02.010249 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 8 Jun 21 05:28:02.010249 extend-filesystems[1528]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jun 21 05:28:02.010417 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 05:28:02.014650 extend-filesystems[1483]: Resized filesystem in /dev/vda9 Jun 21 05:28:02.011994 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 05:28:02.012366 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 05:28:02.030272 systemd[1]: Starting sshkeys.service... Jun 21 05:28:02.075154 systemd-networkd[1454]: eth1: Configuring with /run/systemd/network/10-5a:37:30:8a:98:23.network. Jun 21 05:28:02.077461 systemd-networkd[1454]: eth1: Link UP Jun 21 05:28:02.079945 systemd-networkd[1454]: eth1: Gained carrier Jun 21 05:28:02.089554 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Jun 21 05:28:02.104341 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 21 05:28:02.109084 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 21 05:28:02.282394 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 05:28:02.300179 locksmithd[1531]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 05:28:02.301711 coreos-metadata[1562]: Jun 21 05:28:02.301 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:28:02.306066 systemd-networkd[1454]: eth0: Configuring with /run/systemd/network/10-d6:a5:06:0d:87:e0.network. 
Jun 21 05:28:02.306976 coreos-metadata[1562]: Jun 21 05:28:02.306 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) Jun 21 05:28:02.310353 systemd-networkd[1454]: eth0: Link UP Jun 21 05:28:02.311936 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Jun 21 05:28:02.313373 systemd-networkd[1454]: eth0: Gained carrier Jun 21 05:28:02.320790 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Jun 21 05:28:02.322793 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Jun 21 05:28:02.372476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 05:28:02.377447 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 05:28:02.442423 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 05:28:02.511835 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 05:28:02.537155 containerd[1532]: time="2025-06-21T05:28:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 05:28:02.543069 containerd[1532]: time="2025-06-21T05:28:02.543014775Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 05:28:02.549836 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 21 05:28:02.575707 containerd[1532]: time="2025-06-21T05:28:02.575658242Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.452µs" Jun 21 05:28:02.575838 containerd[1532]: time="2025-06-21T05:28:02.575817755Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 05:28:02.579424 containerd[1532]: time="2025-06-21T05:28:02.578188078Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 05:28:02.579424 containerd[1532]: time="2025-06-21T05:28:02.578424782Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 05:28:02.579424 containerd[1532]: time="2025-06-21T05:28:02.578455235Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 05:28:02.579424 containerd[1532]: time="2025-06-21T05:28:02.578497711Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 05:28:02.579424 containerd[1532]: time="2025-06-21T05:28:02.578597484Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 05:28:02.579424 containerd[1532]: time="2025-06-21T05:28:02.578615881Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 05:28:02.584649 containerd[1532]: time="2025-06-21T05:28:02.583266783Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 05:28:02.584649 containerd[1532]: 
time="2025-06-21T05:28:02.583306036Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 05:28:02.584649 containerd[1532]: time="2025-06-21T05:28:02.583324491Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 05:28:02.584649 containerd[1532]: time="2025-06-21T05:28:02.583333087Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 05:28:02.584649 containerd[1532]: time="2025-06-21T05:28:02.583535047Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 05:28:02.584649 containerd[1532]: time="2025-06-21T05:28:02.583866441Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 05:28:02.584649 containerd[1532]: time="2025-06-21T05:28:02.583912071Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 05:28:02.584649 containerd[1532]: time="2025-06-21T05:28:02.583923520Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 05:28:02.584649 containerd[1532]: time="2025-06-21T05:28:02.583978777Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 05:28:02.584649 containerd[1532]: time="2025-06-21T05:28:02.584340637Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 05:28:02.584649 containerd[1532]: time="2025-06-21T05:28:02.584453459Z" level=info msg="metadata content store policy set" policy=shared Jun 21 05:28:02.590878 containerd[1532]: time="2025-06-21T05:28:02.590827689Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 05:28:02.591094 containerd[1532]: time="2025-06-21T05:28:02.591064049Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 05:28:02.591227 containerd[1532]: time="2025-06-21T05:28:02.591210385Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 05:28:02.591302 containerd[1532]: time="2025-06-21T05:28:02.591290591Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 05:28:02.591394 containerd[1532]: time="2025-06-21T05:28:02.591378255Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 05:28:02.591476 containerd[1532]: time="2025-06-21T05:28:02.591462790Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 05:28:02.591582 containerd[1532]: time="2025-06-21T05:28:02.591567842Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 05:28:02.593627 containerd[1532]: time="2025-06-21T05:28:02.593180095Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 05:28:02.593627 containerd[1532]: time="2025-06-21T05:28:02.593232558Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 
05:28:02.593627 containerd[1532]: time="2025-06-21T05:28:02.593250794Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 05:28:02.593627 containerd[1532]: time="2025-06-21T05:28:02.593261095Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 05:28:02.593627 containerd[1532]: time="2025-06-21T05:28:02.593274577Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 05:28:02.593627 containerd[1532]: time="2025-06-21T05:28:02.593477572Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 05:28:02.593627 containerd[1532]: time="2025-06-21T05:28:02.593523656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 05:28:02.593627 containerd[1532]: time="2025-06-21T05:28:02.593566331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 05:28:02.593627 containerd[1532]: time="2025-06-21T05:28:02.593583833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 05:28:02.593627 containerd[1532]: time="2025-06-21T05:28:02.593597458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 05:28:02.594714 containerd[1532]: time="2025-06-21T05:28:02.593607762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 05:28:02.594714 containerd[1532]: time="2025-06-21T05:28:02.594011285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 05:28:02.594714 containerd[1532]: time="2025-06-21T05:28:02.594029861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 05:28:02.594714 containerd[1532]: time="2025-06-21T05:28:02.594566255Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 05:28:02.594714 containerd[1532]: time="2025-06-21T05:28:02.594591811Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 05:28:02.594714 containerd[1532]: time="2025-06-21T05:28:02.594606922Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 05:28:02.595220 containerd[1532]: time="2025-06-21T05:28:02.594978057Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 05:28:02.595220 containerd[1532]: time="2025-06-21T05:28:02.595006518Z" level=info msg="Start snapshots syncer" Jun 21 05:28:02.595374 containerd[1532]: time="2025-06-21T05:28:02.595345213Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 05:28:02.598883 containerd[1532]: time="2025-06-21T05:28:02.598319453Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 05:28:02.598883 containerd[1532]: time="2025-06-21T05:28:02.598485589Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 05:28:02.599216 containerd[1532]: time="2025-06-21T05:28:02.598692617Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 21 05:28:02.599729 coreos-metadata[1479]: Jun 21 05:28:02.599 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Jun 21 05:28:02.602171 kernel: ACPI: button: Power Button [PWRF] Jun 21 05:28:02.607617 containerd[1532]: time="2025-06-21T05:28:02.599403502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 05:28:02.607617 containerd[1532]: time="2025-06-21T05:28:02.607418981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 05:28:02.607617 containerd[1532]: time="2025-06-21T05:28:02.607441452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 05:28:02.607617 containerd[1532]: time="2025-06-21T05:28:02.607453323Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 05:28:02.607617 containerd[1532]: time="2025-06-21T05:28:02.607468512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 05:28:02.607617 containerd[1532]: time="2025-06-21T05:28:02.607481562Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 05:28:02.607617 containerd[1532]: time="2025-06-21T05:28:02.607493272Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 05:28:02.607617 
containerd[1532]: time="2025-06-21T05:28:02.607537280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 05:28:02.607617 containerd[1532]: time="2025-06-21T05:28:02.607551988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 05:28:02.607617 containerd[1532]: time="2025-06-21T05:28:02.607563588Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608566292Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608673811Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608689644Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608699669Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608707472Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608717303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608730320Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608751644Z" level=info msg="runtime interface created" Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608756948Z" level=info msg="created NRI interface" Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608768418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608788610Z" level=info msg="Connect containerd service" Jun 21 05:28:02.608936 containerd[1532]: time="2025-06-21T05:28:02.608887128Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 05:28:02.615176 containerd[1532]: time="2025-06-21T05:28:02.614609774Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 05:28:02.616147 coreos-metadata[1479]: Jun 21 05:28:02.616 INFO Fetch successful Jun 21 05:28:02.635726 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 21 05:28:02.712701 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 21 05:28:02.750186 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 21 05:28:02.755062 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jun 21 05:28:02.807755 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 21 05:28:02.846816 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 21 05:28:02.929474 kernel: Console: switching to colour dummy device 80x25 Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.925534576Z" level=info msg="Start subscribing containerd event" Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.925607122Z" level=info msg="Start recovering state" Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.925740927Z" level=info msg="Start event monitor" Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.925756138Z" level=info msg="Start cni network conf syncer for default" Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.925770902Z" level=info msg="Start streaming server" Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.925780242Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.925788592Z" level=info msg="runtime interface starting up..." Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.925797868Z" level=info msg="starting plugins..." Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.925845935Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.926617752Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.926759947Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 21 05:28:02.929585 containerd[1532]: time="2025-06-21T05:28:02.926996253Z" level=info msg="containerd successfully booted in 0.390296s" Jun 21 05:28:02.926959 systemd[1]: Started containerd.service - containerd container runtime. Jun 21 05:28:02.930269 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 21 05:28:02.930321 kernel: [drm] features: -context_init Jun 21 05:28:02.935457 kernel: [drm] number of scanouts: 1 Jun 21 05:28:02.935541 kernel: [drm] number of cap sets: 0 Jun 21 05:28:02.935563 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jun 21 05:28:03.093859 kernel: EDAC MC: Ver: 3.0.0 Jun 21 05:28:03.157167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:28:03.274019 systemd-logind[1491]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 05:28:03.306480 coreos-metadata[1562]: Jun 21 05:28:03.306 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Jun 21 05:28:03.310890 systemd-logind[1491]: Watching system buttons on /dev/input/event2 (Power Button) Jun 21 05:28:03.318240 coreos-metadata[1562]: Jun 21 05:28:03.316 INFO Fetch successful Jun 21 05:28:03.335246 unknown[1562]: wrote ssh authorized keys file for user: core Jun 21 05:28:03.385145 update-ssh-keys[1622]: Updated "/home/core/.ssh/authorized_keys" Jun 21 05:28:03.385732 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 21 05:28:03.388148 systemd[1]: Finished sshkeys.service. Jun 21 05:28:03.398267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:28:03.490285 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:28:03.490600 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 21 05:28:03.491407 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:28:03.496380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:28:03.501463 sshd_keygen[1520]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 05:28:03.500224 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:28:03.580187 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 05:28:03.585511 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 05:28:03.590682 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:28:03.613497 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 05:28:03.613875 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 05:28:03.618685 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 21 05:28:03.622232 tar[1499]: linux-amd64/README.md Jun 21 05:28:03.655630 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 21 05:28:03.656796 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 05:28:03.661608 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 05:28:03.665476 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 21 05:28:03.665756 systemd[1]: Reached target getty.target - Login Prompts. Jun 21 05:28:03.823808 systemd-networkd[1454]: eth1: Gained IPv6LL Jun 21 05:28:03.825266 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Jun 21 05:28:03.828723 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 05:28:03.829569 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 05:28:03.832535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:03.835435 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 05:28:03.879944 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 21 05:28:04.207339 systemd-networkd[1454]: eth0: Gained IPv6LL Jun 21 05:28:04.208804 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Jun 21 05:28:04.809886 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 05:28:04.813481 systemd[1]: Started sshd@0-143.198.235.111:22-139.178.68.195:55448.service - OpenSSH per-connection server daemon (139.178.68.195:55448). Jun 21 05:28:04.925349 sshd[1668]: Accepted publickey for core from 139.178.68.195 port 55448 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:04.928593 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:04.941225 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 05:28:04.946474 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 21 05:28:04.971870 systemd-logind[1491]: New session 1 of user core. Jun 21 05:28:04.996438 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 21 05:28:05.002090 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 21 05:28:05.023846 (systemd)[1672]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 05:28:05.030661 systemd-logind[1491]: New session c1 of user core. 
Jun 21 05:28:05.224904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:05.226049 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 21 05:28:05.237051 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 05:28:05.238520 systemd[1672]: Queued start job for default target default.target. Jun 21 05:28:05.239852 systemd[1672]: Created slice app.slice - User Application Slice. Jun 21 05:28:05.239879 systemd[1672]: Reached target paths.target - Paths. Jun 21 05:28:05.239923 systemd[1672]: Reached target timers.target - Timers. Jun 21 05:28:05.243376 systemd[1672]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 05:28:05.267059 systemd[1672]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 05:28:05.267208 systemd[1672]: Reached target sockets.target - Sockets. Jun 21 05:28:05.267263 systemd[1672]: Reached target basic.target - Basic System. Jun 21 05:28:05.267304 systemd[1672]: Reached target default.target - Main User Target. Jun 21 05:28:05.267338 systemd[1672]: Startup finished in 218ms. Jun 21 05:28:05.269931 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 05:28:05.277402 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 05:28:05.277927 systemd[1]: Startup finished in 3.951s (kernel) + 7.070s (initrd) + 7.592s (userspace) = 18.614s. Jun 21 05:28:05.362702 systemd[1]: Started sshd@1-143.198.235.111:22-139.178.68.195:55454.service - OpenSSH per-connection server daemon (139.178.68.195:55454). Jun 21 05:28:05.436892 sshd[1693]: Accepted publickey for core from 139.178.68.195 port 55454 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:05.437788 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:05.446260 systemd-logind[1491]: New session 2 of user core. Jun 21 05:28:05.450453 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 21 05:28:05.519711 sshd[1699]: Connection closed by 139.178.68.195 port 55454 Jun 21 05:28:05.519515 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:05.531352 systemd[1]: sshd@1-143.198.235.111:22-139.178.68.195:55454.service: Deactivated successfully. Jun 21 05:28:05.534415 systemd[1]: session-2.scope: Deactivated successfully. Jun 21 05:28:05.537142 systemd-logind[1491]: Session 2 logged out. Waiting for processes to exit. Jun 21 05:28:05.540361 systemd-logind[1491]: Removed session 2. Jun 21 05:28:05.543499 systemd[1]: Started sshd@2-143.198.235.111:22-139.178.68.195:55456.service - OpenSSH per-connection server daemon (139.178.68.195:55456). Jun 21 05:28:05.603556 sshd[1705]: Accepted publickey for core from 139.178.68.195 port 55456 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:05.605300 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:05.615676 systemd-logind[1491]: New session 3 of user core. Jun 21 05:28:05.628521 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 21 05:28:05.690828 sshd[1707]: Connection closed by 139.178.68.195 port 55456 Jun 21 05:28:05.690712 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:05.706035 systemd[1]: sshd@2-143.198.235.111:22-139.178.68.195:55456.service: Deactivated successfully. 
Jun 21 05:28:05.709529 systemd[1]: session-3.scope: Deactivated successfully. Jun 21 05:28:05.710945 systemd-logind[1491]: Session 3 logged out. Waiting for processes to exit. Jun 21 05:28:05.716458 systemd[1]: Started sshd@3-143.198.235.111:22-139.178.68.195:55466.service - OpenSSH per-connection server daemon (139.178.68.195:55466). Jun 21 05:28:05.717702 systemd-logind[1491]: Removed session 3. Jun 21 05:28:05.781009 sshd[1713]: Accepted publickey for core from 139.178.68.195 port 55466 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:05.782834 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:05.790170 systemd-logind[1491]: New session 4 of user core. Jun 21 05:28:05.797945 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 05:28:05.867988 sshd[1715]: Connection closed by 139.178.68.195 port 55466 Jun 21 05:28:05.869165 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:05.884025 systemd[1]: sshd@3-143.198.235.111:22-139.178.68.195:55466.service: Deactivated successfully. Jun 21 05:28:05.888103 systemd[1]: session-4.scope: Deactivated successfully. Jun 21 05:28:05.889883 systemd-logind[1491]: Session 4 logged out. Waiting for processes to exit. Jun 21 05:28:05.896719 systemd-logind[1491]: Removed session 4. Jun 21 05:28:05.898834 systemd[1]: Started sshd@4-143.198.235.111:22-139.178.68.195:55480.service - OpenSSH per-connection server daemon (139.178.68.195:55480). Jun 21 05:28:05.978723 sshd[1721]: Accepted publickey for core from 139.178.68.195 port 55480 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:05.981403 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:05.994367 systemd-logind[1491]: New session 5 of user core. Jun 21 05:28:06.001702 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 21 05:28:06.060244 kubelet[1683]: E0621 05:28:06.060026 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 05:28:06.062902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 05:28:06.063194 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 05:28:06.063662 systemd[1]: kubelet.service: Consumed 1.388s CPU time, 264.5M memory peak. Jun 21 05:28:06.077189 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 05:28:06.077674 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:28:06.091566 sudo[1725]: pam_unix(sudo:session): session closed for user root Jun 21 05:28:06.095734 sshd[1724]: Connection closed by 139.178.68.195 port 55480 Jun 21 05:28:06.096697 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:06.111991 systemd[1]: sshd@4-143.198.235.111:22-139.178.68.195:55480.service: Deactivated successfully. Jun 21 05:28:06.115267 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 05:28:06.116622 systemd-logind[1491]: Session 5 logged out. Waiting for processes to exit. 
Jun 21 05:28:06.122022 systemd[1]: Started sshd@5-143.198.235.111:22-139.178.68.195:55488.service - OpenSSH per-connection server daemon (139.178.68.195:55488). Jun 21 05:28:06.123389 systemd-logind[1491]: Removed session 5. Jun 21 05:28:06.189301 sshd[1732]: Accepted publickey for core from 139.178.68.195 port 55488 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:06.191563 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:06.199050 systemd-logind[1491]: New session 6 of user core. Jun 21 05:28:06.205427 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 21 05:28:06.265506 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 05:28:06.265887 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:28:06.289844 sudo[1736]: pam_unix(sudo:session): session closed for user root Jun 21 05:28:06.298563 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 05:28:06.298991 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:28:06.312501 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 05:28:06.365969 augenrules[1758]: No rules Jun 21 05:28:06.367233 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 05:28:06.367583 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 05:28:06.369057 sudo[1735]: pam_unix(sudo:session): session closed for user root Jun 21 05:28:06.373336 sshd[1734]: Connection closed by 139.178.68.195 port 55488 Jun 21 05:28:06.373201 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:06.387817 systemd[1]: sshd@5-143.198.235.111:22-139.178.68.195:55488.service: Deactivated successfully. Jun 21 05:28:06.390825 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 05:28:06.392413 systemd-logind[1491]: Session 6 logged out. Waiting for processes to exit. Jun 21 05:28:06.397715 systemd[1]: Started sshd@6-143.198.235.111:22-139.178.68.195:55500.service - OpenSSH per-connection server daemon (139.178.68.195:55500). Jun 21 05:28:06.399949 systemd-logind[1491]: Removed session 6. Jun 21 05:28:06.465114 sshd[1767]: Accepted publickey for core from 139.178.68.195 port 55500 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:28:06.468253 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:28:06.475781 systemd-logind[1491]: New session 7 of user core. Jun 21 05:28:06.486543 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 21 05:28:06.547820 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 05:28:06.548158 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:28:07.158576 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jun 21 05:28:07.174911 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 05:28:07.560231 dockerd[1788]: time="2025-06-21T05:28:07.559493835Z" level=info msg="Starting up" Jun 21 05:28:07.562398 dockerd[1788]: time="2025-06-21T05:28:07.562352867Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 05:28:07.594933 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1500506596-merged.mount: Deactivated successfully. Jun 21 05:28:07.677978 dockerd[1788]: time="2025-06-21T05:28:07.677728505Z" level=info msg="Loading containers: start." Jun 21 05:28:07.690194 kernel: Initializing XFRM netlink socket Jun 21 05:28:07.950306 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Jun 21 05:28:07.964405 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Jun 21 05:28:08.002064 systemd-networkd[1454]: docker0: Link UP Jun 21 05:28:08.002886 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Jun 21 05:28:08.006770 dockerd[1788]: time="2025-06-21T05:28:08.006662362Z" level=info msg="Loading containers: done." Jun 21 05:28:08.028828 dockerd[1788]: time="2025-06-21T05:28:08.028383079Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 05:28:08.028828 dockerd[1788]: time="2025-06-21T05:28:08.028499985Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 05:28:08.028828 dockerd[1788]: time="2025-06-21T05:28:08.028640288Z" level=info msg="Initializing buildkit" Jun 21 05:28:08.054853 dockerd[1788]: time="2025-06-21T05:28:08.054808459Z" level=info msg="Completed buildkit initialization" Jun 21 05:28:08.064277 dockerd[1788]: time="2025-06-21T05:28:08.064206540Z" level=info msg="Daemon has completed initialization" Jun 21 05:28:08.065029 dockerd[1788]: time="2025-06-21T05:28:08.064508208Z" level=info msg="API listen on /run/docker.sock" Jun 21 05:28:08.064659 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 05:28:08.592918 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck977641859-merged.mount: Deactivated successfully. Jun 21 05:28:09.035598 containerd[1532]: time="2025-06-21T05:28:09.035256565Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 21 05:28:09.584020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810389778.mount: Deactivated successfully. 
Jun 21 05:28:10.858157 containerd[1532]: time="2025-06-21T05:28:10.857207988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:10.859589 containerd[1532]: time="2025-06-21T05:28:10.859513104Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jun 21 05:28:10.860297 containerd[1532]: time="2025-06-21T05:28:10.860264471Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:10.863653 containerd[1532]: time="2025-06-21T05:28:10.863607243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:10.866088 containerd[1532]: time="2025-06-21T05:28:10.865821508Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.83030594s" Jun 21 05:28:10.866088 containerd[1532]: time="2025-06-21T05:28:10.865878490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 21 05:28:10.866937 containerd[1532]: time="2025-06-21T05:28:10.866779718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 21 05:28:12.382426 containerd[1532]: time="2025-06-21T05:28:12.382350103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:12.383527 containerd[1532]: time="2025-06-21T05:28:12.383473557Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jun 21 05:28:12.384047 containerd[1532]: time="2025-06-21T05:28:12.384014996Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:12.387719 containerd[1532]: time="2025-06-21T05:28:12.387667924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:12.388615 containerd[1532]: time="2025-06-21T05:28:12.388583131Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.52148756s" Jun 21 05:28:12.388751 containerd[1532]: time="2025-06-21T05:28:12.388721693Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 21 05:28:12.389585 
containerd[1532]: time="2025-06-21T05:28:12.389550057Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 21 05:28:13.676806 containerd[1532]: time="2025-06-21T05:28:13.675405779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:13.676806 containerd[1532]: time="2025-06-21T05:28:13.676655888Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jun 21 05:28:13.676806 containerd[1532]: time="2025-06-21T05:28:13.676724757Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:13.680425 containerd[1532]: time="2025-06-21T05:28:13.680369141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:13.682063 containerd[1532]: time="2025-06-21T05:28:13.682009676Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.292416099s" Jun 21 05:28:13.682291 containerd[1532]: time="2025-06-21T05:28:13.682271035Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 21 05:28:13.683418 containerd[1532]: time="2025-06-21T05:28:13.683373258Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 21 05:28:14.756812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2009024272.mount: Deactivated successfully. 
Jun 21 05:28:15.353426 containerd[1532]: time="2025-06-21T05:28:15.353340432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:15.354831 containerd[1532]: time="2025-06-21T05:28:15.354678924Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jun 21 05:28:15.355446 containerd[1532]: time="2025-06-21T05:28:15.355396773Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:15.357857 containerd[1532]: time="2025-06-21T05:28:15.357689969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:15.360633 containerd[1532]: time="2025-06-21T05:28:15.360510809Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.677059224s" Jun 21 05:28:15.362577 containerd[1532]: time="2025-06-21T05:28:15.360850510Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 21 05:28:15.362752 containerd[1532]: time="2025-06-21T05:28:15.362580712Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 21 05:28:15.364550 systemd-resolved[1401]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jun 21 05:28:15.888435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165049586.mount: Deactivated successfully. Jun 21 05:28:16.070088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 21 05:28:16.074440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:16.274629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:16.290521 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 05:28:16.364841 kubelet[2087]: E0621 05:28:16.364787 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 05:28:16.370991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 05:28:16.371167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 05:28:16.371538 systemd[1]: kubelet.service: Consumed 212ms CPU time, 110.5M memory peak. 
Jun 21 05:28:16.909959 containerd[1532]: time="2025-06-21T05:28:16.909883816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:16.913351 containerd[1532]: time="2025-06-21T05:28:16.910173718Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jun 21 05:28:16.913351 containerd[1532]: time="2025-06-21T05:28:16.912056433Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:16.916727 containerd[1532]: time="2025-06-21T05:28:16.916666537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:16.918163 containerd[1532]: time="2025-06-21T05:28:16.917504378Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.554833919s" Jun 21 05:28:16.918480 containerd[1532]: time="2025-06-21T05:28:16.918434607Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 21 05:28:16.922029 containerd[1532]: time="2025-06-21T05:28:16.919150180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 05:28:17.353748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494768357.mount: Deactivated successfully. 
Jun 21 05:28:17.360315 containerd[1532]: time="2025-06-21T05:28:17.359375266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:28:17.361010 containerd[1532]: time="2025-06-21T05:28:17.360963634Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 21 05:28:17.362215 containerd[1532]: time="2025-06-21T05:28:17.362150565Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:28:17.364514 containerd[1532]: time="2025-06-21T05:28:17.364474789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:28:17.365405 containerd[1532]: time="2025-06-21T05:28:17.365368791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 446.18207ms" Jun 21 05:28:17.365405 containerd[1532]: time="2025-06-21T05:28:17.365404427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 05:28:17.366563 containerd[1532]: time="2025-06-21T05:28:17.366489545Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 21 05:28:17.921555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463143633.mount: Deactivated successfully. Jun 21 05:28:18.415350 systemd-resolved[1401]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jun 21 05:28:19.997256 containerd[1532]: time="2025-06-21T05:28:19.995041901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:19.997256 containerd[1532]: time="2025-06-21T05:28:19.996343260Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jun 21 05:28:19.999152 containerd[1532]: time="2025-06-21T05:28:19.998361886Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:20.072858 containerd[1532]: time="2025-06-21T05:28:20.072785393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:20.075721 containerd[1532]: time="2025-06-21T05:28:20.075609024Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.709080753s" Jun 21 05:28:20.075721 containerd[1532]: time="2025-06-21T05:28:20.075672465Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 21 05:28:23.453499 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:23.453673 systemd[1]: kubelet.service: Consumed 212ms CPU time, 110.5M memory peak. Jun 21 05:28:23.456458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:23.501043 systemd[1]: Reload requested from client PID 2219 ('systemctl') (unit session-7.scope)... Jun 21 05:28:23.501343 systemd[1]: Reloading... Jun 21 05:28:23.679168 zram_generator::config[2262]: No configuration found. Jun 21 05:28:23.814839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:28:24.001741 systemd[1]: Reloading finished in 499 ms. Jun 21 05:28:24.067041 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 05:28:24.067216 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 05:28:24.067793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:24.067868 systemd[1]: kubelet.service: Consumed 144ms CPU time, 97.7M memory peak. Jun 21 05:28:24.070373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:24.250323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:24.264774 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 05:28:24.335385 kubelet[2316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:28:24.335385 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jun 21 05:28:24.335385 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:28:24.335969 kubelet[2316]: I0621 05:28:24.335534 2316 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 05:28:24.703292 kubelet[2316]: I0621 05:28:24.702764 2316 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 21 05:28:24.703292 kubelet[2316]: I0621 05:28:24.702807 2316 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 05:28:24.703882 kubelet[2316]: I0621 05:28:24.703846 2316 server.go:954] "Client rotation is on, will bootstrap in background" Jun 21 05:28:24.738157 kubelet[2316]: E0621 05:28:24.738024 2316 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.198.235.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.235.111:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:24.748730 kubelet[2316]: I0621 05:28:24.748655 2316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 05:28:24.773787 kubelet[2316]: I0621 05:28:24.773721 2316 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 05:28:24.779886 kubelet[2316]: I0621 05:28:24.779838 2316 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 05:28:24.783136 kubelet[2316]: I0621 05:28:24.783006 2316 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 05:28:24.783413 kubelet[2316]: I0621 05:28:24.783109 2316 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.0-e-bb84d467cd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 05:28:24.783543 kubelet[2316]: I0621 05:28:24.783424 2316 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 05:28:24.783543 kubelet[2316]: I0621 05:28:24.783439 2316 container_manager_linux.go:304] "Creating device plugin manager" Jun 21 05:28:24.787282 kubelet[2316]: I0621 05:28:24.787175 2316 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:28:24.791723 kubelet[2316]: I0621 05:28:24.791640 2316 kubelet.go:446] "Attempting to sync node with API server" Jun 21 05:28:24.792421 kubelet[2316]: I0621 05:28:24.791984 2316 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 05:28:24.792421 kubelet[2316]: I0621 05:28:24.792047 2316 kubelet.go:352] "Adding apiserver pod source" Jun 21 05:28:24.792421 kubelet[2316]: I0621 05:28:24.792071 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 05:28:24.801382 kubelet[2316]: W0621 05:28:24.800594 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.235.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-e-bb84d467cd&limit=500&resourceVersion=0": dial tcp 143.198.235.111:6443: connect: connection refused Jun 21 05:28:24.801382 kubelet[2316]: E0621 05:28:24.800665 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.235.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-e-bb84d467cd&limit=500&resourceVersion=0\": dial tcp 143.198.235.111:6443: connect: connection refused" logger="UnhandledError" Jun 21 
05:28:24.801855 kubelet[2316]: W0621 05:28:24.801798 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.235.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.235.111:6443: connect: connection refused Jun 21 05:28:24.801986 kubelet[2316]: E0621 05:28:24.801868 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.235.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.235.111:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:24.803345 kubelet[2316]: I0621 05:28:24.803166 2316 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 05:28:24.806710 kubelet[2316]: I0621 05:28:24.806669 2316 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 05:28:24.808248 kubelet[2316]: W0621 05:28:24.807614 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 21 05:28:24.810465 kubelet[2316]: I0621 05:28:24.810435 2316 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 05:28:24.811203 kubelet[2316]: I0621 05:28:24.810683 2316 server.go:1287] "Started kubelet" Jun 21 05:28:24.813016 kubelet[2316]: I0621 05:28:24.812769 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 05:28:24.823419 kubelet[2316]: E0621 05:28:24.820325 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.235.111:6443/api/v1/namespaces/default/events\": dial tcp 143.198.235.111:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.0-e-bb84d467cd.184af7a7b2590f50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.0-e-bb84d467cd,UID:ci-4372.0.0-e-bb84d467cd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.0-e-bb84d467cd,},FirstTimestamp:2025-06-21 05:28:24.810639184 +0000 UTC m=+0.539796014,LastTimestamp:2025-06-21 05:28:24.810639184 +0000 UTC m=+0.539796014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.0-e-bb84d467cd,}" Jun 21 05:28:24.823419 kubelet[2316]: I0621 05:28:24.822432 2316 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 05:28:24.824560 kubelet[2316]: I0621 05:28:24.824517 2316 server.go:479] "Adding debug handlers to kubelet server" Jun 21 05:28:24.832634 kubelet[2316]: I0621 05:28:24.832454 2316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 05:28:24.835204 kubelet[2316]: I0621 05:28:24.834637 2316 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 05:28:24.835204 kubelet[2316]: I0621 05:28:24.834814 2316 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 05:28:24.835617 kubelet[2316]: E0621 05:28:24.835561 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:24.840294 kubelet[2316]: I0621 05:28:24.840243 
2316 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 05:28:24.840573 kubelet[2316]: I0621 05:28:24.840556 2316 reconciler.go:26] "Reconciler: start to sync state" Jun 21 05:28:24.846081 kubelet[2316]: I0621 05:28:24.846028 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 05:28:24.858168 kubelet[2316]: E0621 05:28:24.856991 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.235.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-e-bb84d467cd?timeout=10s\": dial tcp 143.198.235.111:6443: connect: connection refused" interval="200ms" Jun 21 05:28:24.858168 kubelet[2316]: I0621 05:28:24.857940 2316 factory.go:221] Registration of the systemd container factory successfully Jun 21 05:28:24.858168 kubelet[2316]: I0621 05:28:24.858105 2316 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 05:28:24.858447 kubelet[2316]: W0621 05:28:24.858410 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.235.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.235.111:6443: connect: connection refused Jun 21 05:28:24.858510 kubelet[2316]: E0621 05:28:24.858478 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.235.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.235.111:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:24.868156 kubelet[2316]: I0621 05:28:24.866585 2316 factory.go:221] Registration of the containerd container factory successfully Jun 21 05:28:24.868156 kubelet[2316]: I0621 05:28:24.868106 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 05:28:24.869790 kubelet[2316]: I0621 05:28:24.869741 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 05:28:24.869790 kubelet[2316]: I0621 05:28:24.869781 2316 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 21 05:28:24.869975 kubelet[2316]: I0621 05:28:24.869812 2316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
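The reflector and lease errors in this stretch all reduce to one condition: the kubelet cannot yet reach the API server at 143.198.235.111:6443 because the control-plane static pods it is about to create are not running, so every list/watch and lease request ends in "connection refused" until the apiserver container starts. The sketch below is not kubelet code; it is a minimal external probe, assuming only the Go standard library and a hypothetical helper name, that waits for that TCP endpoint to accept connections the way an operator might while watching this phase of bootstrap.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForEndpoint retries a TCP dial until the address accepts connections
// or the overall timeout expires, mirroring the retries seen in the log.
func waitForEndpoint(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up on %s: %w", addr, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForEndpoint("143.198.235.111:6443", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver endpoint is accepting TCP connections")
}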
Jun 21 05:28:24.869975 kubelet[2316]: I0621 05:28:24.869822 2316 kubelet.go:2382] "Starting kubelet main sync loop" Jun 21 05:28:24.869975 kubelet[2316]: E0621 05:28:24.869885 2316 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 05:28:24.880752 kubelet[2316]: W0621 05:28:24.880676 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.235.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.235.111:6443: connect: connection refused Jun 21 05:28:24.881055 kubelet[2316]: E0621 05:28:24.880760 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.235.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.235.111:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:24.881439 kubelet[2316]: E0621 05:28:24.881264 2316 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 05:28:24.902577 kubelet[2316]: I0621 05:28:24.902519 2316 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 05:28:24.902839 kubelet[2316]: I0621 05:28:24.902681 2316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 05:28:24.902839 kubelet[2316]: I0621 05:28:24.902702 2316 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:28:24.904679 kubelet[2316]: I0621 05:28:24.904579 2316 policy_none.go:49] "None policy: Start" Jun 21 05:28:24.904679 kubelet[2316]: I0621 05:28:24.904615 2316 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 05:28:24.904679 kubelet[2316]: I0621 05:28:24.904626 2316 state_mem.go:35] "Initializing new in-memory state store" Jun 21 05:28:24.912744 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 05:28:24.925610 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 21 05:28:24.931754 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 21 05:28:24.936066 kubelet[2316]: E0621 05:28:24.936006 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:24.939150 kubelet[2316]: I0621 05:28:24.939067 2316 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 05:28:24.940028 kubelet[2316]: I0621 05:28:24.939579 2316 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 05:28:24.940028 kubelet[2316]: I0621 05:28:24.939602 2316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 05:28:24.940028 kubelet[2316]: I0621 05:28:24.939920 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 05:28:24.943669 kubelet[2316]: E0621 05:28:24.943623 2316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 21 05:28:24.943807 kubelet[2316]: E0621 05:28:24.943683 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:24.983178 systemd[1]: Created slice kubepods-burstable-poda26ecf9c2fca630849cbdd91a772c78a.slice - libcontainer container kubepods-burstable-poda26ecf9c2fca630849cbdd91a772c78a.slice. Jun 21 05:28:25.008758 kubelet[2316]: E0621 05:28:25.008682 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.012673 systemd[1]: Created slice kubepods-burstable-pod662a1feef6715d56169d29c996e9183c.slice - libcontainer container kubepods-burstable-pod662a1feef6715d56169d29c996e9183c.slice. Jun 21 05:28:25.022586 kubelet[2316]: E0621 05:28:25.022537 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.027175 systemd[1]: Created slice kubepods-burstable-podb35c539070d00506af9481321d983cd8.slice - libcontainer container kubepods-burstable-podb35c539070d00506af9481321d983cd8.slice. Jun 21 05:28:25.029934 kubelet[2316]: E0621 05:28:25.029873 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.041510 kubelet[2316]: I0621 05:28:25.041468 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a26ecf9c2fca630849cbdd91a772c78a-k8s-certs\") pod \"kube-apiserver-ci-4372.0.0-e-bb84d467cd\" (UID: \"a26ecf9c2fca630849cbdd91a772c78a\") " pod="kube-system/kube-apiserver-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.041724 kubelet[2316]: I0621 05:28:25.041544 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/662a1feef6715d56169d29c996e9183c-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.0-e-bb84d467cd\" (UID: \"662a1feef6715d56169d29c996e9183c\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.041724 kubelet[2316]: I0621 05:28:25.041565 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/662a1feef6715d56169d29c996e9183c-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-e-bb84d467cd\" (UID: \"662a1feef6715d56169d29c996e9183c\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.041724 kubelet[2316]: I0621 05:28:25.041588 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/662a1feef6715d56169d29c996e9183c-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-e-bb84d467cd\" (UID: \"662a1feef6715d56169d29c996e9183c\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.041724 kubelet[2316]: I0621 05:28:25.041603 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a26ecf9c2fca630849cbdd91a772c78a-ca-certs\") 
pod \"kube-apiserver-ci-4372.0.0-e-bb84d467cd\" (UID: \"a26ecf9c2fca630849cbdd91a772c78a\") " pod="kube-system/kube-apiserver-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.041724 kubelet[2316]: I0621 05:28:25.041621 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a26ecf9c2fca630849cbdd91a772c78a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-e-bb84d467cd\" (UID: \"a26ecf9c2fca630849cbdd91a772c78a\") " pod="kube-system/kube-apiserver-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.041995 kubelet[2316]: I0621 05:28:25.041636 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/662a1feef6715d56169d29c996e9183c-ca-certs\") pod \"kube-controller-manager-ci-4372.0.0-e-bb84d467cd\" (UID: \"662a1feef6715d56169d29c996e9183c\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.041995 kubelet[2316]: I0621 05:28:25.041651 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/662a1feef6715d56169d29c996e9183c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-e-bb84d467cd\" (UID: \"662a1feef6715d56169d29c996e9183c\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.041995 kubelet[2316]: I0621 05:28:25.041667 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35c539070d00506af9481321d983cd8-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-e-bb84d467cd\" (UID: \"b35c539070d00506af9481321d983cd8\") " pod="kube-system/kube-scheduler-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.041995 kubelet[2316]: I0621 05:28:25.041970 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.042396 kubelet[2316]: E0621 05:28:25.042372 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.235.111:6443/api/v1/nodes\": dial tcp 143.198.235.111:6443: connect: connection refused" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.058059 kubelet[2316]: E0621 05:28:25.058003 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.235.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-e-bb84d467cd?timeout=10s\": dial tcp 143.198.235.111:6443: connect: connection refused" interval="400ms" Jun 21 05:28:25.244024 kubelet[2316]: I0621 05:28:25.243702 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.244814 kubelet[2316]: E0621 05:28:25.244537 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.235.111:6443/api/v1/nodes\": dial tcp 143.198.235.111:6443: connect: connection refused" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.312422 kubelet[2316]: E0621 05:28:25.312317 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:25.313490 containerd[1532]: time="2025-06-21T05:28:25.313427664Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-e-bb84d467cd,Uid:a26ecf9c2fca630849cbdd91a772c78a,Namespace:kube-system,Attempt:0,}" Jun 21 05:28:25.323750 kubelet[2316]: E0621 05:28:25.323636 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:25.324287 containerd[1532]: time="2025-06-21T05:28:25.324239353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-e-bb84d467cd,Uid:662a1feef6715d56169d29c996e9183c,Namespace:kube-system,Attempt:0,}" Jun 21 05:28:25.333667 kubelet[2316]: E0621 05:28:25.333295 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:25.340933 containerd[1532]: time="2025-06-21T05:28:25.340886528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-e-bb84d467cd,Uid:b35c539070d00506af9481321d983cd8,Namespace:kube-system,Attempt:0,}" Jun 21 05:28:25.460096 kubelet[2316]: E0621 05:28:25.459945 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.235.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-e-bb84d467cd?timeout=10s\": dial tcp 143.198.235.111:6443: connect: connection refused" interval="800ms" Jun 21 05:28:25.486168 containerd[1532]: time="2025-06-21T05:28:25.485204722Z" level=info msg="connecting to shim 201dffda2a146d1dea1662cadde7961daa7e586228de7351c5c168b586929361" address="unix:///run/containerd/s/0bc80e774b63de80f8fad47ad10d0104ea323876e357077755f29723c0b1b667" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:25.486667 containerd[1532]: time="2025-06-21T05:28:25.486607158Z" level=info msg="connecting to shim 9f966516122d010992df0f4358b17f3853b5902f82454ae49099d0c51d86ee96" address="unix:///run/containerd/s/3a6d2a1c6a8267928568d03c27b3db4d15075b616748d37ba6c57dc424f1abb1" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:25.487519 containerd[1532]: time="2025-06-21T05:28:25.487473159Z" level=info msg="connecting to shim ea6a7822df89e3e2d6521ca31e1e53d74da23baccd3f261ee0a62242c9cf5061" address="unix:///run/containerd/s/32ec3e1558041caa511f6d26349b8dad6e1366589908c552b2a61a1f18ba322e" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:25.616412 systemd[1]: Started cri-containerd-201dffda2a146d1dea1662cadde7961daa7e586228de7351c5c168b586929361.scope - libcontainer container 201dffda2a146d1dea1662cadde7961daa7e586228de7351c5c168b586929361. Jun 21 05:28:25.618712 systemd[1]: Started cri-containerd-9f966516122d010992df0f4358b17f3853b5902f82454ae49099d0c51d86ee96.scope - libcontainer container 9f966516122d010992df0f4358b17f3853b5902f82454ae49099d0c51d86ee96. Jun 21 05:28:25.621423 systemd[1]: Started cri-containerd-ea6a7822df89e3e2d6521ca31e1e53d74da23baccd3f261ee0a62242c9cf5061.scope - libcontainer container ea6a7822df89e3e2d6521ca31e1e53d74da23baccd3f261ee0a62242c9cf5061. 
Jun 21 05:28:25.647722 kubelet[2316]: I0621 05:28:25.647642 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.649317 kubelet[2316]: E0621 05:28:25.649253 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.235.111:6443/api/v1/nodes\": dial tcp 143.198.235.111:6443: connect: connection refused" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:25.705616 containerd[1532]: time="2025-06-21T05:28:25.705553281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-e-bb84d467cd,Uid:662a1feef6715d56169d29c996e9183c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f966516122d010992df0f4358b17f3853b5902f82454ae49099d0c51d86ee96\"" Jun 21 05:28:25.707941 kubelet[2316]: E0621 05:28:25.707801 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:25.713690 containerd[1532]: time="2025-06-21T05:28:25.712600656Z" level=info msg="CreateContainer within sandbox \"9f966516122d010992df0f4358b17f3853b5902f82454ae49099d0c51d86ee96\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 05:28:25.740276 containerd[1532]: time="2025-06-21T05:28:25.740224032Z" level=info msg="Container 4660c79b9d5af8b5caa9ace8d60bb6f9823d0647f53ccdbba872f551be9f62c6: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:25.768156 containerd[1532]: time="2025-06-21T05:28:25.768069993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-e-bb84d467cd,Uid:a26ecf9c2fca630849cbdd91a772c78a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea6a7822df89e3e2d6521ca31e1e53d74da23baccd3f261ee0a62242c9cf5061\"" Jun 21 05:28:25.770106 containerd[1532]: time="2025-06-21T05:28:25.769899098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-e-bb84d467cd,Uid:b35c539070d00506af9481321d983cd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"201dffda2a146d1dea1662cadde7961daa7e586228de7351c5c168b586929361\"" Jun 21 05:28:25.771075 kubelet[2316]: E0621 05:28:25.771044 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:25.772977 kubelet[2316]: E0621 05:28:25.772221 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:25.775183 containerd[1532]: time="2025-06-21T05:28:25.775094094Z" level=info msg="CreateContainer within sandbox \"201dffda2a146d1dea1662cadde7961daa7e586228de7351c5c168b586929361\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 05:28:25.778048 containerd[1532]: time="2025-06-21T05:28:25.777998491Z" level=info msg="CreateContainer within sandbox \"9f966516122d010992df0f4358b17f3853b5902f82454ae49099d0c51d86ee96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4660c79b9d5af8b5caa9ace8d60bb6f9823d0647f53ccdbba872f551be9f62c6\"" Jun 21 05:28:25.786473 containerd[1532]: time="2025-06-21T05:28:25.786412138Z" level=info msg="Container dcccb7cf976ec1dadbd33ee9159dfd0c9d685115abdba7b6c00400e1b762cbd4: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:25.793193 
containerd[1532]: time="2025-06-21T05:28:25.793094868Z" level=info msg="CreateContainer within sandbox \"ea6a7822df89e3e2d6521ca31e1e53d74da23baccd3f261ee0a62242c9cf5061\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 05:28:25.793895 containerd[1532]: time="2025-06-21T05:28:25.793843998Z" level=info msg="StartContainer for \"4660c79b9d5af8b5caa9ace8d60bb6f9823d0647f53ccdbba872f551be9f62c6\"" Jun 21 05:28:25.796344 containerd[1532]: time="2025-06-21T05:28:25.796283785Z" level=info msg="connecting to shim 4660c79b9d5af8b5caa9ace8d60bb6f9823d0647f53ccdbba872f551be9f62c6" address="unix:///run/containerd/s/3a6d2a1c6a8267928568d03c27b3db4d15075b616748d37ba6c57dc424f1abb1" protocol=ttrpc version=3 Jun 21 05:28:25.803756 containerd[1532]: time="2025-06-21T05:28:25.803691649Z" level=info msg="CreateContainer within sandbox \"201dffda2a146d1dea1662cadde7961daa7e586228de7351c5c168b586929361\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dcccb7cf976ec1dadbd33ee9159dfd0c9d685115abdba7b6c00400e1b762cbd4\"" Jun 21 05:28:25.806149 containerd[1532]: time="2025-06-21T05:28:25.804647349Z" level=info msg="Container 285d30e79185a93f6be44421cad878e9c7aee770bd04bcbd9f36bd509ff70271: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:25.806149 containerd[1532]: time="2025-06-21T05:28:25.804777410Z" level=info msg="StartContainer for \"dcccb7cf976ec1dadbd33ee9159dfd0c9d685115abdba7b6c00400e1b762cbd4\"" Jun 21 05:28:25.806397 containerd[1532]: time="2025-06-21T05:28:25.806365446Z" level=info msg="connecting to shim dcccb7cf976ec1dadbd33ee9159dfd0c9d685115abdba7b6c00400e1b762cbd4" address="unix:///run/containerd/s/0bc80e774b63de80f8fad47ad10d0104ea323876e357077755f29723c0b1b667" protocol=ttrpc version=3 Jun 21 05:28:25.817979 kubelet[2316]: W0621 05:28:25.817869 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.235.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.235.111:6443: connect: connection refused Jun 21 05:28:25.817979 kubelet[2316]: E0621 05:28:25.817973 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.235.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.235.111:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:25.822473 containerd[1532]: time="2025-06-21T05:28:25.822291549Z" level=info msg="CreateContainer within sandbox \"ea6a7822df89e3e2d6521ca31e1e53d74da23baccd3f261ee0a62242c9cf5061\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"285d30e79185a93f6be44421cad878e9c7aee770bd04bcbd9f36bd509ff70271\"" Jun 21 05:28:25.823517 containerd[1532]: time="2025-06-21T05:28:25.823465705Z" level=info msg="StartContainer for \"285d30e79185a93f6be44421cad878e9c7aee770bd04bcbd9f36bd509ff70271\"" Jun 21 05:28:25.829931 containerd[1532]: time="2025-06-21T05:28:25.829268356Z" level=info msg="connecting to shim 285d30e79185a93f6be44421cad878e9c7aee770bd04bcbd9f36bd509ff70271" address="unix:///run/containerd/s/32ec3e1558041caa511f6d26349b8dad6e1366589908c552b2a61a1f18ba322e" protocol=ttrpc version=3 Jun 21 05:28:25.837927 kubelet[2316]: W0621 05:28:25.837498 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://143.198.235.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-e-bb84d467cd&limit=500&resourceVersion=0": dial tcp 143.198.235.111:6443: connect: connection refused Jun 21 05:28:25.837927 kubelet[2316]: E0621 05:28:25.837596 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.235.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-e-bb84d467cd&limit=500&resourceVersion=0\": dial tcp 143.198.235.111:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:25.841484 systemd[1]: Started cri-containerd-4660c79b9d5af8b5caa9ace8d60bb6f9823d0647f53ccdbba872f551be9f62c6.scope - libcontainer container 4660c79b9d5af8b5caa9ace8d60bb6f9823d0647f53ccdbba872f551be9f62c6. Jun 21 05:28:25.871443 systemd[1]: Started cri-containerd-dcccb7cf976ec1dadbd33ee9159dfd0c9d685115abdba7b6c00400e1b762cbd4.scope - libcontainer container dcccb7cf976ec1dadbd33ee9159dfd0c9d685115abdba7b6c00400e1b762cbd4. Jun 21 05:28:25.897005 systemd[1]: Started cri-containerd-285d30e79185a93f6be44421cad878e9c7aee770bd04bcbd9f36bd509ff70271.scope - libcontainer container 285d30e79185a93f6be44421cad878e9c7aee770bd04bcbd9f36bd509ff70271. Jun 21 05:28:25.954358 kubelet[2316]: W0621 05:28:25.954205 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.235.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.235.111:6443: connect: connection refused Jun 21 05:28:25.954635 kubelet[2316]: E0621 05:28:25.954455 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.235.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.235.111:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:26.011842 containerd[1532]: time="2025-06-21T05:28:26.011767752Z" level=info msg="StartContainer for \"285d30e79185a93f6be44421cad878e9c7aee770bd04bcbd9f36bd509ff70271\" returns successfully" Jun 21 05:28:26.014918 containerd[1532]: time="2025-06-21T05:28:26.014770720Z" level=info msg="StartContainer for \"4660c79b9d5af8b5caa9ace8d60bb6f9823d0647f53ccdbba872f551be9f62c6\" returns successfully" Jun 21 05:28:26.014918 containerd[1532]: time="2025-06-21T05:28:26.014846516Z" level=info msg="StartContainer for \"dcccb7cf976ec1dadbd33ee9159dfd0c9d685115abdba7b6c00400e1b762cbd4\" returns successfully" Jun 21 05:28:26.042742 kubelet[2316]: W0621 05:28:26.042603 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.235.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.235.111:6443: connect: connection refused Jun 21 05:28:26.043046 kubelet[2316]: E0621 05:28:26.042719 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.235.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.235.111:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:28:26.451828 kubelet[2316]: I0621 05:28:26.451460 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:26.922950 kubelet[2316]: E0621 05:28:26.922242 
2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:26.922950 kubelet[2316]: E0621 05:28:26.922429 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:26.931742 kubelet[2316]: E0621 05:28:26.931293 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:26.931742 kubelet[2316]: E0621 05:28:26.931495 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:26.937317 kubelet[2316]: E0621 05:28:26.937281 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:26.937701 kubelet[2316]: E0621 05:28:26.937682 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:27.941919 kubelet[2316]: E0621 05:28:27.941480 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:27.941919 kubelet[2316]: E0621 05:28:27.941625 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:27.942874 kubelet[2316]: E0621 05:28:27.942849 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:27.944150 kubelet[2316]: E0621 05:28:27.943448 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:27.944150 kubelet[2316]: E0621 05:28:27.943652 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:27.944150 kubelet[2316]: E0621 05:28:27.943695 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:28.028359 kubelet[2316]: E0621 05:28:28.028290 2316 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:28.182404 kubelet[2316]: I0621 05:28:28.182236 2316 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:28.182404 kubelet[2316]: E0621 05:28:28.182307 2316 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.0.0-e-bb84d467cd\": node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:28.202391 
kubelet[2316]: E0621 05:28:28.202257 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:28.302900 kubelet[2316]: E0621 05:28:28.302852 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:28.403898 kubelet[2316]: E0621 05:28:28.403825 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:28.504195 kubelet[2316]: E0621 05:28:28.504031 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:28.604998 kubelet[2316]: E0621 05:28:28.604938 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:28.705860 kubelet[2316]: E0621 05:28:28.705746 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:28.806307 kubelet[2316]: E0621 05:28:28.806101 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:28.906735 kubelet[2316]: E0621 05:28:28.906675 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:28.944229 kubelet[2316]: E0621 05:28:28.944176 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:28.946537 kubelet[2316]: E0621 05:28:28.944345 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:28.946537 kubelet[2316]: E0621 05:28:28.945775 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:28.946537 kubelet[2316]: E0621 05:28:28.945903 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:29.007886 kubelet[2316]: E0621 05:28:29.007807 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:29.036921 kubelet[2316]: I0621 05:28:29.036858 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:29.060379 kubelet[2316]: W0621 05:28:29.060209 2316 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:29.060561 kubelet[2316]: I0621 05:28:29.060512 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:29.070048 kubelet[2316]: W0621 05:28:29.069971 2316 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:29.070546 kubelet[2316]: I0621 05:28:29.070474 2316 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:29.079981 kubelet[2316]: W0621 05:28:29.079924 2316 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:29.803235 kubelet[2316]: I0621 05:28:29.803193 2316 apiserver.go:52] "Watching apiserver" Jun 21 05:28:29.806046 kubelet[2316]: E0621 05:28:29.805997 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:29.840910 kubelet[2316]: I0621 05:28:29.840851 2316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 05:28:29.945065 kubelet[2316]: E0621 05:28:29.945031 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:29.945994 kubelet[2316]: E0621 05:28:29.945412 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:30.237338 systemd[1]: Reload requested from client PID 2587 ('systemctl') (unit session-7.scope)... Jun 21 05:28:30.237795 systemd[1]: Reloading... Jun 21 05:28:30.403161 zram_generator::config[2637]: No configuration found. Jun 21 05:28:30.554625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:28:30.713883 systemd[1]: Reloading finished in 475 ms. Jun 21 05:28:30.749415 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:30.764751 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 05:28:30.765031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:30.765105 systemd[1]: kubelet.service: Consumed 991ms CPU time, 126.1M memory peak. Jun 21 05:28:30.767753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:28:30.942653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:28:30.956615 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 05:28:31.028149 kubelet[2681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:28:31.028149 kubelet[2681]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 21 05:28:31.028149 kubelet[2681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
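The deprecation warnings repeated here for the restarted kubelet (PID 2681), like those for PID 2316 earlier, all point at the same remedy: move --container-runtime-endpoint and --volume-plugin-dir out of command-line flags and into the file passed via --config. As a hedged sketch only, the Go program below prints a minimal KubeletConfiguration whose values are taken from this log (systemd cgroup driver, /etc/kubernetes/manifests static pod path, the flexvolume directory probed at 05:28:24.807614); the containerd socket path is an assumption, and field names should be checked against the kubelet-config-file documentation linked in the warnings.

package main

import "fmt"

// Minimal KubeletConfiguration covering the deprecated flags named in the
// warnings above. Field names follow kubelet.config.k8s.io/v1beta1; the
// containerRuntimeEndpoint value is an assumption (standard containerd
// socket), not something stated in this log.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
`

func main() {
	// Print rather than install, so the sketch stays side-effect free.
	fmt.Print(kubeletConfig)
}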
Jun 21 05:28:31.028149 kubelet[2681]: I0621 05:28:31.027518 2681 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 05:28:31.040251 kubelet[2681]: I0621 05:28:31.038995 2681 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 21 05:28:31.040469 kubelet[2681]: I0621 05:28:31.040452 2681 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 05:28:31.041007 kubelet[2681]: I0621 05:28:31.040980 2681 server.go:954] "Client rotation is on, will bootstrap in background" Jun 21 05:28:31.042562 kubelet[2681]: I0621 05:28:31.042541 2681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 21 05:28:31.045701 kubelet[2681]: I0621 05:28:31.045578 2681 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 05:28:31.060155 kubelet[2681]: I0621 05:28:31.060015 2681 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 05:28:31.065143 kubelet[2681]: I0621 05:28:31.063961 2681 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 21 05:28:31.065143 kubelet[2681]: I0621 05:28:31.064293 2681 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 05:28:31.065143 kubelet[2681]: I0621 05:28:31.064321 2681 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.0-e-bb84d467cd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 05:28:31.065143 kubelet[2681]: I0621 05:28:31.064623 2681 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 05:28:31.065466 kubelet[2681]: I0621 05:28:31.064728 2681 container_manager_linux.go:304] "Creating device plugin manager" Jun 21 05:28:31.065466 kubelet[2681]: I0621 05:28:31.064793 2681 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:28:31.065466 
kubelet[2681]: I0621 05:28:31.064966 2681 kubelet.go:446] "Attempting to sync node with API server" Jun 21 05:28:31.065466 kubelet[2681]: I0621 05:28:31.064992 2681 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 05:28:31.065466 kubelet[2681]: I0621 05:28:31.065015 2681 kubelet.go:352] "Adding apiserver pod source" Jun 21 05:28:31.065466 kubelet[2681]: I0621 05:28:31.065026 2681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 05:28:31.069999 kubelet[2681]: I0621 05:28:31.069962 2681 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 05:28:31.070636 kubelet[2681]: I0621 05:28:31.070616 2681 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 05:28:31.071176 kubelet[2681]: I0621 05:28:31.071160 2681 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 05:28:31.071319 kubelet[2681]: I0621 05:28:31.071309 2681 server.go:1287] "Started kubelet" Jun 21 05:28:31.078038 kubelet[2681]: I0621 05:28:31.077145 2681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 05:28:31.078038 kubelet[2681]: I0621 05:28:31.077958 2681 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 05:28:31.081502 kubelet[2681]: I0621 05:28:31.081472 2681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 05:28:31.092644 kubelet[2681]: I0621 05:28:31.092589 2681 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 05:28:31.098587 kubelet[2681]: I0621 05:28:31.098532 2681 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 05:28:31.098969 kubelet[2681]: E0621 05:28:31.098945 2681 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-e-bb84d467cd\" not found" Jun 21 05:28:31.100806 kubelet[2681]: I0621 05:28:31.100772 2681 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 05:28:31.100982 kubelet[2681]: I0621 05:28:31.100957 2681 reconciler.go:26] "Reconciler: start to sync state" Jun 21 05:28:31.101933 kubelet[2681]: I0621 05:28:31.101872 2681 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 05:28:31.106263 kubelet[2681]: I0621 05:28:31.106238 2681 server.go:479] "Adding debug handlers to kubelet server" Jun 21 05:28:31.110368 kubelet[2681]: I0621 05:28:31.110330 2681 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 05:28:31.115344 kubelet[2681]: E0621 05:28:31.115257 2681 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 05:28:31.116849 kubelet[2681]: I0621 05:28:31.116814 2681 factory.go:221] Registration of the containerd container factory successfully Jun 21 05:28:31.117228 kubelet[2681]: I0621 05:28:31.117062 2681 factory.go:221] Registration of the systemd container factory successfully Jun 21 05:28:31.132379 kubelet[2681]: I0621 05:28:31.132269 2681 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 21 05:28:31.138148 kubelet[2681]: I0621 05:28:31.137805 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 05:28:31.138148 kubelet[2681]: I0621 05:28:31.137846 2681 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 21 05:28:31.138148 kubelet[2681]: I0621 05:28:31.138064 2681 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 21 05:28:31.138148 kubelet[2681]: I0621 05:28:31.138074 2681 kubelet.go:2382] "Starting kubelet main sync loop" Jun 21 05:28:31.142640 kubelet[2681]: E0621 05:28:31.141969 2681 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 05:28:31.186468 kubelet[2681]: I0621 05:28:31.186426 2681 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 05:28:31.186468 kubelet[2681]: I0621 05:28:31.186444 2681 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 05:28:31.186468 kubelet[2681]: I0621 05:28:31.186467 2681 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:28:31.186678 kubelet[2681]: I0621 05:28:31.186650 2681 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 05:28:31.186678 kubelet[2681]: I0621 05:28:31.186660 2681 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 05:28:31.186678 kubelet[2681]: I0621 05:28:31.186678 2681 policy_none.go:49] "None policy: Start" Jun 21 05:28:31.187764 kubelet[2681]: I0621 05:28:31.186691 2681 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 05:28:31.187764 kubelet[2681]: I0621 05:28:31.186703 2681 state_mem.go:35] "Initializing new in-memory state store" Jun 21 05:28:31.187764 kubelet[2681]: I0621 05:28:31.187076 2681 state_mem.go:75] "Updated machine memory state" Jun 21 05:28:31.202645 kubelet[2681]: I0621 05:28:31.202083 2681 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 05:28:31.203643 kubelet[2681]: I0621 05:28:31.203445 2681 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 05:28:31.204971 kubelet[2681]: I0621 05:28:31.203478 2681 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 05:28:31.205652 kubelet[2681]: I0621 05:28:31.205629 2681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 05:28:31.206802 kubelet[2681]: E0621 05:28:31.206781 2681 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 21 05:28:31.247025 kubelet[2681]: I0621 05:28:31.246949 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.247619 kubelet[2681]: I0621 05:28:31.247573 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.249781 kubelet[2681]: I0621 05:28:31.249703 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.261055 kubelet[2681]: W0621 05:28:31.261014 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:31.261996 kubelet[2681]: E0621 05:28:31.261555 2681 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.0-e-bb84d467cd\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.262810 kubelet[2681]: W0621 05:28:31.262778 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:31.263331 kubelet[2681]: E0621 05:28:31.263179 2681 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.0-e-bb84d467cd\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.264136 kubelet[2681]: W0621 05:28:31.264049 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:31.264259 kubelet[2681]: E0621 05:28:31.264239 2681 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.0-e-bb84d467cd\" already exists" pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.302671 kubelet[2681]: I0621 05:28:31.302622 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/662a1feef6715d56169d29c996e9183c-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-e-bb84d467cd\" (UID: \"662a1feef6715d56169d29c996e9183c\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.302971 kubelet[2681]: I0621 05:28:31.302898 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/662a1feef6715d56169d29c996e9183c-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-e-bb84d467cd\" (UID: \"662a1feef6715d56169d29c996e9183c\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.302971 kubelet[2681]: I0621 05:28:31.302927 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a26ecf9c2fca630849cbdd91a772c78a-k8s-certs\") pod \"kube-apiserver-ci-4372.0.0-e-bb84d467cd\" (UID: \"a26ecf9c2fca630849cbdd91a772c78a\") " pod="kube-system/kube-apiserver-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.303176 kubelet[2681]: I0621 05:28:31.303069 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/662a1feef6715d56169d29c996e9183c-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4372.0.0-e-bb84d467cd\" (UID: \"662a1feef6715d56169d29c996e9183c\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.303176 kubelet[2681]: I0621 05:28:31.303090 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a26ecf9c2fca630849cbdd91a772c78a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-e-bb84d467cd\" (UID: \"a26ecf9c2fca630849cbdd91a772c78a\") " pod="kube-system/kube-apiserver-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.303351 kubelet[2681]: I0621 05:28:31.303301 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/662a1feef6715d56169d29c996e9183c-ca-certs\") pod \"kube-controller-manager-ci-4372.0.0-e-bb84d467cd\" (UID: \"662a1feef6715d56169d29c996e9183c\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.303351 kubelet[2681]: I0621 05:28:31.303333 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/662a1feef6715d56169d29c996e9183c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-e-bb84d467cd\" (UID: \"662a1feef6715d56169d29c996e9183c\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.303498 kubelet[2681]: I0621 05:28:31.303458 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35c539070d00506af9481321d983cd8-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-e-bb84d467cd\" (UID: \"b35c539070d00506af9481321d983cd8\") " pod="kube-system/kube-scheduler-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.303498 kubelet[2681]: I0621 05:28:31.303475 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a26ecf9c2fca630849cbdd91a772c78a-ca-certs\") pod \"kube-apiserver-ci-4372.0.0-e-bb84d467cd\" (UID: \"a26ecf9c2fca630849cbdd91a772c78a\") " pod="kube-system/kube-apiserver-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.314153 kubelet[2681]: I0621 05:28:31.313997 2681 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.324469 kubelet[2681]: I0621 05:28:31.323513 2681 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.324469 kubelet[2681]: I0621 05:28:31.323734 2681 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:31.563693 kubelet[2681]: E0621 05:28:31.563182 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:31.565457 kubelet[2681]: E0621 05:28:31.565415 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:31.568289 kubelet[2681]: E0621 05:28:31.566770 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 
05:28:32.068719 kubelet[2681]: I0621 05:28:32.068586 2681 apiserver.go:52] "Watching apiserver" Jun 21 05:28:32.101599 kubelet[2681]: I0621 05:28:32.101535 2681 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 05:28:32.174186 kubelet[2681]: E0621 05:28:32.173109 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:32.174186 kubelet[2681]: E0621 05:28:32.173461 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:32.174186 kubelet[2681]: I0621 05:28:32.173656 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:32.190350 kubelet[2681]: W0621 05:28:32.190277 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 05:28:32.191621 kubelet[2681]: E0621 05:28:32.190836 2681 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.0-e-bb84d467cd\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.0-e-bb84d467cd" Jun 21 05:28:32.192091 kubelet[2681]: E0621 05:28:32.192034 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:32.254554 kubelet[2681]: I0621 05:28:32.254340 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.0-e-bb84d467cd" podStartSLOduration=3.254289279 podStartE2EDuration="3.254289279s" podCreationTimestamp="2025-06-21 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:28:32.237532235 +0000 UTC m=+1.275004685" watchObservedRunningTime="2025-06-21 05:28:32.254289279 +0000 UTC m=+1.291761720" Jun 21 05:28:32.263296 kubelet[2681]: I0621 05:28:32.263196 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.0-e-bb84d467cd" podStartSLOduration=3.2631622350000002 podStartE2EDuration="3.263162235s" podCreationTimestamp="2025-06-21 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:28:32.254871062 +0000 UTC m=+1.292343511" watchObservedRunningTime="2025-06-21 05:28:32.263162235 +0000 UTC m=+1.300634673" Jun 21 05:28:32.263785 kubelet[2681]: I0621 05:28:32.263646 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.0-e-bb84d467cd" podStartSLOduration=3.263629913 podStartE2EDuration="3.263629913s" podCreationTimestamp="2025-06-21 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:28:32.262824563 +0000 UTC m=+1.300297007" watchObservedRunningTime="2025-06-21 05:28:32.263629913 +0000 UTC m=+1.301102354" Jun 21 05:28:33.181876 kubelet[2681]: E0621 05:28:33.181828 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:33.183608 kubelet[2681]: E0621 05:28:33.183578 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:34.184248 kubelet[2681]: E0621 05:28:34.184117 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:34.958689 kubelet[2681]: I0621 05:28:34.958648 2681 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 21 05:28:34.959150 containerd[1532]: time="2025-06-21T05:28:34.959087024Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 21 05:28:34.960143 kubelet[2681]: I0621 05:28:34.959777 2681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 21 05:28:35.675468 kubelet[2681]: I0621 05:28:35.675402 2681 status_manager.go:890] "Failed to get status for pod" podUID="e172385a-fc56-4009-bad1-63230c062173" pod="kube-system/kube-proxy-wgfc5" err="pods \"kube-proxy-wgfc5\" is forbidden: User \"system:node:ci-4372.0.0-e-bb84d467cd\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372.0.0-e-bb84d467cd' and this object" Jun 21 05:28:35.681492 systemd[1]: Created slice kubepods-besteffort-pode172385a_fc56_4009_bad1_63230c062173.slice - libcontainer container kubepods-besteffort-pode172385a_fc56_4009_bad1_63230c062173.slice. Jun 21 05:28:35.831866 kubelet[2681]: I0621 05:28:35.831739 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krbm8\" (UniqueName: \"kubernetes.io/projected/e172385a-fc56-4009-bad1-63230c062173-kube-api-access-krbm8\") pod \"kube-proxy-wgfc5\" (UID: \"e172385a-fc56-4009-bad1-63230c062173\") " pod="kube-system/kube-proxy-wgfc5" Jun 21 05:28:35.831866 kubelet[2681]: I0621 05:28:35.831789 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e172385a-fc56-4009-bad1-63230c062173-lib-modules\") pod \"kube-proxy-wgfc5\" (UID: \"e172385a-fc56-4009-bad1-63230c062173\") " pod="kube-system/kube-proxy-wgfc5" Jun 21 05:28:35.831866 kubelet[2681]: I0621 05:28:35.831837 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e172385a-fc56-4009-bad1-63230c062173-kube-proxy\") pod \"kube-proxy-wgfc5\" (UID: \"e172385a-fc56-4009-bad1-63230c062173\") " pod="kube-system/kube-proxy-wgfc5" Jun 21 05:28:35.831866 kubelet[2681]: I0621 05:28:35.831853 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e172385a-fc56-4009-bad1-63230c062173-xtables-lock\") pod \"kube-proxy-wgfc5\" (UID: \"e172385a-fc56-4009-bad1-63230c062173\") " pod="kube-system/kube-proxy-wgfc5" Jun 21 05:28:35.992304 kubelet[2681]: E0621 05:28:35.991283 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:35.992743 
containerd[1532]: time="2025-06-21T05:28:35.992708940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wgfc5,Uid:e172385a-fc56-4009-bad1-63230c062173,Namespace:kube-system,Attempt:0,}" Jun 21 05:28:36.034069 containerd[1532]: time="2025-06-21T05:28:36.032890031Z" level=info msg="connecting to shim 03034b64e543da28c85d29ce8722dd226ff75dbb6a6f24015509fe2fce84d350" address="unix:///run/containerd/s/c45162066f0e485aafc301cdb9deecc7159389ab8b932aa9fd2d2fbb4643666f" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:36.098409 systemd[1]: Started cri-containerd-03034b64e543da28c85d29ce8722dd226ff75dbb6a6f24015509fe2fce84d350.scope - libcontainer container 03034b64e543da28c85d29ce8722dd226ff75dbb6a6f24015509fe2fce84d350. Jun 21 05:28:36.128628 systemd[1]: Created slice kubepods-besteffort-pod34936830_07bf_4e12_b667_1c11312e0988.slice - libcontainer container kubepods-besteffort-pod34936830_07bf_4e12_b667_1c11312e0988.slice. Jun 21 05:28:36.134632 kubelet[2681]: I0621 05:28:36.134594 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bg9t\" (UniqueName: \"kubernetes.io/projected/34936830-07bf-4e12-b667-1c11312e0988-kube-api-access-2bg9t\") pod \"tigera-operator-68f7c7984d-5vd9d\" (UID: \"34936830-07bf-4e12-b667-1c11312e0988\") " pod="tigera-operator/tigera-operator-68f7c7984d-5vd9d" Jun 21 05:28:36.134632 kubelet[2681]: I0621 05:28:36.134642 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34936830-07bf-4e12-b667-1c11312e0988-var-lib-calico\") pod \"tigera-operator-68f7c7984d-5vd9d\" (UID: \"34936830-07bf-4e12-b667-1c11312e0988\") " pod="tigera-operator/tigera-operator-68f7c7984d-5vd9d" Jun 21 05:28:36.162155 containerd[1532]: time="2025-06-21T05:28:36.162009450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wgfc5,Uid:e172385a-fc56-4009-bad1-63230c062173,Namespace:kube-system,Attempt:0,} returns sandbox id \"03034b64e543da28c85d29ce8722dd226ff75dbb6a6f24015509fe2fce84d350\"" Jun 21 05:28:36.163091 kubelet[2681]: E0621 05:28:36.163071 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:36.167079 containerd[1532]: time="2025-06-21T05:28:36.167028463Z" level=info msg="CreateContainer within sandbox \"03034b64e543da28c85d29ce8722dd226ff75dbb6a6f24015509fe2fce84d350\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 05:28:36.187847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208192462.mount: Deactivated successfully. 
Jun 21 05:28:36.189155 containerd[1532]: time="2025-06-21T05:28:36.189084801Z" level=info msg="Container 532c6ac7f8ecb8f9de741dc3bfcc1cfb91e410b218e4285f4dfe52e07f0ed804: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:36.199028 containerd[1532]: time="2025-06-21T05:28:36.198984481Z" level=info msg="CreateContainer within sandbox \"03034b64e543da28c85d29ce8722dd226ff75dbb6a6f24015509fe2fce84d350\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"532c6ac7f8ecb8f9de741dc3bfcc1cfb91e410b218e4285f4dfe52e07f0ed804\"" Jun 21 05:28:36.201305 containerd[1532]: time="2025-06-21T05:28:36.200856644Z" level=info msg="StartContainer for \"532c6ac7f8ecb8f9de741dc3bfcc1cfb91e410b218e4285f4dfe52e07f0ed804\"" Jun 21 05:28:36.210636 containerd[1532]: time="2025-06-21T05:28:36.210531484Z" level=info msg="connecting to shim 532c6ac7f8ecb8f9de741dc3bfcc1cfb91e410b218e4285f4dfe52e07f0ed804" address="unix:///run/containerd/s/c45162066f0e485aafc301cdb9deecc7159389ab8b932aa9fd2d2fbb4643666f" protocol=ttrpc version=3 Jun 21 05:28:36.237405 systemd[1]: Started cri-containerd-532c6ac7f8ecb8f9de741dc3bfcc1cfb91e410b218e4285f4dfe52e07f0ed804.scope - libcontainer container 532c6ac7f8ecb8f9de741dc3bfcc1cfb91e410b218e4285f4dfe52e07f0ed804. Jun 21 05:28:36.300888 containerd[1532]: time="2025-06-21T05:28:36.299289307Z" level=info msg="StartContainer for \"532c6ac7f8ecb8f9de741dc3bfcc1cfb91e410b218e4285f4dfe52e07f0ed804\" returns successfully" Jun 21 05:28:36.434435 containerd[1532]: time="2025-06-21T05:28:36.434380066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-5vd9d,Uid:34936830-07bf-4e12-b667-1c11312e0988,Namespace:tigera-operator,Attempt:0,}" Jun 21 05:28:36.469782 containerd[1532]: time="2025-06-21T05:28:36.469661616Z" level=info msg="connecting to shim 757d6b429adab91ed26068be0d07109569f2b9d80a1e47f60f0b1ed596869dac" address="unix:///run/containerd/s/c0035bf31e8cd1c0a3e09c136f8acdc9c918dc61fd5325fca35ef26dcd7a2b6a" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:36.509691 systemd[1]: Started cri-containerd-757d6b429adab91ed26068be0d07109569f2b9d80a1e47f60f0b1ed596869dac.scope - libcontainer container 757d6b429adab91ed26068be0d07109569f2b9d80a1e47f60f0b1ed596869dac. Jun 21 05:28:36.571994 containerd[1532]: time="2025-06-21T05:28:36.571954776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-5vd9d,Uid:34936830-07bf-4e12-b667-1c11312e0988,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"757d6b429adab91ed26068be0d07109569f2b9d80a1e47f60f0b1ed596869dac\"" Jun 21 05:28:36.574725 containerd[1532]: time="2025-06-21T05:28:36.574691322Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 21 05:28:36.576793 systemd-resolved[1401]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Jun 21 05:28:37.197096 kubelet[2681]: E0621 05:28:37.196636 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:37.210863 kubelet[2681]: I0621 05:28:37.210758 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wgfc5" podStartSLOduration=2.210730454 podStartE2EDuration="2.210730454s" podCreationTimestamp="2025-06-21 05:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:28:37.210347598 +0000 UTC m=+6.247820096" watchObservedRunningTime="2025-06-21 05:28:37.210730454 +0000 UTC m=+6.248202925" Jun 21 05:28:38.073242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3166440619.mount: Deactivated successfully. Jun 21 05:28:39.068858 systemd-resolved[1401]: Clock change detected. Flushing caches. Jun 21 05:28:39.069392 systemd-timesyncd[1422]: Contacted time server 15.204.87.223:123 (2.flatcar.pool.ntp.org). Jun 21 05:28:39.069480 systemd-timesyncd[1422]: Initial clock synchronization to Sat 2025-06-21 05:28:39.068678 UTC. Jun 21 05:28:39.182115 kubelet[2681]: E0621 05:28:39.181998 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:39.706598 containerd[1532]: time="2025-06-21T05:28:39.706512636Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:39.708733 containerd[1532]: time="2025-06-21T05:28:39.708416374Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 21 05:28:39.710354 containerd[1532]: time="2025-06-21T05:28:39.709884482Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:39.718488 containerd[1532]: time="2025-06-21T05:28:39.717712141Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:39.718488 containerd[1532]: time="2025-06-21T05:28:39.718338624Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 2.164079065s" Jun 21 05:28:39.718488 containerd[1532]: time="2025-06-21T05:28:39.718380875Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 21 05:28:39.722971 containerd[1532]: time="2025-06-21T05:28:39.722920082Z" level=info msg="CreateContainer within sandbox \"757d6b429adab91ed26068be0d07109569f2b9d80a1e47f60f0b1ed596869dac\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 21 05:28:39.734670 containerd[1532]: time="2025-06-21T05:28:39.732626521Z" level=info msg="Container 78dbea64a0937b13829503b676b71974363cce12f00c3a01c9a1643da88dfe99: CDI devices from CRI 
Config.CDIDevices: []" Jun 21 05:28:39.738779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3046969842.mount: Deactivated successfully. Jun 21 05:28:39.747885 containerd[1532]: time="2025-06-21T05:28:39.747606168Z" level=info msg="CreateContainer within sandbox \"757d6b429adab91ed26068be0d07109569f2b9d80a1e47f60f0b1ed596869dac\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"78dbea64a0937b13829503b676b71974363cce12f00c3a01c9a1643da88dfe99\"" Jun 21 05:28:39.749029 containerd[1532]: time="2025-06-21T05:28:39.748972773Z" level=info msg="StartContainer for \"78dbea64a0937b13829503b676b71974363cce12f00c3a01c9a1643da88dfe99\"" Jun 21 05:28:39.752507 containerd[1532]: time="2025-06-21T05:28:39.752448192Z" level=info msg="connecting to shim 78dbea64a0937b13829503b676b71974363cce12f00c3a01c9a1643da88dfe99" address="unix:///run/containerd/s/c0035bf31e8cd1c0a3e09c136f8acdc9c918dc61fd5325fca35ef26dcd7a2b6a" protocol=ttrpc version=3 Jun 21 05:28:39.779872 systemd[1]: Started cri-containerd-78dbea64a0937b13829503b676b71974363cce12f00c3a01c9a1643da88dfe99.scope - libcontainer container 78dbea64a0937b13829503b676b71974363cce12f00c3a01c9a1643da88dfe99. Jun 21 05:28:39.822772 containerd[1532]: time="2025-06-21T05:28:39.822579519Z" level=info msg="StartContainer for \"78dbea64a0937b13829503b676b71974363cce12f00c3a01c9a1643da88dfe99\" returns successfully" Jun 21 05:28:40.524490 kubelet[2681]: E0621 05:28:40.524312 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:40.548534 kubelet[2681]: I0621 05:28:40.548426 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-5vd9d" podStartSLOduration=3.381507928 podStartE2EDuration="5.548403613s" podCreationTimestamp="2025-06-21 05:28:35 +0000 UTC" firstStartedPulling="2025-06-21 05:28:36.573947964 +0000 UTC m=+5.611420392" lastFinishedPulling="2025-06-21 05:28:39.720175853 +0000 UTC m=+7.778316077" observedRunningTime="2025-06-21 05:28:40.198069422 +0000 UTC m=+8.256209663" watchObservedRunningTime="2025-06-21 05:28:40.548403613 +0000 UTC m=+8.606543872" Jun 21 05:28:41.191712 kubelet[2681]: E0621 05:28:41.191656 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:42.231337 kubelet[2681]: E0621 05:28:42.230838 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:43.195772 kubelet[2681]: E0621 05:28:43.195550 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:43.215535 kubelet[2681]: E0621 05:28:43.215490 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:44.197856 kubelet[2681]: E0621 05:28:44.197677 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:46.897919 
sudo[1770]: pam_unix(sudo:session): session closed for user root Jun 21 05:28:46.904607 sshd[1769]: Connection closed by 139.178.68.195 port 55500 Jun 21 05:28:46.906719 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Jun 21 05:28:46.917824 systemd-logind[1491]: Session 7 logged out. Waiting for processes to exit. Jun 21 05:28:46.918148 systemd[1]: sshd@6-143.198.235.111:22-139.178.68.195:55500.service: Deactivated successfully. Jun 21 05:28:46.921988 systemd[1]: session-7.scope: Deactivated successfully. Jun 21 05:28:46.922772 systemd[1]: session-7.scope: Consumed 5.978s CPU time, 158.4M memory peak. Jun 21 05:28:46.928218 systemd-logind[1491]: Removed session 7. Jun 21 05:28:47.642636 update_engine[1493]: I20250621 05:28:47.642526 1493 update_attempter.cc:509] Updating boot flags... Jun 21 05:28:50.497603 systemd[1]: Created slice kubepods-besteffort-pod488c1d6b_6caa_4adf_89fa_f7200395de5b.slice - libcontainer container kubepods-besteffort-pod488c1d6b_6caa_4adf_89fa_f7200395de5b.slice. Jun 21 05:28:50.609756 kubelet[2681]: I0621 05:28:50.609694 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/488c1d6b-6caa-4adf-89fa-f7200395de5b-typha-certs\") pod \"calico-typha-7c8b6965fb-8mvl5\" (UID: \"488c1d6b-6caa-4adf-89fa-f7200395de5b\") " pod="calico-system/calico-typha-7c8b6965fb-8mvl5" Jun 21 05:28:50.609756 kubelet[2681]: I0621 05:28:50.609752 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl2fm\" (UniqueName: \"kubernetes.io/projected/488c1d6b-6caa-4adf-89fa-f7200395de5b-kube-api-access-gl2fm\") pod \"calico-typha-7c8b6965fb-8mvl5\" (UID: \"488c1d6b-6caa-4adf-89fa-f7200395de5b\") " pod="calico-system/calico-typha-7c8b6965fb-8mvl5" Jun 21 05:28:50.610264 kubelet[2681]: I0621 05:28:50.609777 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/488c1d6b-6caa-4adf-89fa-f7200395de5b-tigera-ca-bundle\") pod \"calico-typha-7c8b6965fb-8mvl5\" (UID: \"488c1d6b-6caa-4adf-89fa-f7200395de5b\") " pod="calico-system/calico-typha-7c8b6965fb-8mvl5" Jun 21 05:28:50.802018 kubelet[2681]: E0621 05:28:50.801967 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:50.803176 containerd[1532]: time="2025-06-21T05:28:50.803057802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c8b6965fb-8mvl5,Uid:488c1d6b-6caa-4adf-89fa-f7200395de5b,Namespace:calico-system,Attempt:0,}" Jun 21 05:28:50.841130 containerd[1532]: time="2025-06-21T05:28:50.840990688Z" level=info msg="connecting to shim 6c487d4f9b331d3f81107650993279830c3fd639c29f834e2437f58768858a50" address="unix:///run/containerd/s/39b1f730765a6619fc7f035af7bd947f9b36301ec5b269aae8d176e55cc86551" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:50.901309 systemd[1]: Started cri-containerd-6c487d4f9b331d3f81107650993279830c3fd639c29f834e2437f58768858a50.scope - libcontainer container 6c487d4f9b331d3f81107650993279830c3fd639c29f834e2437f58768858a50. Jun 21 05:28:50.925140 systemd[1]: Created slice kubepods-besteffort-pod172361f1_ced1_472a_a9b9_1ad0f9018ad9.slice - libcontainer container kubepods-besteffort-pod172361f1_ced1_472a_a9b9_1ad0f9018ad9.slice. 
Jun 21 05:28:51.015824 kubelet[2681]: I0621 05:28:51.015739 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/172361f1-ced1-472a-a9b9-1ad0f9018ad9-cni-log-dir\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.016123 kubelet[2681]: I0621 05:28:51.015982 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/172361f1-ced1-472a-a9b9-1ad0f9018ad9-var-lib-calico\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.016467 kubelet[2681]: I0621 05:28:51.016220 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/172361f1-ced1-472a-a9b9-1ad0f9018ad9-var-run-calico\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.016467 kubelet[2681]: I0621 05:28:51.016261 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/172361f1-ced1-472a-a9b9-1ad0f9018ad9-flexvol-driver-host\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.016467 kubelet[2681]: I0621 05:28:51.016278 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/172361f1-ced1-472a-a9b9-1ad0f9018ad9-tigera-ca-bundle\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.016467 kubelet[2681]: I0621 05:28:51.016298 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/172361f1-ced1-472a-a9b9-1ad0f9018ad9-node-certs\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.016467 kubelet[2681]: I0621 05:28:51.016316 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7scr\" (UniqueName: \"kubernetes.io/projected/172361f1-ced1-472a-a9b9-1ad0f9018ad9-kube-api-access-p7scr\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.016633 kubelet[2681]: I0621 05:28:51.016338 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/172361f1-ced1-472a-a9b9-1ad0f9018ad9-cni-bin-dir\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.016633 kubelet[2681]: I0621 05:28:51.016355 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/172361f1-ced1-472a-a9b9-1ad0f9018ad9-policysync\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.016633 kubelet[2681]: I0621 05:28:51.016371 2681 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/172361f1-ced1-472a-a9b9-1ad0f9018ad9-cni-net-dir\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.016633 kubelet[2681]: I0621 05:28:51.016390 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/172361f1-ced1-472a-a9b9-1ad0f9018ad9-lib-modules\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.016633 kubelet[2681]: I0621 05:28:51.016408 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/172361f1-ced1-472a-a9b9-1ad0f9018ad9-xtables-lock\") pod \"calico-node-kdwc5\" (UID: \"172361f1-ced1-472a-a9b9-1ad0f9018ad9\") " pod="calico-system/calico-node-kdwc5" Jun 21 05:28:51.091411 containerd[1532]: time="2025-06-21T05:28:51.091134770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c8b6965fb-8mvl5,Uid:488c1d6b-6caa-4adf-89fa-f7200395de5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c487d4f9b331d3f81107650993279830c3fd639c29f834e2437f58768858a50\"" Jun 21 05:28:51.093413 kubelet[2681]: E0621 05:28:51.093376 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:51.096633 containerd[1532]: time="2025-06-21T05:28:51.096576499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 21 05:28:51.156631 kubelet[2681]: E0621 05:28:51.156574 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.156631 kubelet[2681]: W0621 05:28:51.156618 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.159217 kubelet[2681]: E0621 05:28:51.158935 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.159217 kubelet[2681]: E0621 05:28:51.158952 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.159217 kubelet[2681]: W0621 05:28:51.159028 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.159217 kubelet[2681]: E0621 05:28:51.159062 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.159715 kubelet[2681]: E0621 05:28:51.159692 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.160061 kubelet[2681]: W0621 05:28:51.160038 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.160414 kubelet[2681]: E0621 05:28:51.160320 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.160414 kubelet[2681]: W0621 05:28:51.160335 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.161195 kubelet[2681]: E0621 05:28:51.161149 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.161259 kubelet[2681]: E0621 05:28:51.161198 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.161600 kubelet[2681]: E0621 05:28:51.161563 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.161600 kubelet[2681]: W0621 05:28:51.161581 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.161789 kubelet[2681]: E0621 05:28:51.161774 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.162024 kubelet[2681]: E0621 05:28:51.161999 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.162024 kubelet[2681]: W0621 05:28:51.162010 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.162258 kubelet[2681]: E0621 05:28:51.162243 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.162689 kubelet[2681]: E0621 05:28:51.162522 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.162689 kubelet[2681]: W0621 05:28:51.162534 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.162689 kubelet[2681]: E0621 05:28:51.162547 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.162863 kubelet[2681]: E0621 05:28:51.162852 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.162943 kubelet[2681]: W0621 05:28:51.162929 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.163066 kubelet[2681]: E0621 05:28:51.163055 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.163406 kubelet[2681]: E0621 05:28:51.163391 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.163544 kubelet[2681]: W0621 05:28:51.163529 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.163719 kubelet[2681]: E0621 05:28:51.163660 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.164273 kubelet[2681]: E0621 05:28:51.164258 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.164373 kubelet[2681]: W0621 05:28:51.164354 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.164485 kubelet[2681]: E0621 05:28:51.164422 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.164705 kubelet[2681]: E0621 05:28:51.164693 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.164826 kubelet[2681]: W0621 05:28:51.164773 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.164886 kubelet[2681]: E0621 05:28:51.164876 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.165146 kubelet[2681]: E0621 05:28:51.165136 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.165219 kubelet[2681]: W0621 05:28:51.165208 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.165272 kubelet[2681]: E0621 05:28:51.165263 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.166606 kubelet[2681]: E0621 05:28:51.166578 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.166606 kubelet[2681]: W0621 05:28:51.166602 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.166716 kubelet[2681]: E0621 05:28:51.166635 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.166995 kubelet[2681]: E0621 05:28:51.166973 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.166995 kubelet[2681]: W0621 05:28:51.166993 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.167089 kubelet[2681]: E0621 05:28:51.167013 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.177080 kubelet[2681]: E0621 05:28:51.176953 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.177080 kubelet[2681]: W0621 05:28:51.176976 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.177080 kubelet[2681]: E0621 05:28:51.177008 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.206408 kubelet[2681]: E0621 05:28:51.205942 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d42wl" podUID="47771cd6-42bc-4e44-9ac1-516f10966eb8" Jun 21 05:28:51.213518 kubelet[2681]: E0621 05:28:51.213481 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.214173 kubelet[2681]: W0621 05:28:51.214043 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.214855 kubelet[2681]: E0621 05:28:51.214352 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.215799 kubelet[2681]: E0621 05:28:51.215780 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.216037 kubelet[2681]: W0621 05:28:51.215909 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.216037 kubelet[2681]: E0621 05:28:51.215937 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.217394 kubelet[2681]: E0621 05:28:51.216745 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.217798 kubelet[2681]: W0621 05:28:51.217779 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.217899 kubelet[2681]: E0621 05:28:51.217885 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.218755 kubelet[2681]: E0621 05:28:51.218740 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.219041 kubelet[2681]: W0621 05:28:51.219016 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.219156 kubelet[2681]: E0621 05:28:51.219139 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.219586 kubelet[2681]: E0621 05:28:51.219571 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.220083 kubelet[2681]: W0621 05:28:51.219963 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.220083 kubelet[2681]: E0621 05:28:51.219986 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.221533 kubelet[2681]: E0621 05:28:51.221333 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.221533 kubelet[2681]: W0621 05:28:51.221352 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.221533 kubelet[2681]: E0621 05:28:51.221371 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.223114 kubelet[2681]: E0621 05:28:51.222963 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.223114 kubelet[2681]: W0621 05:28:51.222977 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.223114 kubelet[2681]: E0621 05:28:51.222995 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.224360 kubelet[2681]: E0621 05:28:51.223703 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.225233 kubelet[2681]: W0621 05:28:51.224639 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.225233 kubelet[2681]: E0621 05:28:51.224669 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.226812 kubelet[2681]: E0621 05:28:51.226798 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.226936 kubelet[2681]: W0621 05:28:51.226921 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.227004 kubelet[2681]: E0621 05:28:51.226993 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.227326 kubelet[2681]: E0621 05:28:51.227311 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.227553 kubelet[2681]: W0621 05:28:51.227404 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.227553 kubelet[2681]: E0621 05:28:51.227423 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.227738 kubelet[2681]: E0621 05:28:51.227728 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.227953 kubelet[2681]: W0621 05:28:51.227902 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.227953 kubelet[2681]: E0621 05:28:51.227922 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.228745 kubelet[2681]: E0621 05:28:51.228540 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.228745 kubelet[2681]: W0621 05:28:51.228561 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.228745 kubelet[2681]: E0621 05:28:51.228576 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.229346 kubelet[2681]: E0621 05:28:51.229329 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.229620 kubelet[2681]: W0621 05:28:51.229436 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.229620 kubelet[2681]: E0621 05:28:51.229469 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.230692 kubelet[2681]: E0621 05:28:51.230418 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.231906 kubelet[2681]: W0621 05:28:51.230439 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.234534 kubelet[2681]: E0621 05:28:51.231810 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.235535 kubelet[2681]: E0621 05:28:51.235510 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.235652 kubelet[2681]: W0621 05:28:51.235636 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.235717 kubelet[2681]: E0621 05:28:51.235706 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.236268 kubelet[2681]: E0621 05:28:51.236124 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.236268 kubelet[2681]: W0621 05:28:51.236137 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.236268 kubelet[2681]: E0621 05:28:51.236150 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.236467 kubelet[2681]: E0621 05:28:51.236444 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.236531 kubelet[2681]: W0621 05:28:51.236521 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.236655 kubelet[2681]: E0621 05:28:51.236568 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.236785 kubelet[2681]: E0621 05:28:51.236770 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.236842 kubelet[2681]: W0621 05:28:51.236833 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.236896 kubelet[2681]: E0621 05:28:51.236887 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.237231 kubelet[2681]: E0621 05:28:51.237124 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.237231 kubelet[2681]: W0621 05:28:51.237136 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.237231 kubelet[2681]: E0621 05:28:51.237148 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.237389 kubelet[2681]: E0621 05:28:51.237378 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.237606 kubelet[2681]: W0621 05:28:51.237506 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.237707 kubelet[2681]: E0621 05:28:51.237694 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.238765 containerd[1532]: time="2025-06-21T05:28:51.238583634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kdwc5,Uid:172361f1-ced1-472a-a9b9-1ad0f9018ad9,Namespace:calico-system,Attempt:0,}" Jun 21 05:28:51.239203 kubelet[2681]: E0621 05:28:51.239167 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.239203 kubelet[2681]: W0621 05:28:51.239191 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.239324 kubelet[2681]: E0621 05:28:51.239212 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.240085 kubelet[2681]: I0621 05:28:51.240053 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwc4n\" (UniqueName: \"kubernetes.io/projected/47771cd6-42bc-4e44-9ac1-516f10966eb8-kube-api-access-bwc4n\") pod \"csi-node-driver-d42wl\" (UID: \"47771cd6-42bc-4e44-9ac1-516f10966eb8\") " pod="calico-system/csi-node-driver-d42wl" Jun 21 05:28:51.240714 kubelet[2681]: E0621 05:28:51.240659 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.240714 kubelet[2681]: W0621 05:28:51.240682 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.240714 kubelet[2681]: E0621 05:28:51.240709 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.241834 kubelet[2681]: E0621 05:28:51.241503 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.241834 kubelet[2681]: W0621 05:28:51.241523 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.241834 kubelet[2681]: E0621 05:28:51.241546 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.242614 kubelet[2681]: E0621 05:28:51.242593 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.243804 kubelet[2681]: W0621 05:28:51.243529 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.243804 kubelet[2681]: E0621 05:28:51.243584 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.243804 kubelet[2681]: I0621 05:28:51.243621 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/47771cd6-42bc-4e44-9ac1-516f10966eb8-varrun\") pod \"csi-node-driver-d42wl\" (UID: \"47771cd6-42bc-4e44-9ac1-516f10966eb8\") " pod="calico-system/csi-node-driver-d42wl" Jun 21 05:28:51.244464 kubelet[2681]: E0621 05:28:51.244354 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.244925 kubelet[2681]: W0621 05:28:51.244712 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.244925 kubelet[2681]: E0621 05:28:51.244760 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.244925 kubelet[2681]: I0621 05:28:51.244798 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47771cd6-42bc-4e44-9ac1-516f10966eb8-kubelet-dir\") pod \"csi-node-driver-d42wl\" (UID: \"47771cd6-42bc-4e44-9ac1-516f10966eb8\") " pod="calico-system/csi-node-driver-d42wl" Jun 21 05:28:51.245656 kubelet[2681]: E0621 05:28:51.245628 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.245656 kubelet[2681]: W0621 05:28:51.245653 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.246235 kubelet[2681]: E0621 05:28:51.245683 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.246577 kubelet[2681]: E0621 05:28:51.246557 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.246643 kubelet[2681]: W0621 05:28:51.246577 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.246887 kubelet[2681]: E0621 05:28:51.246855 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.248285 kubelet[2681]: E0621 05:28:51.248261 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.248285 kubelet[2681]: W0621 05:28:51.248284 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.248645 kubelet[2681]: E0621 05:28:51.248566 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.248645 kubelet[2681]: I0621 05:28:51.248622 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47771cd6-42bc-4e44-9ac1-516f10966eb8-registration-dir\") pod \"csi-node-driver-d42wl\" (UID: \"47771cd6-42bc-4e44-9ac1-516f10966eb8\") " pod="calico-system/csi-node-driver-d42wl" Jun 21 05:28:51.249260 kubelet[2681]: E0621 05:28:51.249232 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.249260 kubelet[2681]: W0621 05:28:51.249257 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.249418 kubelet[2681]: E0621 05:28:51.249286 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.249905 kubelet[2681]: E0621 05:28:51.249879 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.249905 kubelet[2681]: W0621 05:28:51.249902 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.250565 kubelet[2681]: E0621 05:28:51.249929 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.250956 kubelet[2681]: E0621 05:28:51.250908 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.250956 kubelet[2681]: W0621 05:28:51.250933 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.251761 kubelet[2681]: E0621 05:28:51.251529 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.252484 kubelet[2681]: I0621 05:28:51.252246 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47771cd6-42bc-4e44-9ac1-516f10966eb8-socket-dir\") pod \"csi-node-driver-d42wl\" (UID: \"47771cd6-42bc-4e44-9ac1-516f10966eb8\") " pod="calico-system/csi-node-driver-d42wl" Jun 21 05:28:51.252654 kubelet[2681]: E0621 05:28:51.252573 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.252654 kubelet[2681]: W0621 05:28:51.252599 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.252654 kubelet[2681]: E0621 05:28:51.252621 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.253710 kubelet[2681]: E0621 05:28:51.253683 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.253710 kubelet[2681]: W0621 05:28:51.253708 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.253809 kubelet[2681]: E0621 05:28:51.253737 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.254743 kubelet[2681]: E0621 05:28:51.254716 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.254997 kubelet[2681]: W0621 05:28:51.254897 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.254997 kubelet[2681]: E0621 05:28:51.254929 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.255746 kubelet[2681]: E0621 05:28:51.255722 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.255746 kubelet[2681]: W0621 05:28:51.255743 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.255836 kubelet[2681]: E0621 05:28:51.255763 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.273873 containerd[1532]: time="2025-06-21T05:28:51.273810628Z" level=info msg="connecting to shim 62df5cce0b7ee90005b2da00485a2e351b05720f1e483445fbeb8c0a731866b3" address="unix:///run/containerd/s/badc3383317c9640e9ad6efc38b3cc61b67acb5a31051f93eddc485c8243b7dd" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:28:51.321895 systemd[1]: Started cri-containerd-62df5cce0b7ee90005b2da00485a2e351b05720f1e483445fbeb8c0a731866b3.scope - libcontainer container 62df5cce0b7ee90005b2da00485a2e351b05720f1e483445fbeb8c0a731866b3. Jun 21 05:28:51.355178 kubelet[2681]: E0621 05:28:51.353994 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.355178 kubelet[2681]: W0621 05:28:51.354037 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.355178 kubelet[2681]: E0621 05:28:51.354088 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.355178 kubelet[2681]: E0621 05:28:51.354773 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.355178 kubelet[2681]: W0621 05:28:51.354797 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.355178 kubelet[2681]: E0621 05:28:51.354821 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.356790 kubelet[2681]: E0621 05:28:51.355752 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.356790 kubelet[2681]: W0621 05:28:51.355779 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.356790 kubelet[2681]: E0621 05:28:51.355832 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.357941 kubelet[2681]: E0621 05:28:51.357902 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.357941 kubelet[2681]: W0621 05:28:51.357929 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.358194 kubelet[2681]: E0621 05:28:51.358061 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.358349 kubelet[2681]: E0621 05:28:51.358326 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.358387 kubelet[2681]: W0621 05:28:51.358367 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.358413 kubelet[2681]: E0621 05:28:51.358399 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.358863 kubelet[2681]: E0621 05:28:51.358827 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.358912 kubelet[2681]: W0621 05:28:51.358865 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.358912 kubelet[2681]: E0621 05:28:51.358889 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.361605 kubelet[2681]: E0621 05:28:51.361568 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.361605 kubelet[2681]: W0621 05:28:51.361596 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.361724 kubelet[2681]: E0621 05:28:51.361629 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.362015 kubelet[2681]: E0621 05:28:51.361989 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.362053 kubelet[2681]: W0621 05:28:51.362013 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.362129 kubelet[2681]: E0621 05:28:51.362109 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.362431 kubelet[2681]: E0621 05:28:51.362413 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.362507 kubelet[2681]: W0621 05:28:51.362439 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.362575 kubelet[2681]: E0621 05:28:51.362557 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.362782 kubelet[2681]: E0621 05:28:51.362767 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.362813 kubelet[2681]: W0621 05:28:51.362783 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.362839 kubelet[2681]: E0621 05:28:51.362812 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.363179 kubelet[2681]: E0621 05:28:51.363162 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.363217 kubelet[2681]: W0621 05:28:51.363180 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.363217 kubelet[2681]: E0621 05:28:51.363204 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.365572 kubelet[2681]: E0621 05:28:51.365540 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.365572 kubelet[2681]: W0621 05:28:51.365568 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.365685 kubelet[2681]: E0621 05:28:51.365667 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.365993 kubelet[2681]: E0621 05:28:51.365972 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.366036 kubelet[2681]: W0621 05:28:51.365993 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.366102 kubelet[2681]: E0621 05:28:51.366085 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.366443 kubelet[2681]: E0621 05:28:51.366426 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.366569 kubelet[2681]: W0621 05:28:51.366446 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.366569 kubelet[2681]: E0621 05:28:51.366562 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.366811 kubelet[2681]: E0621 05:28:51.366791 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.366847 kubelet[2681]: W0621 05:28:51.366812 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.366918 kubelet[2681]: E0621 05:28:51.366900 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.367217 kubelet[2681]: E0621 05:28:51.367198 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.367268 kubelet[2681]: W0621 05:28:51.367219 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.367355 kubelet[2681]: E0621 05:28:51.367335 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.369219 kubelet[2681]: E0621 05:28:51.369183 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.369219 kubelet[2681]: W0621 05:28:51.369209 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.369383 kubelet[2681]: E0621 05:28:51.369292 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.369645 kubelet[2681]: E0621 05:28:51.369618 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.369645 kubelet[2681]: W0621 05:28:51.369641 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.369852 kubelet[2681]: E0621 05:28:51.369815 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.369971 kubelet[2681]: E0621 05:28:51.369947 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.370005 kubelet[2681]: W0621 05:28:51.369967 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.370071 kubelet[2681]: E0621 05:28:51.370055 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.370341 kubelet[2681]: E0621 05:28:51.370322 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.370387 kubelet[2681]: W0621 05:28:51.370341 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.370387 kubelet[2681]: E0621 05:28:51.370376 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.370694 kubelet[2681]: E0621 05:28:51.370676 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.370746 kubelet[2681]: W0621 05:28:51.370696 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.370746 kubelet[2681]: E0621 05:28:51.370728 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.371522 kubelet[2681]: E0621 05:28:51.371494 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.371522 kubelet[2681]: W0621 05:28:51.371514 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.371628 kubelet[2681]: E0621 05:28:51.371530 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.372554 kubelet[2681]: E0621 05:28:51.372522 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.372554 kubelet[2681]: W0621 05:28:51.372544 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.372694 kubelet[2681]: E0621 05:28:51.372670 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.373576 kubelet[2681]: E0621 05:28:51.373546 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.373576 kubelet[2681]: W0621 05:28:51.373566 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.373716 kubelet[2681]: E0621 05:28:51.373692 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.374605 kubelet[2681]: E0621 05:28:51.374571 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.374605 kubelet[2681]: W0621 05:28:51.374593 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.374690 kubelet[2681]: E0621 05:28:51.374611 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:51.401097 kubelet[2681]: E0621 05:28:51.401042 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:51.401097 kubelet[2681]: W0621 05:28:51.401090 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:51.401260 kubelet[2681]: E0621 05:28:51.401121 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:51.475513 containerd[1532]: time="2025-06-21T05:28:51.475431709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kdwc5,Uid:172361f1-ced1-472a-a9b9-1ad0f9018ad9,Namespace:calico-system,Attempt:0,} returns sandbox id \"62df5cce0b7ee90005b2da00485a2e351b05720f1e483445fbeb8c0a731866b3\"" Jun 21 05:28:52.561603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1163680575.mount: Deactivated successfully. Jun 21 05:28:53.118965 kubelet[2681]: E0621 05:28:53.117792 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d42wl" podUID="47771cd6-42bc-4e44-9ac1-516f10966eb8" Jun 21 05:28:54.037252 containerd[1532]: time="2025-06-21T05:28:54.016874999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888" Jun 21 05:28:54.038366 containerd[1532]: time="2025-06-21T05:28:54.019908037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 2.923111489s" Jun 21 05:28:54.038366 containerd[1532]: time="2025-06-21T05:28:54.037979761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 21 05:28:54.038366 containerd[1532]: time="2025-06-21T05:28:54.019988551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:54.039883 containerd[1532]: time="2025-06-21T05:28:54.039746861Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:54.040968 containerd[1532]: time="2025-06-21T05:28:54.040523397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:54.040968 containerd[1532]: time="2025-06-21T05:28:54.040719355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 21 05:28:54.074146 containerd[1532]: time="2025-06-21T05:28:54.074091671Z" level=info msg="CreateContainer within sandbox \"6c487d4f9b331d3f81107650993279830c3fd639c29f834e2437f58768858a50\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 21 05:28:54.082136 containerd[1532]: time="2025-06-21T05:28:54.082083734Z" level=info msg="Container c502cfcd2b96539271dc2d9ca48458bc17a2a850346b24029c3d1865b3fd2a9a: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:54.089105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount609216922.mount: Deactivated successfully. 
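The kubelet triplets repeated throughout the entries above all describe a single condition: the FlexVolume prober invokes the nodeagent~uds driver binary with the `init` argument, the binary is absent from the node, so the call produces no output, and decoding that empty output as JSON fails with "unexpected end of JSON input". A minimal Go sketch of that failure mode follows; the `driverStatus` struct and the sample `init` response are illustrative assumptions modelled on the documented FlexVolume convention, not the kubelet's own types.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a simplified stand-in for the JSON a FlexVolume driver is
// expected to print in response to "init"; the kubelet's real type differs.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The uds driver binary is missing on the node; a bare-name lookup
	// reproduces the "executable file not found in $PATH" text seen in the
	// log (assuming no "uds" binary happens to be installed locally).
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("driver lookup:", err)
	}

	// With nothing to execute, the driver call yields empty output, and
	// unmarshalling an empty string gives exactly the logged error.
	var st driverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("unmarshal:", err) // unexpected end of JSON input
	}

	// A present, well-behaved driver would answer "init" with something like:
	good := `{"status":"Success","capabilities":{"attach":false}}`
	if err := json.Unmarshal([]byte(good), &st); err == nil {
		fmt.Printf("parsed init response: %+v\n", st)
	}
}
```

On a node that never uses FlexVolume drivers these messages are noise rather than a fault; installing the missing uds binary, or removing the empty nodeagent~uds plugin directory, would likely silence the repeated probing errors.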
Jun 21 05:28:54.100650 containerd[1532]: time="2025-06-21T05:28:54.100540457Z" level=info msg="CreateContainer within sandbox \"6c487d4f9b331d3f81107650993279830c3fd639c29f834e2437f58768858a50\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c502cfcd2b96539271dc2d9ca48458bc17a2a850346b24029c3d1865b3fd2a9a\"" Jun 21 05:28:54.101520 containerd[1532]: time="2025-06-21T05:28:54.101271092Z" level=info msg="StartContainer for \"c502cfcd2b96539271dc2d9ca48458bc17a2a850346b24029c3d1865b3fd2a9a\"" Jun 21 05:28:54.104967 containerd[1532]: time="2025-06-21T05:28:54.104772939Z" level=info msg="connecting to shim c502cfcd2b96539271dc2d9ca48458bc17a2a850346b24029c3d1865b3fd2a9a" address="unix:///run/containerd/s/39b1f730765a6619fc7f035af7bd947f9b36301ec5b269aae8d176e55cc86551" protocol=ttrpc version=3 Jun 21 05:28:54.138737 systemd[1]: Started cri-containerd-c502cfcd2b96539271dc2d9ca48458bc17a2a850346b24029c3d1865b3fd2a9a.scope - libcontainer container c502cfcd2b96539271dc2d9ca48458bc17a2a850346b24029c3d1865b3fd2a9a. Jun 21 05:28:54.200832 containerd[1532]: time="2025-06-21T05:28:54.200777513Z" level=info msg="StartContainer for \"c502cfcd2b96539271dc2d9ca48458bc17a2a850346b24029c3d1865b3fd2a9a\" returns successfully" Jun 21 05:28:54.243926 kubelet[2681]: E0621 05:28:54.243878 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:54.265641 kubelet[2681]: E0621 05:28:54.265601 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.266525 kubelet[2681]: W0621 05:28:54.266064 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.266525 kubelet[2681]: E0621 05:28:54.266109 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.267855 kubelet[2681]: E0621 05:28:54.266716 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.267855 kubelet[2681]: W0621 05:28:54.266741 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.267855 kubelet[2681]: E0621 05:28:54.266766 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.267855 kubelet[2681]: E0621 05:28:54.267557 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.267855 kubelet[2681]: W0621 05:28:54.267575 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.267855 kubelet[2681]: E0621 05:28:54.267596 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:54.269300 kubelet[2681]: I0621 05:28:54.269199 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c8b6965fb-8mvl5" podStartSLOduration=1.324335051 podStartE2EDuration="4.269154122s" podCreationTimestamp="2025-06-21 05:28:50 +0000 UTC" firstStartedPulling="2025-06-21 05:28:51.09551114 +0000 UTC m=+19.153651375" lastFinishedPulling="2025-06-21 05:28:54.040330211 +0000 UTC m=+22.098470446" observedRunningTime="2025-06-21 05:28:54.264012676 +0000 UTC m=+22.322152926" watchObservedRunningTime="2025-06-21 05:28:54.269154122 +0000 UTC m=+22.327294427" Jun 21 05:28:54.269632 kubelet[2681]: E0621 05:28:54.269318 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.269738 kubelet[2681]: W0621 05:28:54.269724 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.269821 kubelet[2681]: E0621 05:28:54.269808 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.270319 kubelet[2681]: E0621 05:28:54.270171 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.270319 kubelet[2681]: W0621 05:28:54.270195 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.270319 kubelet[2681]: E0621 05:28:54.270214 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.270641 kubelet[2681]: E0621 05:28:54.270601 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.270833 kubelet[2681]: W0621 05:28:54.270736 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.270833 kubelet[2681]: E0621 05:28:54.270756 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.273346 kubelet[2681]: E0621 05:28:54.273319 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.273610 kubelet[2681]: W0621 05:28:54.273497 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.273610 kubelet[2681]: E0621 05:28:54.273524 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:54.274203 kubelet[2681]: E0621 05:28:54.274046 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.274203 kubelet[2681]: W0621 05:28:54.274068 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.274203 kubelet[2681]: E0621 05:28:54.274088 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.274422 kubelet[2681]: E0621 05:28:54.274409 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.274621 kubelet[2681]: W0621 05:28:54.274487 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.274621 kubelet[2681]: E0621 05:28:54.274504 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.274798 kubelet[2681]: E0621 05:28:54.274786 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.275099 kubelet[2681]: W0621 05:28:54.274959 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.275099 kubelet[2681]: E0621 05:28:54.274978 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.275287 kubelet[2681]: E0621 05:28:54.275270 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.276530 kubelet[2681]: W0621 05:28:54.276237 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.276681 kubelet[2681]: E0621 05:28:54.276654 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.277315 kubelet[2681]: E0621 05:28:54.277184 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.277315 kubelet[2681]: W0621 05:28:54.277199 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.277315 kubelet[2681]: E0621 05:28:54.277213 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:54.277763 kubelet[2681]: E0621 05:28:54.277631 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.277763 kubelet[2681]: W0621 05:28:54.277645 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.278046 kubelet[2681]: E0621 05:28:54.277658 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.279227 kubelet[2681]: E0621 05:28:54.278439 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.279378 kubelet[2681]: W0621 05:28:54.279355 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.279499 kubelet[2681]: E0621 05:28:54.279481 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.279926 kubelet[2681]: E0621 05:28:54.279817 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.279926 kubelet[2681]: W0621 05:28:54.279832 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.279926 kubelet[2681]: E0621 05:28:54.279856 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.290994 kubelet[2681]: E0621 05:28:54.289400 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.290994 kubelet[2681]: W0621 05:28:54.289432 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.290994 kubelet[2681]: E0621 05:28:54.289494 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.290994 kubelet[2681]: E0621 05:28:54.290685 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.290994 kubelet[2681]: W0621 05:28:54.290710 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.290994 kubelet[2681]: E0621 05:28:54.290735 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:54.293347 kubelet[2681]: E0621 05:28:54.291609 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.293347 kubelet[2681]: W0621 05:28:54.291634 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.293347 kubelet[2681]: E0621 05:28:54.291659 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.293907 kubelet[2681]: E0621 05:28:54.293768 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.293907 kubelet[2681]: W0621 05:28:54.293793 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.293907 kubelet[2681]: E0621 05:28:54.293829 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.294346 kubelet[2681]: E0621 05:28:54.294318 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.294346 kubelet[2681]: W0621 05:28:54.294340 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.294587 kubelet[2681]: E0621 05:28:54.294442 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:54.296606 kubelet[2681]: E0621 05:28:54.296564 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.296606 kubelet[2681]: W0621 05:28:54.296598 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.297675 kubelet[2681]: E0621 05:28:54.296885 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.297675 kubelet[2681]: W0621 05:28:54.296901 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.297675 kubelet[2681]: E0621 05:28:54.297123 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.297675 kubelet[2681]: W0621 05:28:54.297136 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.297675 kubelet[2681]: E0621 05:28:54.297155 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.298792 kubelet[2681]: E0621 05:28:54.298302 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.298792 kubelet[2681]: W0621 05:28:54.298325 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.298792 kubelet[2681]: E0621 05:28:54.298348 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.302584 kubelet[2681]: E0621 05:28:54.302539 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.304912 kubelet[2681]: E0621 05:28:54.304566 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.305138 kubelet[2681]: W0621 05:28:54.305077 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.305526 kubelet[2681]: E0621 05:28:54.305491 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:54.306643 kubelet[2681]: E0621 05:28:54.306618 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.306928 kubelet[2681]: W0621 05:28:54.306868 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.307146 kubelet[2681]: E0621 05:28:54.307127 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.308632 kubelet[2681]: E0621 05:28:54.307851 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.308632 kubelet[2681]: W0621 05:28:54.307871 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.308632 kubelet[2681]: E0621 05:28:54.307894 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.309345 kubelet[2681]: E0621 05:28:54.309266 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.309764 kubelet[2681]: W0621 05:28:54.309707 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.310139 kubelet[2681]: E0621 05:28:54.310040 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.313738 kubelet[2681]: E0621 05:28:54.313696 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.314779 kubelet[2681]: E0621 05:28:54.314728 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.314779 kubelet[2681]: W0621 05:28:54.314768 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.314926 kubelet[2681]: E0621 05:28:54.314797 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:54.317349 kubelet[2681]: E0621 05:28:54.317079 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.317349 kubelet[2681]: W0621 05:28:54.317113 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.317349 kubelet[2681]: E0621 05:28:54.317142 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.319950 kubelet[2681]: E0621 05:28:54.318239 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.319950 kubelet[2681]: W0621 05:28:54.318269 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.319950 kubelet[2681]: E0621 05:28:54.318296 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.319950 kubelet[2681]: E0621 05:28:54.319182 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.319950 kubelet[2681]: W0621 05:28:54.319203 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.319950 kubelet[2681]: E0621 05:28:54.319226 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:54.319950 kubelet[2681]: E0621 05:28:54.319857 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:54.319950 kubelet[2681]: W0621 05:28:54.319874 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:54.319950 kubelet[2681]: E0621 05:28:54.319894 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:55.118703 kubelet[2681]: E0621 05:28:55.118286 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d42wl" podUID="47771cd6-42bc-4e44-9ac1-516f10966eb8" Jun 21 05:28:55.247952 kubelet[2681]: E0621 05:28:55.247913 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:55.287670 kubelet[2681]: E0621 05:28:55.287602 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.288153 kubelet[2681]: W0621 05:28:55.287638 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.288153 kubelet[2681]: E0621 05:28:55.288039 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.288716 kubelet[2681]: E0621 05:28:55.288673 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.289074 kubelet[2681]: W0621 05:28:55.288800 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.289074 kubelet[2681]: E0621 05:28:55.288828 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.289746 kubelet[2681]: E0621 05:28:55.289663 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.289746 kubelet[2681]: W0621 05:28:55.289694 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.290157 kubelet[2681]: E0621 05:28:55.290011 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.290807 kubelet[2681]: E0621 05:28:55.290702 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.291180 kubelet[2681]: W0621 05:28:55.290923 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.291180 kubelet[2681]: E0621 05:28:55.290954 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:55.292081 kubelet[2681]: E0621 05:28:55.291907 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.292472 kubelet[2681]: W0621 05:28:55.292152 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.292472 kubelet[2681]: E0621 05:28:55.292178 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.293433 kubelet[2681]: E0621 05:28:55.293310 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.293650 kubelet[2681]: W0621 05:28:55.293494 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.293650 kubelet[2681]: E0621 05:28:55.293522 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.294760 kubelet[2681]: E0621 05:28:55.294705 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.295252 kubelet[2681]: W0621 05:28:55.294898 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.295252 kubelet[2681]: E0621 05:28:55.294924 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.295627 kubelet[2681]: E0621 05:28:55.295613 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.295721 kubelet[2681]: W0621 05:28:55.295702 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.295868 kubelet[2681]: E0621 05:28:55.295856 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.296533 kubelet[2681]: E0621 05:28:55.296263 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.296533 kubelet[2681]: W0621 05:28:55.296281 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.296533 kubelet[2681]: E0621 05:28:55.296298 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:55.297005 kubelet[2681]: E0621 05:28:55.296986 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.297478 kubelet[2681]: W0621 05:28:55.297191 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.297478 kubelet[2681]: E0621 05:28:55.297219 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.297899 kubelet[2681]: E0621 05:28:55.297735 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.298164 kubelet[2681]: W0621 05:28:55.298006 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.298164 kubelet[2681]: E0621 05:28:55.298034 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.298597 kubelet[2681]: E0621 05:28:55.298580 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.298959 kubelet[2681]: W0621 05:28:55.298709 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.298959 kubelet[2681]: E0621 05:28:55.298734 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.300639 kubelet[2681]: E0621 05:28:55.300611 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.301015 kubelet[2681]: W0621 05:28:55.300783 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.301015 kubelet[2681]: E0621 05:28:55.300820 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.301278 kubelet[2681]: E0621 05:28:55.301260 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.301385 kubelet[2681]: W0621 05:28:55.301366 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.301632 kubelet[2681]: E0621 05:28:55.301485 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:55.301873 kubelet[2681]: E0621 05:28:55.301854 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.301971 kubelet[2681]: W0621 05:28:55.301954 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.302078 kubelet[2681]: E0621 05:28:55.302060 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.302774 kubelet[2681]: E0621 05:28:55.302576 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.302774 kubelet[2681]: W0621 05:28:55.302595 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.302774 kubelet[2681]: E0621 05:28:55.302613 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.303202 kubelet[2681]: E0621 05:28:55.303182 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.303306 kubelet[2681]: W0621 05:28:55.303289 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.303574 kubelet[2681]: E0621 05:28:55.303400 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.303808 kubelet[2681]: E0621 05:28:55.303789 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.303948 kubelet[2681]: W0621 05:28:55.303928 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.304151 kubelet[2681]: E0621 05:28:55.304037 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.304438 kubelet[2681]: E0621 05:28:55.304407 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.304438 kubelet[2681]: W0621 05:28:55.304435 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.304817 kubelet[2681]: E0621 05:28:55.304472 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:55.304817 kubelet[2681]: E0621 05:28:55.304724 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.304817 kubelet[2681]: W0621 05:28:55.304738 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.304817 kubelet[2681]: E0621 05:28:55.304755 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.304981 kubelet[2681]: E0621 05:28:55.304957 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.304981 kubelet[2681]: W0621 05:28:55.304969 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.305066 kubelet[2681]: E0621 05:28:55.304984 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.305289 kubelet[2681]: E0621 05:28:55.305271 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.305289 kubelet[2681]: W0621 05:28:55.305285 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.305581 kubelet[2681]: E0621 05:28:55.305558 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.306702 kubelet[2681]: E0621 05:28:55.306678 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.306702 kubelet[2681]: W0621 05:28:55.306696 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.307526 kubelet[2681]: E0621 05:28:55.306878 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.307526 kubelet[2681]: W0621 05:28:55.306886 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.307786 kubelet[2681]: E0621 05:28:55.307670 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.307786 kubelet[2681]: E0621 05:28:55.307719 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:55.308690 kubelet[2681]: E0621 05:28:55.308666 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.308690 kubelet[2681]: W0621 05:28:55.308684 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.308807 kubelet[2681]: E0621 05:28:55.308718 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.308971 kubelet[2681]: E0621 05:28:55.308948 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.308971 kubelet[2681]: W0621 05:28:55.308960 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.309058 kubelet[2681]: E0621 05:28:55.308982 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.309176 kubelet[2681]: E0621 05:28:55.309160 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.309176 kubelet[2681]: W0621 05:28:55.309171 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.309271 kubelet[2681]: E0621 05:28:55.309249 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.309406 kubelet[2681]: E0621 05:28:55.309394 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.309406 kubelet[2681]: W0621 05:28:55.309402 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.309562 kubelet[2681]: E0621 05:28:55.309420 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.309627 kubelet[2681]: E0621 05:28:55.309604 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.309627 kubelet[2681]: W0621 05:28:55.309618 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.309706 kubelet[2681]: E0621 05:28:55.309636 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:55.311584 kubelet[2681]: E0621 05:28:55.311554 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.311584 kubelet[2681]: W0621 05:28:55.311575 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.311758 kubelet[2681]: E0621 05:28:55.311613 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.312345 kubelet[2681]: E0621 05:28:55.312317 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.312345 kubelet[2681]: W0621 05:28:55.312336 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.312345 kubelet[2681]: E0621 05:28:55.312355 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.313033 kubelet[2681]: E0621 05:28:55.312944 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.313033 kubelet[2681]: W0621 05:28:55.312965 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.313033 kubelet[2681]: E0621 05:28:55.312985 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:28:55.314152 kubelet[2681]: E0621 05:28:55.314126 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:28:55.314152 kubelet[2681]: W0621 05:28:55.314146 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:28:55.314295 kubelet[2681]: E0621 05:28:55.314163 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:28:55.812526 containerd[1532]: time="2025-06-21T05:28:55.812416298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:55.814223 containerd[1532]: time="2025-06-21T05:28:55.814168319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627" Jun 21 05:28:55.814719 containerd[1532]: time="2025-06-21T05:28:55.814679112Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:55.818269 containerd[1532]: time="2025-06-21T05:28:55.818199052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 1.777446893s" Jun 21 05:28:55.818269 containerd[1532]: time="2025-06-21T05:28:55.818259412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 21 05:28:55.818491 containerd[1532]: time="2025-06-21T05:28:55.818443292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:28:55.825787 containerd[1532]: time="2025-06-21T05:28:55.825679289Z" level=info msg="CreateContainer within sandbox \"62df5cce0b7ee90005b2da00485a2e351b05720f1e483445fbeb8c0a731866b3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 21 05:28:55.838488 containerd[1532]: time="2025-06-21T05:28:55.835045550Z" level=info msg="Container 17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:28:55.847279 containerd[1532]: time="2025-06-21T05:28:55.847073232Z" level=info msg="CreateContainer within sandbox \"62df5cce0b7ee90005b2da00485a2e351b05720f1e483445fbeb8c0a731866b3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8\"" Jun 21 05:28:55.848476 containerd[1532]: time="2025-06-21T05:28:55.848423411Z" level=info msg="StartContainer for \"17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8\"" Jun 21 05:28:55.852796 containerd[1532]: time="2025-06-21T05:28:55.852723174Z" level=info msg="connecting to shim 17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8" address="unix:///run/containerd/s/badc3383317c9640e9ad6efc38b3cc61b67acb5a31051f93eddc485c8243b7dd" protocol=ttrpc version=3 Jun 21 05:28:55.893831 systemd[1]: Started cri-containerd-17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8.scope - libcontainer container 17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8. 
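The repeated driver-call.go / plugins.go entries above come from the kubelet probing the FlexVolume directory nodeagent~uds: the uds executable it expects under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is not present, so every init probe produces no output, and decoding that empty output as JSON yields exactly the "unexpected end of JSON input" error seen here. A minimal Go sketch of that failure mode (the driverStatus shape is an illustrative assumption, not the kubelet's actual type):

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus approximates the JSON reply a FlexVolume "init" call is expected
// to print on stdout, e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// A missing driver binary produces no stdout at all, so the probe sees "".
	output := []byte("")

	var st driverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		fmt.Println("unmarshal failed:", err) // unmarshal failed: unexpected end of JSON input
	}
}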
Jun 21 05:28:55.960046 containerd[1532]: time="2025-06-21T05:28:55.959940180Z" level=info msg="StartContainer for \"17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8\" returns successfully" Jun 21 05:28:55.976316 systemd[1]: cri-containerd-17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8.scope: Deactivated successfully. Jun 21 05:28:56.018517 containerd[1532]: time="2025-06-21T05:28:56.018402760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8\" id:\"17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8\" pid:3411 exited_at:{seconds:1750483735 nanos:980184861}" Jun 21 05:28:56.037339 containerd[1532]: time="2025-06-21T05:28:56.037244064Z" level=info msg="received exit event container_id:\"17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8\" id:\"17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8\" pid:3411 exited_at:{seconds:1750483735 nanos:980184861}" Jun 21 05:28:56.072018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17d14737ea7ef36d435cc51e0ab73d2a2374f124c088d65a85cb0d1f9ca1c2e8-rootfs.mount: Deactivated successfully. Jun 21 05:28:56.254515 kubelet[2681]: E0621 05:28:56.253983 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:28:56.256257 containerd[1532]: time="2025-06-21T05:28:56.255306745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 21 05:28:57.118561 kubelet[2681]: E0621 05:28:57.118449 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d42wl" podUID="47771cd6-42bc-4e44-9ac1-516f10966eb8" Jun 21 05:28:59.119924 kubelet[2681]: E0621 05:28:59.118713 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d42wl" podUID="47771cd6-42bc-4e44-9ac1-516f10966eb8" Jun 21 05:29:00.977173 containerd[1532]: time="2025-06-21T05:29:00.977022152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:00.991356 containerd[1532]: time="2025-06-21T05:29:00.991279109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 21 05:29:00.992928 containerd[1532]: time="2025-06-21T05:29:00.992849835Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:00.995843 containerd[1532]: time="2025-06-21T05:29:00.995766443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:00.997530 containerd[1532]: time="2025-06-21T05:29:00.996915992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", 
repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 4.741460505s" Jun 21 05:29:00.997530 containerd[1532]: time="2025-06-21T05:29:00.996976650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 21 05:29:01.005980 containerd[1532]: time="2025-06-21T05:29:01.005889669Z" level=info msg="CreateContainer within sandbox \"62df5cce0b7ee90005b2da00485a2e351b05720f1e483445fbeb8c0a731866b3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 21 05:29:01.038492 containerd[1532]: time="2025-06-21T05:29:01.037414669Z" level=info msg="Container da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:01.047620 containerd[1532]: time="2025-06-21T05:29:01.047561990Z" level=info msg="CreateContainer within sandbox \"62df5cce0b7ee90005b2da00485a2e351b05720f1e483445fbeb8c0a731866b3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1\"" Jun 21 05:29:01.048785 containerd[1532]: time="2025-06-21T05:29:01.048717836Z" level=info msg="StartContainer for \"da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1\"" Jun 21 05:29:01.051110 containerd[1532]: time="2025-06-21T05:29:01.051046806Z" level=info msg="connecting to shim da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1" address="unix:///run/containerd/s/badc3383317c9640e9ad6efc38b3cc61b67acb5a31051f93eddc485c8243b7dd" protocol=ttrpc version=3 Jun 21 05:29:01.084833 systemd[1]: Started cri-containerd-da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1.scope - libcontainer container da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1. Jun 21 05:29:01.118776 kubelet[2681]: E0621 05:29:01.118285 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d42wl" podUID="47771cd6-42bc-4e44-9ac1-516f10966eb8" Jun 21 05:29:01.159885 containerd[1532]: time="2025-06-21T05:29:01.159813910Z" level=info msg="StartContainer for \"da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1\" returns successfully" Jun 21 05:29:02.128953 systemd[1]: cri-containerd-da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1.scope: Deactivated successfully. Jun 21 05:29:02.129368 systemd[1]: cri-containerd-da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1.scope: Consumed 852ms CPU time, 163.5M memory peak, 8.3M read from disk, 171.2M written to disk. 
Jun 21 05:29:02.135124 containerd[1532]: time="2025-06-21T05:29:02.135025831Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1\" id:\"da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1\" pid:3468 exited_at:{seconds:1750483742 nanos:134267263}" Jun 21 05:29:02.136199 containerd[1532]: time="2025-06-21T05:29:02.136013396Z" level=info msg="received exit event container_id:\"da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1\" id:\"da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1\" pid:3468 exited_at:{seconds:1750483742 nanos:134267263}" Jun 21 05:29:02.247550 kubelet[2681]: I0621 05:29:02.245620 2681 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 21 05:29:02.247403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da7aff30cd5c561ad8a2428f9ce0c2a4717fb16d365fd6f5c13e6f8c60db61a1-rootfs.mount: Deactivated successfully. Jun 21 05:29:02.320429 containerd[1532]: time="2025-06-21T05:29:02.320179018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 21 05:29:02.400675 systemd[1]: Created slice kubepods-burstable-pode103371f_c594_4fc8_ada3_19b59332d837.slice - libcontainer container kubepods-burstable-pode103371f_c594_4fc8_ada3_19b59332d837.slice. Jun 21 05:29:02.415341 kubelet[2681]: W0621 05:29:02.415275 2681 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4372.0.0-e-bb84d467cd" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.0.0-e-bb84d467cd' and this object Jun 21 05:29:02.418712 kubelet[2681]: E0621 05:29:02.417560 2681 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4372.0.0-e-bb84d467cd\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.0.0-e-bb84d467cd' and this object" logger="UnhandledError" Jun 21 05:29:02.418712 kubelet[2681]: W0621 05:29:02.418001 2681 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4372.0.0-e-bb84d467cd" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.0.0-e-bb84d467cd' and this object Jun 21 05:29:02.418712 kubelet[2681]: E0621 05:29:02.418034 2681 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4372.0.0-e-bb84d467cd\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.0.0-e-bb84d467cd' and this object" logger="UnhandledError" Jun 21 05:29:02.433542 systemd[1]: Created slice kubepods-burstable-pod5c6287d7_4572_4ea1_b2d5_7d6f8762f244.slice - libcontainer container kubepods-burstable-pod5c6287d7_4572_4ea1_b2d5_7d6f8762f244.slice. 
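The exited_at field in the TaskExit event above is a Unix epoch timestamp, and it lines up with the surrounding wall-clock entries (the container exited at 05:29:02.134, one millisecond before the event was logged). A one-line Go conversion:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the TaskExit event above: seconds:1750483742 nanos:134267263
	t := time.Unix(1750483742, 134267263).UTC()
	fmt.Println(t) // 2025-06-21 05:29:02.134267263 +0000 UTC
}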
Jun 21 05:29:02.471713 kubelet[2681]: I0621 05:29:02.471426 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9012398b-cd9e-4e40-be6b-a069caec21d3-whisker-backend-key-pair\") pod \"whisker-7c5cfb5855-bfhp9\" (UID: \"9012398b-cd9e-4e40-be6b-a069caec21d3\") " pod="calico-system/whisker-7c5cfb5855-bfhp9" Jun 21 05:29:02.477535 kubelet[2681]: I0621 05:29:02.472227 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xjk8\" (UniqueName: \"kubernetes.io/projected/41978ebe-aee9-4eb1-b675-daa95daac0a7-kube-api-access-7xjk8\") pod \"calico-apiserver-86d6895b5f-nmhc6\" (UID: \"41978ebe-aee9-4eb1-b675-daa95daac0a7\") " pod="calico-apiserver/calico-apiserver-86d6895b5f-nmhc6" Jun 21 05:29:02.477535 kubelet[2681]: I0621 05:29:02.475569 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561d093e-330b-43dd-ad62-1acd896769be-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-tkkdb\" (UID: \"561d093e-330b-43dd-ad62-1acd896769be\") " pod="calico-system/goldmane-5bd85449d4-tkkdb" Jun 21 05:29:02.477535 kubelet[2681]: I0621 05:29:02.475627 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e103371f-c594-4fc8-ada3-19b59332d837-config-volume\") pod \"coredns-668d6bf9bc-8lkqm\" (UID: \"e103371f-c594-4fc8-ada3-19b59332d837\") " pod="kube-system/coredns-668d6bf9bc-8lkqm" Jun 21 05:29:02.477535 kubelet[2681]: I0621 05:29:02.475661 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9012398b-cd9e-4e40-be6b-a069caec21d3-whisker-ca-bundle\") pod \"whisker-7c5cfb5855-bfhp9\" (UID: \"9012398b-cd9e-4e40-be6b-a069caec21d3\") " pod="calico-system/whisker-7c5cfb5855-bfhp9" Jun 21 05:29:02.477535 kubelet[2681]: I0621 05:29:02.475700 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwkwr\" (UniqueName: \"kubernetes.io/projected/9012398b-cd9e-4e40-be6b-a069caec21d3-kube-api-access-nwkwr\") pod \"whisker-7c5cfb5855-bfhp9\" (UID: \"9012398b-cd9e-4e40-be6b-a069caec21d3\") " pod="calico-system/whisker-7c5cfb5855-bfhp9" Jun 21 05:29:02.473261 systemd[1]: Created slice kubepods-besteffort-pod6f880db5_867a_4143_aed9_1a92c814b9e8.slice - libcontainer container kubepods-besteffort-pod6f880db5_867a_4143_aed9_1a92c814b9e8.slice. 
Jun 21 05:29:02.478057 kubelet[2681]: I0621 05:29:02.475728 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qcfn\" (UniqueName: \"kubernetes.io/projected/5c6287d7-4572-4ea1-b2d5-7d6f8762f244-kube-api-access-9qcfn\") pod \"coredns-668d6bf9bc-fnmnk\" (UID: \"5c6287d7-4572-4ea1-b2d5-7d6f8762f244\") " pod="kube-system/coredns-668d6bf9bc-fnmnk" Jun 21 05:29:02.478057 kubelet[2681]: I0621 05:29:02.475766 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxmht\" (UniqueName: \"kubernetes.io/projected/2487498c-8136-4994-aca7-e0dddb1eb173-kube-api-access-hxmht\") pod \"calico-apiserver-86d6895b5f-77tzw\" (UID: \"2487498c-8136-4994-aca7-e0dddb1eb173\") " pod="calico-apiserver/calico-apiserver-86d6895b5f-77tzw" Jun 21 05:29:02.478057 kubelet[2681]: I0621 05:29:02.475799 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54czj\" (UniqueName: \"kubernetes.io/projected/6f880db5-867a-4143-aed9-1a92c814b9e8-kube-api-access-54czj\") pod \"calico-kube-controllers-845486d6f4-4qppc\" (UID: \"6f880db5-867a-4143-aed9-1a92c814b9e8\") " pod="calico-system/calico-kube-controllers-845486d6f4-4qppc" Jun 21 05:29:02.478057 kubelet[2681]: I0621 05:29:02.475828 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf9lt\" (UniqueName: \"kubernetes.io/projected/e103371f-c594-4fc8-ada3-19b59332d837-kube-api-access-kf9lt\") pod \"coredns-668d6bf9bc-8lkqm\" (UID: \"e103371f-c594-4fc8-ada3-19b59332d837\") " pod="kube-system/coredns-668d6bf9bc-8lkqm" Jun 21 05:29:02.478057 kubelet[2681]: I0621 05:29:02.475866 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c6287d7-4572-4ea1-b2d5-7d6f8762f244-config-volume\") pod \"coredns-668d6bf9bc-fnmnk\" (UID: \"5c6287d7-4572-4ea1-b2d5-7d6f8762f244\") " pod="kube-system/coredns-668d6bf9bc-fnmnk" Jun 21 05:29:02.474414 systemd[1]: Created slice kubepods-besteffort-pod9012398b_cd9e_4e40_be6b_a069caec21d3.slice - libcontainer container kubepods-besteffort-pod9012398b_cd9e_4e40_be6b_a069caec21d3.slice. 
Jun 21 05:29:02.478377 kubelet[2681]: I0621 05:29:02.475892 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f880db5-867a-4143-aed9-1a92c814b9e8-tigera-ca-bundle\") pod \"calico-kube-controllers-845486d6f4-4qppc\" (UID: \"6f880db5-867a-4143-aed9-1a92c814b9e8\") " pod="calico-system/calico-kube-controllers-845486d6f4-4qppc" Jun 21 05:29:02.478377 kubelet[2681]: I0621 05:29:02.475925 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561d093e-330b-43dd-ad62-1acd896769be-config\") pod \"goldmane-5bd85449d4-tkkdb\" (UID: \"561d093e-330b-43dd-ad62-1acd896769be\") " pod="calico-system/goldmane-5bd85449d4-tkkdb" Jun 21 05:29:02.478377 kubelet[2681]: I0621 05:29:02.475953 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc7m9\" (UniqueName: \"kubernetes.io/projected/561d093e-330b-43dd-ad62-1acd896769be-kube-api-access-kc7m9\") pod \"goldmane-5bd85449d4-tkkdb\" (UID: \"561d093e-330b-43dd-ad62-1acd896769be\") " pod="calico-system/goldmane-5bd85449d4-tkkdb" Jun 21 05:29:02.478377 kubelet[2681]: I0621 05:29:02.476027 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/41978ebe-aee9-4eb1-b675-daa95daac0a7-calico-apiserver-certs\") pod \"calico-apiserver-86d6895b5f-nmhc6\" (UID: \"41978ebe-aee9-4eb1-b675-daa95daac0a7\") " pod="calico-apiserver/calico-apiserver-86d6895b5f-nmhc6" Jun 21 05:29:02.478377 kubelet[2681]: I0621 05:29:02.476058 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2487498c-8136-4994-aca7-e0dddb1eb173-calico-apiserver-certs\") pod \"calico-apiserver-86d6895b5f-77tzw\" (UID: \"2487498c-8136-4994-aca7-e0dddb1eb173\") " pod="calico-apiserver/calico-apiserver-86d6895b5f-77tzw" Jun 21 05:29:02.479717 kubelet[2681]: I0621 05:29:02.476089 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/561d093e-330b-43dd-ad62-1acd896769be-goldmane-key-pair\") pod \"goldmane-5bd85449d4-tkkdb\" (UID: \"561d093e-330b-43dd-ad62-1acd896769be\") " pod="calico-system/goldmane-5bd85449d4-tkkdb" Jun 21 05:29:02.495762 systemd[1]: Created slice kubepods-besteffort-pod41978ebe_aee9_4eb1_b675_daa95daac0a7.slice - libcontainer container kubepods-besteffort-pod41978ebe_aee9_4eb1_b675_daa95daac0a7.slice. Jun 21 05:29:02.509112 systemd[1]: Created slice kubepods-besteffort-pod561d093e_330b_43dd_ad62_1acd896769be.slice - libcontainer container kubepods-besteffort-pod561d093e_330b_43dd_ad62_1acd896769be.slice. Jun 21 05:29:02.520505 systemd[1]: Created slice kubepods-besteffort-pod2487498c_8136_4994_aca7_e0dddb1eb173.slice - libcontainer container kubepods-besteffort-pod2487498c_8136_4994_aca7_e0dddb1eb173.slice. 
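Each VerifyControllerAttachedVolume entry above names a volume of the form kubernetes.io/secret|configmap|projected/<pod-uid>-<volume-name>, which corresponds to a volume declared in (or, for the kube-api-access-* projected service-account-token volumes, injected into) the owning pod's spec. The actual manifests are not part of this log; the following Go sketch, assuming the k8s.io/api module, merely illustrates how two of the whisker-7c5cfb5855-bfhp9 entries would look as core/v1 volumes:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative only: names reused from the reconciler entries above.
	volumes := []corev1.Volume{
		{
			Name: "whisker-backend-key-pair",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "whisker-backend-key-pair"},
			},
		},
		{
			Name: "whisker-ca-bundle",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "whisker-ca-bundle"},
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}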
Jun 21 05:29:02.721923 kubelet[2681]: E0621 05:29:02.721113 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:02.724578 containerd[1532]: time="2025-06-21T05:29:02.724495493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8lkqm,Uid:e103371f-c594-4fc8-ada3-19b59332d837,Namespace:kube-system,Attempt:0,}" Jun 21 05:29:02.748545 kubelet[2681]: E0621 05:29:02.747910 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:02.752396 containerd[1532]: time="2025-06-21T05:29:02.750612664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fnmnk,Uid:5c6287d7-4572-4ea1-b2d5-7d6f8762f244,Namespace:kube-system,Attempt:0,}" Jun 21 05:29:02.794384 containerd[1532]: time="2025-06-21T05:29:02.793334207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845486d6f4-4qppc,Uid:6f880db5-867a-4143-aed9-1a92c814b9e8,Namespace:calico-system,Attempt:0,}" Jun 21 05:29:02.829486 containerd[1532]: time="2025-06-21T05:29:02.828293921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-tkkdb,Uid:561d093e-330b-43dd-ad62-1acd896769be,Namespace:calico-system,Attempt:0,}" Jun 21 05:29:02.836394 containerd[1532]: time="2025-06-21T05:29:02.835808415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d6895b5f-nmhc6,Uid:41978ebe-aee9-4eb1-b675-daa95daac0a7,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:29:02.839723 containerd[1532]: time="2025-06-21T05:29:02.838718884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d6895b5f-77tzw,Uid:2487498c-8136-4994-aca7-e0dddb1eb173,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:29:03.132490 systemd[1]: Created slice kubepods-besteffort-pod47771cd6_42bc_4e44_9ac1_516f10966eb8.slice - libcontainer container kubepods-besteffort-pod47771cd6_42bc_4e44_9ac1_516f10966eb8.slice. 
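The kubepods-*-pod<uid>.slice units created above follow a visible pattern: the pod's QoS class (burstable or besteffort in this log) plus its UID with dashes replaced by underscores, e.g. pod UID 47771cd6-42bc-4e44-9ac1-516f10966eb8 becomes kubepods-besteffort-pod47771cd6_42bc_4e44_9ac1_516f10966eb8.slice. A small sketch of that mapping (illustrative, not the kubelet's code):

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming pattern visible in the log for the two QoS
// classes that appear here: kubepods-<qos>-pod<uid-with-underscores>.slice
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "47771cd6-42bc-4e44-9ac1-516f10966eb8"))
	// kubepods-besteffort-pod47771cd6_42bc_4e44_9ac1_516f10966eb8.slice
}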
Jun 21 05:29:03.142871 containerd[1532]: time="2025-06-21T05:29:03.142805804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d42wl,Uid:47771cd6-42bc-4e44-9ac1-516f10966eb8,Namespace:calico-system,Attempt:0,}" Jun 21 05:29:03.357024 containerd[1532]: time="2025-06-21T05:29:03.356885027Z" level=error msg="Failed to destroy network for sandbox \"8ee2c8e2750d7019089135d445c7983571e3555cb3a62c3373b55a10082c8e52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.358011 containerd[1532]: time="2025-06-21T05:29:03.357706356Z" level=error msg="Failed to destroy network for sandbox \"2d10fa4d741cf8863ee817dc782d01e46bdf207c84333822c6035b041ef28f67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.361662 containerd[1532]: time="2025-06-21T05:29:03.361610205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845486d6f4-4qppc,Uid:6f880db5-867a-4143-aed9-1a92c814b9e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d10fa4d741cf8863ee817dc782d01e46bdf207c84333822c6035b041ef28f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.363151 kubelet[2681]: E0621 05:29:03.362677 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d10fa4d741cf8863ee817dc782d01e46bdf207c84333822c6035b041ef28f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.363151 kubelet[2681]: E0621 05:29:03.362797 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d10fa4d741cf8863ee817dc782d01e46bdf207c84333822c6035b041ef28f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-845486d6f4-4qppc" Jun 21 05:29:03.363151 kubelet[2681]: E0621 05:29:03.362849 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d10fa4d741cf8863ee817dc782d01e46bdf207c84333822c6035b041ef28f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-845486d6f4-4qppc" Jun 21 05:29:03.363857 kubelet[2681]: E0621 05:29:03.362923 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-845486d6f4-4qppc_calico-system(6f880db5-867a-4143-aed9-1a92c814b9e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-845486d6f4-4qppc_calico-system(6f880db5-867a-4143-aed9-1a92c814b9e8)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"2d10fa4d741cf8863ee817dc782d01e46bdf207c84333822c6035b041ef28f67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-845486d6f4-4qppc" podUID="6f880db5-867a-4143-aed9-1a92c814b9e8" Jun 21 05:29:03.363248 systemd[1]: run-netns-cni\x2d94e4bf6e\x2d377d\x2d0ae1\x2d2160\x2d6ccf5eea078b.mount: Deactivated successfully. Jun 21 05:29:03.371424 systemd[1]: run-netns-cni\x2dce88216b\x2dc721\x2dd239\x2d84b1\x2df94873270b39.mount: Deactivated successfully. Jun 21 05:29:03.375268 containerd[1532]: time="2025-06-21T05:29:03.375123592Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d6895b5f-nmhc6,Uid:41978ebe-aee9-4eb1-b675-daa95daac0a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee2c8e2750d7019089135d445c7983571e3555cb3a62c3373b55a10082c8e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.378136 kubelet[2681]: E0621 05:29:03.378089 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee2c8e2750d7019089135d445c7983571e3555cb3a62c3373b55a10082c8e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.378136 kubelet[2681]: E0621 05:29:03.378148 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee2c8e2750d7019089135d445c7983571e3555cb3a62c3373b55a10082c8e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d6895b5f-nmhc6" Jun 21 05:29:03.378538 kubelet[2681]: E0621 05:29:03.378169 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee2c8e2750d7019089135d445c7983571e3555cb3a62c3373b55a10082c8e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d6895b5f-nmhc6" Jun 21 05:29:03.378538 kubelet[2681]: E0621 05:29:03.378242 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86d6895b5f-nmhc6_calico-apiserver(41978ebe-aee9-4eb1-b675-daa95daac0a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86d6895b5f-nmhc6_calico-apiserver(41978ebe-aee9-4eb1-b675-daa95daac0a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ee2c8e2750d7019089135d445c7983571e3555cb3a62c3373b55a10082c8e52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86d6895b5f-nmhc6" podUID="41978ebe-aee9-4eb1-b675-daa95daac0a7" Jun 21 05:29:03.402688 
containerd[1532]: time="2025-06-21T05:29:03.402323220Z" level=error msg="Failed to destroy network for sandbox \"d788db82a8b3987379462b8210b95208d9fb6fff0a487d5716698213c27e79a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.411315 containerd[1532]: time="2025-06-21T05:29:03.411251051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-tkkdb,Uid:561d093e-330b-43dd-ad62-1acd896769be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d788db82a8b3987379462b8210b95208d9fb6fff0a487d5716698213c27e79a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.413471 kubelet[2681]: E0621 05:29:03.411817 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d788db82a8b3987379462b8210b95208d9fb6fff0a487d5716698213c27e79a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.413471 kubelet[2681]: E0621 05:29:03.411891 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d788db82a8b3987379462b8210b95208d9fb6fff0a487d5716698213c27e79a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-tkkdb" Jun 21 05:29:03.413471 kubelet[2681]: E0621 05:29:03.411913 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d788db82a8b3987379462b8210b95208d9fb6fff0a487d5716698213c27e79a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-tkkdb" Jun 21 05:29:03.411902 systemd[1]: run-netns-cni\x2dae373510\x2d5ee7\x2dea25\x2d9548\x2d69e7cf26ff18.mount: Deactivated successfully. 
Jun 21 05:29:03.413867 kubelet[2681]: E0621 05:29:03.411956 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-tkkdb_calico-system(561d093e-330b-43dd-ad62-1acd896769be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-tkkdb_calico-system(561d093e-330b-43dd-ad62-1acd896769be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d788db82a8b3987379462b8210b95208d9fb6fff0a487d5716698213c27e79a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-tkkdb" podUID="561d093e-330b-43dd-ad62-1acd896769be" Jun 21 05:29:03.419181 containerd[1532]: time="2025-06-21T05:29:03.418887328Z" level=error msg="Failed to destroy network for sandbox \"31c998d2b6aee94d911921ea79fee9c33f837b4c4346a9ebe770696e5a511d6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.422053 containerd[1532]: time="2025-06-21T05:29:03.421990178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fnmnk,Uid:5c6287d7-4572-4ea1-b2d5-7d6f8762f244,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c998d2b6aee94d911921ea79fee9c33f837b4c4346a9ebe770696e5a511d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.422774 kubelet[2681]: E0621 05:29:03.422725 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c998d2b6aee94d911921ea79fee9c33f837b4c4346a9ebe770696e5a511d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.422928 kubelet[2681]: E0621 05:29:03.422795 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c998d2b6aee94d911921ea79fee9c33f837b4c4346a9ebe770696e5a511d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fnmnk" Jun 21 05:29:03.422928 kubelet[2681]: E0621 05:29:03.422855 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c998d2b6aee94d911921ea79fee9c33f837b4c4346a9ebe770696e5a511d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fnmnk" Jun 21 05:29:03.422928 kubelet[2681]: E0621 05:29:03.422909 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fnmnk_kube-system(5c6287d7-4572-4ea1-b2d5-7d6f8762f244)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-fnmnk_kube-system(5c6287d7-4572-4ea1-b2d5-7d6f8762f244)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31c998d2b6aee94d911921ea79fee9c33f837b4c4346a9ebe770696e5a511d6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fnmnk" podUID="5c6287d7-4572-4ea1-b2d5-7d6f8762f244" Jun 21 05:29:03.425244 containerd[1532]: time="2025-06-21T05:29:03.425173114Z" level=error msg="Failed to destroy network for sandbox \"2b74aff562c360ed5e3208ff5fb5491a423f44fde8d37da6bce30bd64ad22d56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.426504 containerd[1532]: time="2025-06-21T05:29:03.426426727Z" level=error msg="Failed to destroy network for sandbox \"6b147d9d8b68d2bc7b3874cd56e4e935adff300c92f54fca45637cfd758ea916\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.427811 containerd[1532]: time="2025-06-21T05:29:03.427604775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d6895b5f-77tzw,Uid:2487498c-8136-4994-aca7-e0dddb1eb173,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b74aff562c360ed5e3208ff5fb5491a423f44fde8d37da6bce30bd64ad22d56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.429661 kubelet[2681]: E0621 05:29:03.429553 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b74aff562c360ed5e3208ff5fb5491a423f44fde8d37da6bce30bd64ad22d56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.429844 containerd[1532]: time="2025-06-21T05:29:03.429807821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8lkqm,Uid:e103371f-c594-4fc8-ada3-19b59332d837,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b147d9d8b68d2bc7b3874cd56e4e935adff300c92f54fca45637cfd758ea916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.430128 kubelet[2681]: E0621 05:29:03.429988 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b74aff562c360ed5e3208ff5fb5491a423f44fde8d37da6bce30bd64ad22d56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d6895b5f-77tzw" Jun 21 05:29:03.430128 kubelet[2681]: E0621 05:29:03.430023 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"2b74aff562c360ed5e3208ff5fb5491a423f44fde8d37da6bce30bd64ad22d56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d6895b5f-77tzw" Jun 21 05:29:03.430128 kubelet[2681]: E0621 05:29:03.430080 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86d6895b5f-77tzw_calico-apiserver(2487498c-8136-4994-aca7-e0dddb1eb173)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86d6895b5f-77tzw_calico-apiserver(2487498c-8136-4994-aca7-e0dddb1eb173)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b74aff562c360ed5e3208ff5fb5491a423f44fde8d37da6bce30bd64ad22d56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86d6895b5f-77tzw" podUID="2487498c-8136-4994-aca7-e0dddb1eb173" Jun 21 05:29:03.431946 kubelet[2681]: E0621 05:29:03.431803 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b147d9d8b68d2bc7b3874cd56e4e935adff300c92f54fca45637cfd758ea916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.432025 kubelet[2681]: E0621 05:29:03.431959 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b147d9d8b68d2bc7b3874cd56e4e935adff300c92f54fca45637cfd758ea916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8lkqm" Jun 21 05:29:03.432097 kubelet[2681]: E0621 05:29:03.432078 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b147d9d8b68d2bc7b3874cd56e4e935adff300c92f54fca45637cfd758ea916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8lkqm" Jun 21 05:29:03.432211 kubelet[2681]: E0621 05:29:03.432131 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8lkqm_kube-system(e103371f-c594-4fc8-ada3-19b59332d837)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8lkqm_kube-system(e103371f-c594-4fc8-ada3-19b59332d837)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b147d9d8b68d2bc7b3874cd56e4e935adff300c92f54fca45637cfd758ea916\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8lkqm" podUID="e103371f-c594-4fc8-ada3-19b59332d837" Jun 21 05:29:03.467999 containerd[1532]: time="2025-06-21T05:29:03.467843901Z" level=error msg="Failed to destroy network for sandbox 
\"62a46d4f14759c014f69afcbcd28f90675d01e9a4a5600314168a45b08a014b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.469441 containerd[1532]: time="2025-06-21T05:29:03.469248546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d42wl,Uid:47771cd6-42bc-4e44-9ac1-516f10966eb8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62a46d4f14759c014f69afcbcd28f90675d01e9a4a5600314168a45b08a014b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.469915 kubelet[2681]: E0621 05:29:03.469562 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62a46d4f14759c014f69afcbcd28f90675d01e9a4a5600314168a45b08a014b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.469915 kubelet[2681]: E0621 05:29:03.469638 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62a46d4f14759c014f69afcbcd28f90675d01e9a4a5600314168a45b08a014b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d42wl" Jun 21 05:29:03.469915 kubelet[2681]: E0621 05:29:03.469662 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62a46d4f14759c014f69afcbcd28f90675d01e9a4a5600314168a45b08a014b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d42wl" Jun 21 05:29:03.470315 kubelet[2681]: E0621 05:29:03.469707 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d42wl_calico-system(47771cd6-42bc-4e44-9ac1-516f10966eb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d42wl_calico-system(47771cd6-42bc-4e44-9ac1-516f10966eb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62a46d4f14759c014f69afcbcd28f90675d01e9a4a5600314168a45b08a014b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d42wl" podUID="47771cd6-42bc-4e44-9ac1-516f10966eb8" Jun 21 05:29:03.687721 containerd[1532]: time="2025-06-21T05:29:03.687510705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c5cfb5855-bfhp9,Uid:9012398b-cd9e-4e40-be6b-a069caec21d3,Namespace:calico-system,Attempt:0,}" Jun 21 05:29:03.782066 containerd[1532]: time="2025-06-21T05:29:03.782013310Z" level=error msg="Failed to destroy network for sandbox \"096c7457e623db5da7f334ee14c5f52bb37b6404c78477f8f8db1386ab5e39d9\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.785091 containerd[1532]: time="2025-06-21T05:29:03.785030997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c5cfb5855-bfhp9,Uid:9012398b-cd9e-4e40-be6b-a069caec21d3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"096c7457e623db5da7f334ee14c5f52bb37b6404c78477f8f8db1386ab5e39d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.786323 kubelet[2681]: E0621 05:29:03.785351 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"096c7457e623db5da7f334ee14c5f52bb37b6404c78477f8f8db1386ab5e39d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:29:03.786323 kubelet[2681]: E0621 05:29:03.785485 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"096c7457e623db5da7f334ee14c5f52bb37b6404c78477f8f8db1386ab5e39d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c5cfb5855-bfhp9" Jun 21 05:29:03.786323 kubelet[2681]: E0621 05:29:03.785522 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"096c7457e623db5da7f334ee14c5f52bb37b6404c78477f8f8db1386ab5e39d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c5cfb5855-bfhp9" Jun 21 05:29:03.786661 kubelet[2681]: E0621 05:29:03.785604 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c5cfb5855-bfhp9_calico-system(9012398b-cd9e-4e40-be6b-a069caec21d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c5cfb5855-bfhp9_calico-system(9012398b-cd9e-4e40-be6b-a069caec21d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"096c7457e623db5da7f334ee14c5f52bb37b6404c78477f8f8db1386ab5e39d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c5cfb5855-bfhp9" podUID="9012398b-cd9e-4e40-be6b-a069caec21d3" Jun 21 05:29:04.245414 systemd[1]: run-netns-cni\x2d7b7c743f\x2d7551\x2d3664\x2db004\x2d8209e56c13e5.mount: Deactivated successfully. Jun 21 05:29:04.247219 systemd[1]: run-netns-cni\x2d581acb2f\x2dc1bd\x2df336\x2dd22d\x2de59163f9b87d.mount: Deactivated successfully. Jun 21 05:29:04.247424 systemd[1]: run-netns-cni\x2d8722e1aa\x2dbb11\x2d6fcb\x2df72b\x2d213729a46897.mount: Deactivated successfully. Jun 21 05:29:04.247711 systemd[1]: run-netns-cni\x2d28e2d367\x2d473e\x2dc201\x2d354d\x2d7f8fc637b5ef.mount: Deactivated successfully. 
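The sandbox add/delete failures above all trace back to the same preflight: the Calico CNI plugin reads /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico mounted from the host. Below is a minimal Go sketch of that check, written for illustration only — the path comes from the log, but the constant and function names are not Calico's actual source.

// Illustrative sketch of the nodename preflight behind the
// "plugin type=\"calico\" failed (add)" errors above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// Path referenced by the log entries; the constant name is illustrative.
const nodenamePath = "/var/lib/calico/nodename"

func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenamePath)
	if err != nil {
		// Mirrors the guidance embedded in the error text above.
		return "", fmt.Errorf("%s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenamePath, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}

Once calico-node finishes pulling and starts (see the image pull and StartContainer entries that follow), this file exists and the pending sandboxes can be retried successfully.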
Jun 21 05:29:10.703505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311344260.mount: Deactivated successfully. Jun 21 05:29:10.745732 containerd[1532]: time="2025-06-21T05:29:10.745405560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:10.746700 containerd[1532]: time="2025-06-21T05:29:10.746644259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 21 05:29:10.748281 containerd[1532]: time="2025-06-21T05:29:10.748218238Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:10.753833 containerd[1532]: time="2025-06-21T05:29:10.753740801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:10.757396 containerd[1532]: time="2025-06-21T05:29:10.756757133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 8.436516101s" Jun 21 05:29:10.757396 containerd[1532]: time="2025-06-21T05:29:10.756829192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 21 05:29:10.810204 containerd[1532]: time="2025-06-21T05:29:10.810156497Z" level=info msg="CreateContainer within sandbox \"62df5cce0b7ee90005b2da00485a2e351b05720f1e483445fbeb8c0a731866b3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 21 05:29:10.821612 containerd[1532]: time="2025-06-21T05:29:10.821560511Z" level=info msg="Container d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:10.838431 containerd[1532]: time="2025-06-21T05:29:10.838367995Z" level=info msg="CreateContainer within sandbox \"62df5cce0b7ee90005b2da00485a2e351b05720f1e483445fbeb8c0a731866b3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66\"" Jun 21 05:29:10.839588 containerd[1532]: time="2025-06-21T05:29:10.839544751Z" level=info msg="StartContainer for \"d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66\"" Jun 21 05:29:10.841422 containerd[1532]: time="2025-06-21T05:29:10.841301306Z" level=info msg="connecting to shim d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66" address="unix:///run/containerd/s/badc3383317c9640e9ad6efc38b3cc61b67acb5a31051f93eddc485c8243b7dd" protocol=ttrpc version=3 Jun 21 05:29:11.059951 systemd[1]: Started cri-containerd-d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66.scope - libcontainer container d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66. Jun 21 05:29:11.160023 containerd[1532]: time="2025-06-21T05:29:11.159966902Z" level=info msg="StartContainer for \"d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66\" returns successfully" Jun 21 05:29:11.296813 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jun 21 05:29:11.297251 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 21 05:29:11.431167 kubelet[2681]: I0621 05:29:11.429668 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kdwc5" podStartSLOduration=2.142467663 podStartE2EDuration="21.429626184s" podCreationTimestamp="2025-06-21 05:28:50 +0000 UTC" firstStartedPulling="2025-06-21 05:28:51.478624893 +0000 UTC m=+19.536765121" lastFinishedPulling="2025-06-21 05:29:10.765783407 +0000 UTC m=+38.823923642" observedRunningTime="2025-06-21 05:29:11.427315121 +0000 UTC m=+39.485455366" watchObservedRunningTime="2025-06-21 05:29:11.429626184 +0000 UTC m=+39.487766428" Jun 21 05:29:11.784867 kubelet[2681]: I0621 05:29:11.784725 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwkwr\" (UniqueName: \"kubernetes.io/projected/9012398b-cd9e-4e40-be6b-a069caec21d3-kube-api-access-nwkwr\") pod \"9012398b-cd9e-4e40-be6b-a069caec21d3\" (UID: \"9012398b-cd9e-4e40-be6b-a069caec21d3\") " Jun 21 05:29:11.784867 kubelet[2681]: I0621 05:29:11.784805 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9012398b-cd9e-4e40-be6b-a069caec21d3-whisker-ca-bundle\") pod \"9012398b-cd9e-4e40-be6b-a069caec21d3\" (UID: \"9012398b-cd9e-4e40-be6b-a069caec21d3\") " Jun 21 05:29:11.784867 kubelet[2681]: I0621 05:29:11.784842 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9012398b-cd9e-4e40-be6b-a069caec21d3-whisker-backend-key-pair\") pod \"9012398b-cd9e-4e40-be6b-a069caec21d3\" (UID: \"9012398b-cd9e-4e40-be6b-a069caec21d3\") " Jun 21 05:29:11.789936 kubelet[2681]: I0621 05:29:11.789777 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9012398b-cd9e-4e40-be6b-a069caec21d3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9012398b-cd9e-4e40-be6b-a069caec21d3" (UID: "9012398b-cd9e-4e40-be6b-a069caec21d3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 21 05:29:11.798557 kubelet[2681]: I0621 05:29:11.798482 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9012398b-cd9e-4e40-be6b-a069caec21d3-kube-api-access-nwkwr" (OuterVolumeSpecName: "kube-api-access-nwkwr") pod "9012398b-cd9e-4e40-be6b-a069caec21d3" (UID: "9012398b-cd9e-4e40-be6b-a069caec21d3"). InnerVolumeSpecName "kube-api-access-nwkwr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 21 05:29:11.801672 kubelet[2681]: I0621 05:29:11.801548 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9012398b-cd9e-4e40-be6b-a069caec21d3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9012398b-cd9e-4e40-be6b-a069caec21d3" (UID: "9012398b-cd9e-4e40-be6b-a069caec21d3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 21 05:29:11.802471 systemd[1]: var-lib-kubelet-pods-9012398b\x2dcd9e\x2d4e40\x2dbe6b\x2da069caec21d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnwkwr.mount: Deactivated successfully. 
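The pod_startup_latency_tracker entry above is internally consistent, assuming the usual semantics that podStartE2EDuration spans pod creation to observed running and podStartSLOduration excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check of the logged numbers:

package main

import "fmt"

func main() {
	// Offsets are the m=+ values from the log entry, in seconds.
	const (
		e2e          = 21.429626184 // podStartE2EDuration (creation -> observed running)
		pullStart    = 19.536765121 // firstStartedPulling
		pullFinished = 38.823923642 // lastFinishedPulling
	)
	slo := e2e - (pullFinished - pullStart)
	fmt.Printf("podStartSLOduration ~ %.9f s\n", slo) // ~2.142467663, matching the log
}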
Jun 21 05:29:11.812808 systemd[1]: var-lib-kubelet-pods-9012398b\x2dcd9e\x2d4e40\x2dbe6b\x2da069caec21d3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 21 05:29:11.854038 containerd[1532]: time="2025-06-21T05:29:11.853988819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66\" id:\"5cc11576a33a961a35a96e5063a8c1709ab89fb050209d3bc23fc0a8d2018815\" pid:3787 exit_status:1 exited_at:{seconds:1750483751 nanos:853534029}" Jun 21 05:29:11.886494 kubelet[2681]: I0621 05:29:11.886340 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9012398b-cd9e-4e40-be6b-a069caec21d3-whisker-backend-key-pair\") on node \"ci-4372.0.0-e-bb84d467cd\" DevicePath \"\"" Jun 21 05:29:11.887202 kubelet[2681]: I0621 05:29:11.887135 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9012398b-cd9e-4e40-be6b-a069caec21d3-whisker-ca-bundle\") on node \"ci-4372.0.0-e-bb84d467cd\" DevicePath \"\"" Jun 21 05:29:11.887202 kubelet[2681]: I0621 05:29:11.887174 2681 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nwkwr\" (UniqueName: \"kubernetes.io/projected/9012398b-cd9e-4e40-be6b-a069caec21d3-kube-api-access-nwkwr\") on node \"ci-4372.0.0-e-bb84d467cd\" DevicePath \"\"" Jun 21 05:29:12.128353 systemd[1]: Removed slice kubepods-besteffort-pod9012398b_cd9e_4e40_be6b_a069caec21d3.slice - libcontainer container kubepods-besteffort-pod9012398b_cd9e_4e40_be6b_a069caec21d3.slice. Jun 21 05:29:12.513607 systemd[1]: Created slice kubepods-besteffort-pod4948fac2_22f0_493c_a676_3841ce17bf60.slice - libcontainer container kubepods-besteffort-pod4948fac2_22f0_493c_a676_3841ce17bf60.slice. 
Jun 21 05:29:12.592880 kubelet[2681]: I0621 05:29:12.592810 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js57s\" (UniqueName: \"kubernetes.io/projected/4948fac2-22f0-493c-a676-3841ce17bf60-kube-api-access-js57s\") pod \"whisker-5d77cd896-njjgs\" (UID: \"4948fac2-22f0-493c-a676-3841ce17bf60\") " pod="calico-system/whisker-5d77cd896-njjgs" Jun 21 05:29:12.592880 kubelet[2681]: I0621 05:29:12.592895 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4948fac2-22f0-493c-a676-3841ce17bf60-whisker-backend-key-pair\") pod \"whisker-5d77cd896-njjgs\" (UID: \"4948fac2-22f0-493c-a676-3841ce17bf60\") " pod="calico-system/whisker-5d77cd896-njjgs" Jun 21 05:29:12.593520 kubelet[2681]: I0621 05:29:12.592932 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4948fac2-22f0-493c-a676-3841ce17bf60-whisker-ca-bundle\") pod \"whisker-5d77cd896-njjgs\" (UID: \"4948fac2-22f0-493c-a676-3841ce17bf60\") " pod="calico-system/whisker-5d77cd896-njjgs" Jun 21 05:29:12.742980 containerd[1532]: time="2025-06-21T05:29:12.742910265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66\" id:\"ba47f45b7da2cef031409b2a82b7188eb6829d7cef38d1fddb2503558843cca8\" pid:3827 exit_status:1 exited_at:{seconds:1750483752 nanos:741980140}" Jun 21 05:29:12.820092 containerd[1532]: time="2025-06-21T05:29:12.820021248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d77cd896-njjgs,Uid:4948fac2-22f0-493c-a676-3841ce17bf60,Namespace:calico-system,Attempt:0,}" Jun 21 05:29:13.222918 systemd-networkd[1454]: calibc66954c49f: Link UP Jun 21 05:29:13.225758 systemd-networkd[1454]: calibc66954c49f: Gained carrier Jun 21 05:29:13.273485 containerd[1532]: 2025-06-21 05:29:12.863 [INFO][3840] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:29:13.273485 containerd[1532]: 2025-06-21 05:29:12.902 [INFO][3840] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0 whisker-5d77cd896- calico-system 4948fac2-22f0-493c-a676-3841ce17bf60 898 0 2025-06-21 05:29:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d77cd896 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.0-e-bb84d467cd whisker-5d77cd896-njjgs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibc66954c49f [] [] }} ContainerID="4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" Namespace="calico-system" Pod="whisker-5d77cd896-njjgs" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-" Jun 21 05:29:13.273485 containerd[1532]: 2025-06-21 05:29:12.903 [INFO][3840] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" Namespace="calico-system" Pod="whisker-5d77cd896-njjgs" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0" Jun 21 05:29:13.273485 containerd[1532]: 2025-06-21 05:29:13.116 [INFO][3852] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" HandleID="k8s-pod-network.4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" Workload="ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0" Jun 21 05:29:13.274225 containerd[1532]: 2025-06-21 05:29:13.118 [INFO][3852] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" HandleID="k8s-pod-network.4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" Workload="ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000394580), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-e-bb84d467cd", "pod":"whisker-5d77cd896-njjgs", "timestamp":"2025-06-21 05:29:13.116917944 +0000 UTC"}, Hostname:"ci-4372.0.0-e-bb84d467cd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:29:13.274225 containerd[1532]: 2025-06-21 05:29:13.119 [INFO][3852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:29:13.274225 containerd[1532]: 2025-06-21 05:29:13.120 [INFO][3852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 05:29:13.274225 containerd[1532]: 2025-06-21 05:29:13.120 [INFO][3852] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-e-bb84d467cd' Jun 21 05:29:13.274225 containerd[1532]: 2025-06-21 05:29:13.146 [INFO][3852] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:13.274225 containerd[1532]: 2025-06-21 05:29:13.160 [INFO][3852] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:13.274225 containerd[1532]: 2025-06-21 05:29:13.167 [INFO][3852] ipam/ipam.go 511: Trying affinity for 192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:13.274225 containerd[1532]: 2025-06-21 05:29:13.171 [INFO][3852] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:13.274225 containerd[1532]: 2025-06-21 05:29:13.174 [INFO][3852] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:13.274671 containerd[1532]: 2025-06-21 05:29:13.174 [INFO][3852] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.64/26 handle="k8s-pod-network.4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:13.274671 containerd[1532]: 2025-06-21 05:29:13.176 [INFO][3852] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878 Jun 21 05:29:13.274671 containerd[1532]: 2025-06-21 05:29:13.183 [INFO][3852] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.64/26 handle="k8s-pod-network.4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:13.274671 containerd[1532]: 2025-06-21 05:29:13.201 [INFO][3852] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.65/26] block=192.168.46.64/26 handle="k8s-pod-network.4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" 
host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:13.274671 containerd[1532]: 2025-06-21 05:29:13.201 [INFO][3852] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.65/26] handle="k8s-pod-network.4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:13.274671 containerd[1532]: 2025-06-21 05:29:13.201 [INFO][3852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:29:13.274671 containerd[1532]: 2025-06-21 05:29:13.201 [INFO][3852] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.65/26] IPv6=[] ContainerID="4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" HandleID="k8s-pod-network.4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" Workload="ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0" Jun 21 05:29:13.274941 containerd[1532]: 2025-06-21 05:29:13.205 [INFO][3840] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" Namespace="calico-system" Pod="whisker-5d77cd896-njjgs" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0", GenerateName:"whisker-5d77cd896-", Namespace:"calico-system", SelfLink:"", UID:"4948fac2-22f0-493c-a676-3841ce17bf60", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d77cd896", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"", Pod:"whisker-5d77cd896-njjgs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.46.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc66954c49f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:13.274941 containerd[1532]: 2025-06-21 05:29:13.206 [INFO][3840] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.65/32] ContainerID="4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" Namespace="calico-system" Pod="whisker-5d77cd896-njjgs" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0" Jun 21 05:29:13.275111 containerd[1532]: 2025-06-21 05:29:13.206 [INFO][3840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc66954c49f ContainerID="4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" Namespace="calico-system" Pod="whisker-5d77cd896-njjgs" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0" Jun 21 05:29:13.275111 containerd[1532]: 2025-06-21 05:29:13.226 [INFO][3840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" Namespace="calico-system" Pod="whisker-5d77cd896-njjgs" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0" Jun 21 05:29:13.275202 containerd[1532]: 2025-06-21 05:29:13.229 [INFO][3840] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" Namespace="calico-system" Pod="whisker-5d77cd896-njjgs" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0", GenerateName:"whisker-5d77cd896-", Namespace:"calico-system", SelfLink:"", UID:"4948fac2-22f0-493c-a676-3841ce17bf60", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d77cd896", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878", Pod:"whisker-5d77cd896-njjgs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.46.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc66954c49f", MAC:"d2:42:82:ca:25:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:13.275292 containerd[1532]: 2025-06-21 05:29:13.265 [INFO][3840] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" Namespace="calico-system" Pod="whisker-5d77cd896-njjgs" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-whisker--5d77cd896--njjgs-eth0" Jun 21 05:29:13.519857 containerd[1532]: time="2025-06-21T05:29:13.519668494Z" level=info msg="connecting to shim 4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878" address="unix:///run/containerd/s/433dca2ea8eb419f11e997e1703dcfa291f762c63d439a43176f55a8a1a1a60b" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:13.602663 systemd[1]: Started cri-containerd-4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878.scope - libcontainer container 4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878. 
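The ipam/ lines above walk a block-affinity allocation: take the host-wide IPAM lock, confirm the host's affinity for block 192.168.46.64/26, load the block, claim the next free address, and write the block back — the whisker pod gets 192.168.46.65, and the csi-node-driver and coredns pods further below get .66 and .67. A rough Go sketch of that allocation order follows; the in-memory block and all names are illustrative (real Calico IPAM persists blocks in the datastore), and the pre-reserved .64 address standing in for the node's tunnel address is an assumption made so the output starts at .65 as in the log.

package main

import (
	"fmt"
	"net"
)

// block is an illustrative stand-in for a Calico IPAM block.
type block struct {
	cidr      *net.IPNet
	allocated map[string]string // IP -> handle
}

// nextIP returns a copy of ip incremented by one.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// assign claims the first unallocated address in the block for handle.
func (b *block) assign(handle string) (net.IP, error) {
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
		if _, used := b.allocated[ip.String()]; !used {
			b.allocated[ip.String()] = handle
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.46.64/26")
	b := &block{cidr: cidr, allocated: map[string]string{
		// Assumption: .64 is already taken (e.g. the node's vxlan.calico
		// tunnel address), which is why the first pod gets .65.
		"192.168.46.64": "node-tunnel-addr",
	}}
	for _, h := range []string{"whisker-5d77cd896-njjgs", "csi-node-driver-d42wl", "coredns-668d6bf9bc-fnmnk"} {
		ip, _ := b.assign(h)
		fmt.Println(h, "->", ip) // .65, .66, .67 as in the log
	}
}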
Jun 21 05:29:13.741219 containerd[1532]: time="2025-06-21T05:29:13.741173468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d77cd896-njjgs,Uid:4948fac2-22f0-493c-a676-3841ce17bf60,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878\"" Jun 21 05:29:13.744913 containerd[1532]: time="2025-06-21T05:29:13.744705576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 21 05:29:13.879307 containerd[1532]: time="2025-06-21T05:29:13.879240685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66\" id:\"c6628794c1e21cc817b5bd9c5e27cde960eac7a85eff258677e69e2c8dc45c22\" pid:3968 exit_status:1 exited_at:{seconds:1750483753 nanos:878386986}" Jun 21 05:29:14.119516 kubelet[2681]: E0621 05:29:14.118990 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:14.121210 containerd[1532]: time="2025-06-21T05:29:14.120848863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d42wl,Uid:47771cd6-42bc-4e44-9ac1-516f10966eb8,Namespace:calico-system,Attempt:0,}" Jun 21 05:29:14.137384 kubelet[2681]: I0621 05:29:14.137250 2681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9012398b-cd9e-4e40-be6b-a069caec21d3" path="/var/lib/kubelet/pods/9012398b-cd9e-4e40-be6b-a069caec21d3/volumes" Jun 21 05:29:14.143044 containerd[1532]: time="2025-06-21T05:29:14.121739787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fnmnk,Uid:5c6287d7-4572-4ea1-b2d5-7d6f8762f244,Namespace:kube-system,Attempt:0,}" Jun 21 05:29:14.410722 systemd-networkd[1454]: calia0f8be99147: Link UP Jun 21 05:29:14.413877 systemd-networkd[1454]: calia0f8be99147: Gained carrier Jun 21 05:29:14.451778 systemd-networkd[1454]: vxlan.calico: Link UP Jun 21 05:29:14.451787 systemd-networkd[1454]: vxlan.calico: Gained carrier Jun 21 05:29:14.464104 containerd[1532]: 2025-06-21 05:29:14.234 [INFO][4062] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0 csi-node-driver- calico-system 47771cd6-42bc-4e44-9ac1-516f10966eb8 705 0 2025-06-21 05:28:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.0-e-bb84d467cd csi-node-driver-d42wl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia0f8be99147 [] [] }} ContainerID="91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" Namespace="calico-system" Pod="csi-node-driver-d42wl" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-" Jun 21 05:29:14.464104 containerd[1532]: 2025-06-21 05:29:14.234 [INFO][4062] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" Namespace="calico-system" Pod="csi-node-driver-d42wl" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0" Jun 21 05:29:14.464104 containerd[1532]: 2025-06-21 05:29:14.309 [INFO][4092] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" HandleID="k8s-pod-network.91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" Workload="ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0" Jun 21 05:29:14.465217 containerd[1532]: 2025-06-21 05:29:14.310 [INFO][4092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" HandleID="k8s-pod-network.91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" Workload="ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-e-bb84d467cd", "pod":"csi-node-driver-d42wl", "timestamp":"2025-06-21 05:29:14.309697951 +0000 UTC"}, Hostname:"ci-4372.0.0-e-bb84d467cd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:29:14.465217 containerd[1532]: 2025-06-21 05:29:14.310 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:29:14.465217 containerd[1532]: 2025-06-21 05:29:14.311 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 05:29:14.465217 containerd[1532]: 2025-06-21 05:29:14.311 [INFO][4092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-e-bb84d467cd' Jun 21 05:29:14.465217 containerd[1532]: 2025-06-21 05:29:14.327 [INFO][4092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.465217 containerd[1532]: 2025-06-21 05:29:14.337 [INFO][4092] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.465217 containerd[1532]: 2025-06-21 05:29:14.348 [INFO][4092] ipam/ipam.go 511: Trying affinity for 192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.465217 containerd[1532]: 2025-06-21 05:29:14.354 [INFO][4092] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.465217 containerd[1532]: 2025-06-21 05:29:14.359 [INFO][4092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.465642 containerd[1532]: 2025-06-21 05:29:14.360 [INFO][4092] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.64/26 handle="k8s-pod-network.91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.465642 containerd[1532]: 2025-06-21 05:29:14.362 [INFO][4092] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8 Jun 21 05:29:14.465642 containerd[1532]: 2025-06-21 05:29:14.369 [INFO][4092] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.64/26 handle="k8s-pod-network.91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.465642 containerd[1532]: 2025-06-21 05:29:14.381 [INFO][4092] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.66/26] block=192.168.46.64/26 
handle="k8s-pod-network.91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.465642 containerd[1532]: 2025-06-21 05:29:14.381 [INFO][4092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.66/26] handle="k8s-pod-network.91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.465642 containerd[1532]: 2025-06-21 05:29:14.382 [INFO][4092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:29:14.465642 containerd[1532]: 2025-06-21 05:29:14.382 [INFO][4092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.66/26] IPv6=[] ContainerID="91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" HandleID="k8s-pod-network.91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" Workload="ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0" Jun 21 05:29:14.467437 containerd[1532]: 2025-06-21 05:29:14.394 [INFO][4062] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" Namespace="calico-system" Pod="csi-node-driver-d42wl" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47771cd6-42bc-4e44-9ac1-516f10966eb8", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"", Pod:"csi-node-driver-d42wl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0f8be99147", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:14.467614 containerd[1532]: 2025-06-21 05:29:14.395 [INFO][4062] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.66/32] ContainerID="91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" Namespace="calico-system" Pod="csi-node-driver-d42wl" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0" Jun 21 05:29:14.467614 containerd[1532]: 2025-06-21 05:29:14.398 [INFO][4062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0f8be99147 ContainerID="91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" Namespace="calico-system" Pod="csi-node-driver-d42wl" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0" Jun 21 05:29:14.467614 
containerd[1532]: 2025-06-21 05:29:14.415 [INFO][4062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" Namespace="calico-system" Pod="csi-node-driver-d42wl" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0" Jun 21 05:29:14.467740 containerd[1532]: 2025-06-21 05:29:14.421 [INFO][4062] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" Namespace="calico-system" Pod="csi-node-driver-d42wl" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47771cd6-42bc-4e44-9ac1-516f10966eb8", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8", Pod:"csi-node-driver-d42wl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0f8be99147", MAC:"0e:13:6e:04:af:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:14.468163 containerd[1532]: 2025-06-21 05:29:14.454 [INFO][4062] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" Namespace="calico-system" Pod="csi-node-driver-d42wl" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-csi--node--driver--d42wl-eth0" Jun 21 05:29:14.553381 containerd[1532]: time="2025-06-21T05:29:14.547571612Z" level=info msg="connecting to shim 91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8" address="unix:///run/containerd/s/ebaec65352a48b15ba9d42a151a2d413d6132dccbf142437e8b4fc91c8f9c542" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:14.568419 systemd-networkd[1454]: calif5ac0328fcd: Link UP Jun 21 05:29:14.571880 systemd-networkd[1454]: calif5ac0328fcd: Gained carrier Jun 21 05:29:14.620482 containerd[1532]: 2025-06-21 05:29:14.239 [INFO][4071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0 coredns-668d6bf9bc- kube-system 5c6287d7-4572-4ea1-b2d5-7d6f8762f244 824 0 2025-06-21 05:28:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.0-e-bb84d467cd coredns-668d6bf9bc-fnmnk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif5ac0328fcd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" Namespace="kube-system" Pod="coredns-668d6bf9bc-fnmnk" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-" Jun 21 05:29:14.620482 containerd[1532]: 2025-06-21 05:29:14.240 [INFO][4071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" Namespace="kube-system" Pod="coredns-668d6bf9bc-fnmnk" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0" Jun 21 05:29:14.620482 containerd[1532]: 2025-06-21 05:29:14.321 [INFO][4103] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" HandleID="k8s-pod-network.7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" Workload="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0" Jun 21 05:29:14.620750 containerd[1532]: 2025-06-21 05:29:14.323 [INFO][4103] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" HandleID="k8s-pod-network.7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" Workload="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cdce0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.0-e-bb84d467cd", "pod":"coredns-668d6bf9bc-fnmnk", "timestamp":"2025-06-21 05:29:14.321236459 +0000 UTC"}, Hostname:"ci-4372.0.0-e-bb84d467cd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:29:14.620750 containerd[1532]: 2025-06-21 05:29:14.323 [INFO][4103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:29:14.620750 containerd[1532]: 2025-06-21 05:29:14.382 [INFO][4103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:29:14.620750 containerd[1532]: 2025-06-21 05:29:14.382 [INFO][4103] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-e-bb84d467cd' Jun 21 05:29:14.620750 containerd[1532]: 2025-06-21 05:29:14.431 [INFO][4103] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.620750 containerd[1532]: 2025-06-21 05:29:14.453 [INFO][4103] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.620750 containerd[1532]: 2025-06-21 05:29:14.476 [INFO][4103] ipam/ipam.go 511: Trying affinity for 192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.620750 containerd[1532]: 2025-06-21 05:29:14.486 [INFO][4103] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.620750 containerd[1532]: 2025-06-21 05:29:14.492 [INFO][4103] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.620979 containerd[1532]: 2025-06-21 05:29:14.492 [INFO][4103] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.64/26 handle="k8s-pod-network.7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.620979 containerd[1532]: 2025-06-21 05:29:14.506 [INFO][4103] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043 Jun 21 05:29:14.620979 containerd[1532]: 2025-06-21 05:29:14.521 [INFO][4103] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.64/26 handle="k8s-pod-network.7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.620979 containerd[1532]: 2025-06-21 05:29:14.540 [INFO][4103] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.67/26] block=192.168.46.64/26 handle="k8s-pod-network.7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.620979 containerd[1532]: 2025-06-21 05:29:14.540 [INFO][4103] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.67/26] handle="k8s-pod-network.7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:14.620979 containerd[1532]: 2025-06-21 05:29:14.540 [INFO][4103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 05:29:14.620979 containerd[1532]: 2025-06-21 05:29:14.540 [INFO][4103] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.67/26] IPv6=[] ContainerID="7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" HandleID="k8s-pod-network.7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" Workload="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0" Jun 21 05:29:14.621191 containerd[1532]: 2025-06-21 05:29:14.562 [INFO][4071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" Namespace="kube-system" Pod="coredns-668d6bf9bc-fnmnk" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5c6287d7-4572-4ea1-b2d5-7d6f8762f244", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"", Pod:"coredns-668d6bf9bc-fnmnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5ac0328fcd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:14.621191 containerd[1532]: 2025-06-21 05:29:14.563 [INFO][4071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.67/32] ContainerID="7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" Namespace="kube-system" Pod="coredns-668d6bf9bc-fnmnk" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0" Jun 21 05:29:14.621191 containerd[1532]: 2025-06-21 05:29:14.563 [INFO][4071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5ac0328fcd ContainerID="7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" Namespace="kube-system" Pod="coredns-668d6bf9bc-fnmnk" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0" Jun 21 05:29:14.621191 containerd[1532]: 2025-06-21 05:29:14.571 [INFO][4071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-fnmnk" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0" Jun 21 05:29:14.621191 containerd[1532]: 2025-06-21 05:29:14.572 [INFO][4071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" Namespace="kube-system" Pod="coredns-668d6bf9bc-fnmnk" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5c6287d7-4572-4ea1-b2d5-7d6f8762f244", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043", Pod:"coredns-668d6bf9bc-fnmnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5ac0328fcd", MAC:"16:d9:76:08:98:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:14.621191 containerd[1532]: 2025-06-21 05:29:14.612 [INFO][4071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" Namespace="kube-system" Pod="coredns-668d6bf9bc-fnmnk" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--fnmnk-eth0" Jun 21 05:29:14.622910 systemd[1]: Started cri-containerd-91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8.scope - libcontainer container 91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8. 
Jun 21 05:29:14.662856 containerd[1532]: time="2025-06-21T05:29:14.662707202Z" level=info msg="connecting to shim 7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043" address="unix:///run/containerd/s/fcb72486cdfa3f0bb80050625652a71401b0d8bd4b5cdfccf345c59629ffc2d7" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:14.719628 containerd[1532]: time="2025-06-21T05:29:14.719584270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d42wl,Uid:47771cd6-42bc-4e44-9ac1-516f10966eb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8\"" Jun 21 05:29:14.740775 systemd[1]: Started cri-containerd-7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043.scope - libcontainer container 7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043. Jun 21 05:29:14.826368 containerd[1532]: time="2025-06-21T05:29:14.825756168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fnmnk,Uid:5c6287d7-4572-4ea1-b2d5-7d6f8762f244,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043\"" Jun 21 05:29:14.831549 kubelet[2681]: E0621 05:29:14.831503 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:14.837827 containerd[1532]: time="2025-06-21T05:29:14.837783935Z" level=info msg="CreateContainer within sandbox \"7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 05:29:14.859649 containerd[1532]: time="2025-06-21T05:29:14.859578607Z" level=info msg="Container 128910b4c3d9af9360bcc40674cb6c9ddbf8cf3c35d58ee69d973d76c1b8784a: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:14.872827 containerd[1532]: time="2025-06-21T05:29:14.872777363Z" level=info msg="CreateContainer within sandbox \"7cff3955663aed08c472834760b2c148bdc695eeaf8d2903a49613272d5e2043\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"128910b4c3d9af9360bcc40674cb6c9ddbf8cf3c35d58ee69d973d76c1b8784a\"" Jun 21 05:29:14.873751 containerd[1532]: time="2025-06-21T05:29:14.873721006Z" level=info msg="StartContainer for \"128910b4c3d9af9360bcc40674cb6c9ddbf8cf3c35d58ee69d973d76c1b8784a\"" Jun 21 05:29:14.880489 containerd[1532]: time="2025-06-21T05:29:14.880407525Z" level=info msg="connecting to shim 128910b4c3d9af9360bcc40674cb6c9ddbf8cf3c35d58ee69d973d76c1b8784a" address="unix:///run/containerd/s/fcb72486cdfa3f0bb80050625652a71401b0d8bd4b5cdfccf345c59629ffc2d7" protocol=ttrpc version=3 Jun 21 05:29:14.924046 systemd[1]: Started cri-containerd-128910b4c3d9af9360bcc40674cb6c9ddbf8cf3c35d58ee69d973d76c1b8784a.scope - libcontainer container 128910b4c3d9af9360bcc40674cb6c9ddbf8cf3c35d58ee69d973d76c1b8784a. 
Jun 21 05:29:15.007659 containerd[1532]: time="2025-06-21T05:29:15.007447724Z" level=info msg="StartContainer for \"128910b4c3d9af9360bcc40674cb6c9ddbf8cf3c35d58ee69d973d76c1b8784a\" returns successfully" Jun 21 05:29:15.074710 systemd-networkd[1454]: calibc66954c49f: Gained IPv6LL Jun 21 05:29:15.381971 containerd[1532]: time="2025-06-21T05:29:15.381902283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:15.383068 containerd[1532]: time="2025-06-21T05:29:15.382984589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 21 05:29:15.383863 containerd[1532]: time="2025-06-21T05:29:15.383791166Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:15.386193 containerd[1532]: time="2025-06-21T05:29:15.386091264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:15.387558 containerd[1532]: time="2025-06-21T05:29:15.387243834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 1.641813489s" Jun 21 05:29:15.387558 containerd[1532]: time="2025-06-21T05:29:15.387300893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 21 05:29:15.389893 containerd[1532]: time="2025-06-21T05:29:15.389864503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 21 05:29:15.394317 containerd[1532]: time="2025-06-21T05:29:15.394230072Z" level=info msg="CreateContainer within sandbox \"4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 21 05:29:15.404698 containerd[1532]: time="2025-06-21T05:29:15.404632948Z" level=info msg="Container c4faf5a5eca736177b06484edfc7afb40772867211a7db1a40567847641e2170: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:15.426371 containerd[1532]: time="2025-06-21T05:29:15.426313522Z" level=info msg="CreateContainer within sandbox \"4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c4faf5a5eca736177b06484edfc7afb40772867211a7db1a40567847641e2170\"" Jun 21 05:29:15.427722 containerd[1532]: time="2025-06-21T05:29:15.427329165Z" level=info msg="StartContainer for \"c4faf5a5eca736177b06484edfc7afb40772867211a7db1a40567847641e2170\"" Jun 21 05:29:15.429657 kubelet[2681]: E0621 05:29:15.429601 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:15.432528 containerd[1532]: time="2025-06-21T05:29:15.432487883Z" level=info msg="connecting to shim c4faf5a5eca736177b06484edfc7afb40772867211a7db1a40567847641e2170" 
address="unix:///run/containerd/s/433dca2ea8eb419f11e997e1703dcfa291f762c63d439a43176f55a8a1a1a60b" protocol=ttrpc version=3 Jun 21 05:29:15.499430 kubelet[2681]: I0621 05:29:15.499302 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fnmnk" podStartSLOduration=39.49922789 podStartE2EDuration="39.49922789s" podCreationTimestamp="2025-06-21 05:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:29:15.462440355 +0000 UTC m=+43.520580596" watchObservedRunningTime="2025-06-21 05:29:15.49922789 +0000 UTC m=+43.557368137" Jun 21 05:29:15.501277 systemd[1]: Started cri-containerd-c4faf5a5eca736177b06484edfc7afb40772867211a7db1a40567847641e2170.scope - libcontainer container c4faf5a5eca736177b06484edfc7afb40772867211a7db1a40567847641e2170. Jun 21 05:29:15.586901 systemd-networkd[1454]: calia0f8be99147: Gained IPv6LL Jun 21 05:29:15.616857 containerd[1532]: time="2025-06-21T05:29:15.616804964Z" level=info msg="StartContainer for \"c4faf5a5eca736177b06484edfc7afb40772867211a7db1a40567847641e2170\" returns successfully" Jun 21 05:29:16.119393 containerd[1532]: time="2025-06-21T05:29:16.119274007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d6895b5f-77tzw,Uid:2487498c-8136-4994-aca7-e0dddb1eb173,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:29:16.278818 systemd-networkd[1454]: caliac736a88e08: Link UP Jun 21 05:29:16.279794 systemd-networkd[1454]: caliac736a88e08: Gained carrier Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.178 [INFO][4347] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0 calico-apiserver-86d6895b5f- calico-apiserver 2487498c-8136-4994-aca7-e0dddb1eb173 827 0 2025-06-21 05:28:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86d6895b5f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.0-e-bb84d467cd calico-apiserver-86d6895b5f-77tzw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliac736a88e08 [] [] }} ContainerID="0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-77tzw" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.178 [INFO][4347] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-77tzw" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.222 [INFO][4359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" HandleID="k8s-pod-network.0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" Workload="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.222 [INFO][4359] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" HandleID="k8s-pod-network.0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" Workload="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.0-e-bb84d467cd", "pod":"calico-apiserver-86d6895b5f-77tzw", "timestamp":"2025-06-21 05:29:16.22213226 +0000 UTC"}, Hostname:"ci-4372.0.0-e-bb84d467cd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.222 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.222 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.222 [INFO][4359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-e-bb84d467cd' Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.233 [INFO][4359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.240 [INFO][4359] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.247 [INFO][4359] ipam/ipam.go 511: Trying affinity for 192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.250 [INFO][4359] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.254 [INFO][4359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.254 [INFO][4359] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.64/26 handle="k8s-pod-network.0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.257 [INFO][4359] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.263 [INFO][4359] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.64/26 handle="k8s-pod-network.0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.271 [INFO][4359] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.68/26] block=192.168.46.64/26 handle="k8s-pod-network.0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.271 [INFO][4359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.68/26] handle="k8s-pod-network.0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.272 
[INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:29:16.309043 containerd[1532]: 2025-06-21 05:29:16.272 [INFO][4359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.68/26] IPv6=[] ContainerID="0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" HandleID="k8s-pod-network.0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" Workload="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0" Jun 21 05:29:16.311193 containerd[1532]: 2025-06-21 05:29:16.274 [INFO][4347] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-77tzw" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0", GenerateName:"calico-apiserver-86d6895b5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"2487498c-8136-4994-aca7-e0dddb1eb173", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86d6895b5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"", Pod:"calico-apiserver-86d6895b5f-77tzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac736a88e08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:16.311193 containerd[1532]: 2025-06-21 05:29:16.275 [INFO][4347] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.68/32] ContainerID="0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-77tzw" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0" Jun 21 05:29:16.311193 containerd[1532]: 2025-06-21 05:29:16.275 [INFO][4347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac736a88e08 ContainerID="0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-77tzw" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0" Jun 21 05:29:16.311193 containerd[1532]: 2025-06-21 05:29:16.282 [INFO][4347] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-77tzw" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0" Jun 21 
05:29:16.311193 containerd[1532]: 2025-06-21 05:29:16.283 [INFO][4347] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-77tzw" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0", GenerateName:"calico-apiserver-86d6895b5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"2487498c-8136-4994-aca7-e0dddb1eb173", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86d6895b5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f", Pod:"calico-apiserver-86d6895b5f-77tzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac736a88e08", MAC:"6e:f7:d0:9b:e3:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:16.311193 containerd[1532]: 2025-06-21 05:29:16.303 [INFO][4347] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-77tzw" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--77tzw-eth0" Jun 21 05:29:16.338961 containerd[1532]: time="2025-06-21T05:29:16.338892155Z" level=info msg="connecting to shim 0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f" address="unix:///run/containerd/s/9244cd47771a9fe2d6c72615aaed140c6d39a080fe283914ff9f424bc8eb1709" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:16.375797 systemd[1]: Started cri-containerd-0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f.scope - libcontainer container 0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f. 
Jun 21 05:29:16.418708 systemd-networkd[1454]: vxlan.calico: Gained IPv6LL Jun 21 05:29:16.435860 kubelet[2681]: E0621 05:29:16.435824 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:16.464323 containerd[1532]: time="2025-06-21T05:29:16.464270602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d6895b5f-77tzw,Uid:2487498c-8136-4994-aca7-e0dddb1eb173,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f\"" Jun 21 05:29:16.548556 systemd-networkd[1454]: calif5ac0328fcd: Gained IPv6LL Jun 21 05:29:17.036584 containerd[1532]: time="2025-06-21T05:29:17.035901539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:17.038050 containerd[1532]: time="2025-06-21T05:29:17.038003321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 21 05:29:17.039026 containerd[1532]: time="2025-06-21T05:29:17.038953385Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:17.042508 containerd[1532]: time="2025-06-21T05:29:17.042301805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:17.043322 containerd[1532]: time="2025-06-21T05:29:17.043183251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 1.652655784s" Jun 21 05:29:17.043322 containerd[1532]: time="2025-06-21T05:29:17.043221076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" Jun 21 05:29:17.045779 containerd[1532]: time="2025-06-21T05:29:17.045548636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 21 05:29:17.048013 containerd[1532]: time="2025-06-21T05:29:17.047918598Z" level=info msg="CreateContainer within sandbox \"91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 21 05:29:17.102857 containerd[1532]: time="2025-06-21T05:29:17.101726457Z" level=info msg="Container beb78b7cc62b8caa1a313bb8e2fd5692226d901de1d089b873efc58d58854e70: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:17.121141 containerd[1532]: time="2025-06-21T05:29:17.121079101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-tkkdb,Uid:561d093e-330b-43dd-ad62-1acd896769be,Namespace:calico-system,Attempt:0,}" Jun 21 05:29:17.121468 kubelet[2681]: E0621 05:29:17.121098 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:17.122086 containerd[1532]: 
time="2025-06-21T05:29:17.121837598Z" level=info msg="CreateContainer within sandbox \"91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"beb78b7cc62b8caa1a313bb8e2fd5692226d901de1d089b873efc58d58854e70\"" Jun 21 05:29:17.122215 containerd[1532]: time="2025-06-21T05:29:17.122185985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d6895b5f-nmhc6,Uid:41978ebe-aee9-4eb1-b675-daa95daac0a7,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:29:17.124743 containerd[1532]: time="2025-06-21T05:29:17.124687262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8lkqm,Uid:e103371f-c594-4fc8-ada3-19b59332d837,Namespace:kube-system,Attempt:0,}" Jun 21 05:29:17.126120 containerd[1532]: time="2025-06-21T05:29:17.126073722Z" level=info msg="StartContainer for \"beb78b7cc62b8caa1a313bb8e2fd5692226d901de1d089b873efc58d58854e70\"" Jun 21 05:29:17.144491 containerd[1532]: time="2025-06-21T05:29:17.140288576Z" level=info msg="connecting to shim beb78b7cc62b8caa1a313bb8e2fd5692226d901de1d089b873efc58d58854e70" address="unix:///run/containerd/s/ebaec65352a48b15ba9d42a151a2d413d6132dccbf142437e8b4fc91c8f9c542" protocol=ttrpc version=3 Jun 21 05:29:17.226822 systemd[1]: Started cri-containerd-beb78b7cc62b8caa1a313bb8e2fd5692226d901de1d089b873efc58d58854e70.scope - libcontainer container beb78b7cc62b8caa1a313bb8e2fd5692226d901de1d089b873efc58d58854e70. Jun 21 05:29:17.379637 systemd-networkd[1454]: caliac736a88e08: Gained IPv6LL Jun 21 05:29:17.485355 kubelet[2681]: E0621 05:29:17.484548 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:17.509710 containerd[1532]: time="2025-06-21T05:29:17.509644971Z" level=info msg="StartContainer for \"beb78b7cc62b8caa1a313bb8e2fd5692226d901de1d089b873efc58d58854e70\" returns successfully" Jun 21 05:29:17.640274 systemd-networkd[1454]: calica485d4b023: Link UP Jun 21 05:29:17.643628 systemd-networkd[1454]: calica485d4b023: Gained carrier Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.428 [INFO][4465] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0 coredns-668d6bf9bc- kube-system e103371f-c594-4fc8-ada3-19b59332d837 820 0 2025-06-21 05:28:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.0-e-bb84d467cd coredns-668d6bf9bc-8lkqm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calica485d4b023 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lkqm" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.430 [INFO][4465] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lkqm" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.552 
[INFO][4491] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" HandleID="k8s-pod-network.cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" Workload="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.552 [INFO][4491] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" HandleID="k8s-pod-network.cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" Workload="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000337880), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.0-e-bb84d467cd", "pod":"coredns-668d6bf9bc-8lkqm", "timestamp":"2025-06-21 05:29:17.552645554 +0000 UTC"}, Hostname:"ci-4372.0.0-e-bb84d467cd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.552 [INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.553 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.553 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-e-bb84d467cd' Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.564 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.576 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.591 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.595 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.601 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.601 [INFO][4491] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.64/26 handle="k8s-pod-network.cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.608 [INFO][4491] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.615 [INFO][4491] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.64/26 handle="k8s-pod-network.cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.628 [INFO][4491] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.69/26] block=192.168.46.64/26 
handle="k8s-pod-network.cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.628 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.69/26] handle="k8s-pod-network.cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.628 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:29:17.680478 containerd[1532]: 2025-06-21 05:29:17.628 [INFO][4491] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.69/26] IPv6=[] ContainerID="cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" HandleID="k8s-pod-network.cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" Workload="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0" Jun 21 05:29:17.681414 containerd[1532]: 2025-06-21 05:29:17.634 [INFO][4465] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lkqm" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e103371f-c594-4fc8-ada3-19b59332d837", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"", Pod:"coredns-668d6bf9bc-8lkqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica485d4b023", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:17.681414 containerd[1532]: 2025-06-21 05:29:17.634 [INFO][4465] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.69/32] ContainerID="cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lkqm" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0" Jun 21 05:29:17.681414 containerd[1532]: 2025-06-21 05:29:17.635 [INFO][4465] cni-plugin/dataplane_linux.go 69: Setting the 
host side veth name to calica485d4b023 ContainerID="cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lkqm" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0" Jun 21 05:29:17.681414 containerd[1532]: 2025-06-21 05:29:17.644 [INFO][4465] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lkqm" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0" Jun 21 05:29:17.681414 containerd[1532]: 2025-06-21 05:29:17.645 [INFO][4465] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lkqm" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e103371f-c594-4fc8-ada3-19b59332d837", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf", Pod:"coredns-668d6bf9bc-8lkqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica485d4b023", MAC:"72:53:f9:07:53:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:17.681414 containerd[1532]: 2025-06-21 05:29:17.674 [INFO][4465] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lkqm" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-coredns--668d6bf9bc--8lkqm-eth0" Jun 21 05:29:17.745568 containerd[1532]: time="2025-06-21T05:29:17.745423164Z" level=info msg="connecting to shim cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf" address="unix:///run/containerd/s/1d624d8d335265ab84a6437c78ffeed8ad1aecf7e38b583046963bb0f59e6bba" namespace=k8s.io protocol=ttrpc version=3 Jun 21 
05:29:17.795707 systemd-networkd[1454]: cali46e90d9e58d: Link UP Jun 21 05:29:17.807400 systemd-networkd[1454]: cali46e90d9e58d: Gained carrier Jun 21 05:29:17.812022 systemd[1]: Started cri-containerd-cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf.scope - libcontainer container cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf. Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.374 [INFO][4428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0 calico-apiserver-86d6895b5f- calico-apiserver 41978ebe-aee9-4eb1-b675-daa95daac0a7 830 0 2025-06-21 05:28:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86d6895b5f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.0-e-bb84d467cd calico-apiserver-86d6895b5f-nmhc6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali46e90d9e58d [] [] }} ContainerID="106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-nmhc6" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.375 [INFO][4428] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-nmhc6" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.554 [INFO][4484] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" HandleID="k8s-pod-network.106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" Workload="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.555 [INFO][4484] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" HandleID="k8s-pod-network.106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" Workload="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.0-e-bb84d467cd", "pod":"calico-apiserver-86d6895b5f-nmhc6", "timestamp":"2025-06-21 05:29:17.55410574 +0000 UTC"}, Hostname:"ci-4372.0.0-e-bb84d467cd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.555 [INFO][4484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.628 [INFO][4484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.628 [INFO][4484] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-e-bb84d467cd' Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.667 [INFO][4484] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.679 [INFO][4484] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.696 [INFO][4484] ipam/ipam.go 511: Trying affinity for 192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.702 [INFO][4484] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.718 [INFO][4484] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.718 [INFO][4484] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.64/26 handle="k8s-pod-network.106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.724 [INFO][4484] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794 Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.731 [INFO][4484] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.64/26 handle="k8s-pod-network.106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.753 [INFO][4484] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.70/26] block=192.168.46.64/26 handle="k8s-pod-network.106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.753 [INFO][4484] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.70/26] handle="k8s-pod-network.106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.753 [INFO][4484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 05:29:17.833512 containerd[1532]: 2025-06-21 05:29:17.754 [INFO][4484] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.70/26] IPv6=[] ContainerID="106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" HandleID="k8s-pod-network.106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" Workload="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0" Jun 21 05:29:17.835074 containerd[1532]: 2025-06-21 05:29:17.778 [INFO][4428] cni-plugin/k8s.go 418: Populated endpoint ContainerID="106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-nmhc6" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0", GenerateName:"calico-apiserver-86d6895b5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"41978ebe-aee9-4eb1-b675-daa95daac0a7", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86d6895b5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"", Pod:"calico-apiserver-86d6895b5f-nmhc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46e90d9e58d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:17.835074 containerd[1532]: 2025-06-21 05:29:17.779 [INFO][4428] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.70/32] ContainerID="106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-nmhc6" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0" Jun 21 05:29:17.835074 containerd[1532]: 2025-06-21 05:29:17.779 [INFO][4428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46e90d9e58d ContainerID="106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-nmhc6" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0" Jun 21 05:29:17.835074 containerd[1532]: 2025-06-21 05:29:17.808 [INFO][4428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-nmhc6" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0" Jun 21 05:29:17.835074 containerd[1532]: 2025-06-21 05:29:17.810 [INFO][4428] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-nmhc6" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0", GenerateName:"calico-apiserver-86d6895b5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"41978ebe-aee9-4eb1-b675-daa95daac0a7", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86d6895b5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794", Pod:"calico-apiserver-86d6895b5f-nmhc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46e90d9e58d", MAC:"26:3b:06:ae:98:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:17.835074 containerd[1532]: 2025-06-21 05:29:17.830 [INFO][4428] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" Namespace="calico-apiserver" Pod="calico-apiserver-86d6895b5f-nmhc6" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--apiserver--86d6895b5f--nmhc6-eth0" Jun 21 05:29:17.883646 containerd[1532]: time="2025-06-21T05:29:17.883548000Z" level=info msg="connecting to shim 106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794" address="unix:///run/containerd/s/2e0b98b28bab66a03d2fbc5fc0b62b383d94f4181f3ce351b2eef36e41d66455" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:17.942732 systemd-networkd[1454]: calice52b1017af: Link UP Jun 21 05:29:17.948313 systemd-networkd[1454]: calice52b1017af: Gained carrier Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.420 [INFO][4424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0 goldmane-5bd85449d4- calico-system 561d093e-330b-43dd-ad62-1acd896769be 828 0 2025-06-21 05:28:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.0-e-bb84d467cd goldmane-5bd85449d4-tkkdb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calice52b1017af [] [] }} 
ContainerID="0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" Namespace="calico-system" Pod="goldmane-5bd85449d4-tkkdb" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.421 [INFO][4424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" Namespace="calico-system" Pod="goldmane-5bd85449d4-tkkdb" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.580 [INFO][4492] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" HandleID="k8s-pod-network.0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" Workload="ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.580 [INFO][4492] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" HandleID="k8s-pod-network.0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" Workload="ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-e-bb84d467cd", "pod":"goldmane-5bd85449d4-tkkdb", "timestamp":"2025-06-21 05:29:17.580254315 +0000 UTC"}, Hostname:"ci-4372.0.0-e-bb84d467cd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.580 [INFO][4492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.755 [INFO][4492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.756 [INFO][4492] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-e-bb84d467cd' Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.806 [INFO][4492] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.824 [INFO][4492] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.849 [INFO][4492] ipam/ipam.go 511: Trying affinity for 192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.863 [INFO][4492] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.873 [INFO][4492] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.873 [INFO][4492] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.64/26 handle="k8s-pod-network.0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.881 [INFO][4492] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.907 [INFO][4492] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.64/26 handle="k8s-pod-network.0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.923 [INFO][4492] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.71/26] block=192.168.46.64/26 handle="k8s-pod-network.0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.924 [INFO][4492] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.71/26] handle="k8s-pod-network.0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.925 [INFO][4492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 05:29:17.996426 containerd[1532]: 2025-06-21 05:29:17.925 [INFO][4492] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.71/26] IPv6=[] ContainerID="0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" HandleID="k8s-pod-network.0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" Workload="ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0" Jun 21 05:29:17.998185 containerd[1532]: 2025-06-21 05:29:17.932 [INFO][4424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" Namespace="calico-system" Pod="goldmane-5bd85449d4-tkkdb" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"561d093e-330b-43dd-ad62-1acd896769be", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"", Pod:"goldmane-5bd85449d4-tkkdb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calice52b1017af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:17.998185 containerd[1532]: 2025-06-21 05:29:17.932 [INFO][4424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.71/32] ContainerID="0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" Namespace="calico-system" Pod="goldmane-5bd85449d4-tkkdb" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0" Jun 21 05:29:17.998185 containerd[1532]: 2025-06-21 05:29:17.932 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice52b1017af ContainerID="0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" Namespace="calico-system" Pod="goldmane-5bd85449d4-tkkdb" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0" Jun 21 05:29:17.998185 containerd[1532]: 2025-06-21 05:29:17.950 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" Namespace="calico-system" Pod="goldmane-5bd85449d4-tkkdb" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0" Jun 21 05:29:17.998185 containerd[1532]: 2025-06-21 05:29:17.953 [INFO][4424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" 
Namespace="calico-system" Pod="goldmane-5bd85449d4-tkkdb" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"561d093e-330b-43dd-ad62-1acd896769be", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b", Pod:"goldmane-5bd85449d4-tkkdb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calice52b1017af", MAC:"36:7f:8b:d8:20:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:17.998185 containerd[1532]: 2025-06-21 05:29:17.983 [INFO][4424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" Namespace="calico-system" Pod="goldmane-5bd85449d4-tkkdb" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-goldmane--5bd85449d4--tkkdb-eth0" Jun 21 05:29:18.003794 containerd[1532]: time="2025-06-21T05:29:18.003733117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8lkqm,Uid:e103371f-c594-4fc8-ada3-19b59332d837,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf\"" Jun 21 05:29:18.005745 systemd[1]: Started cri-containerd-106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794.scope - libcontainer container 106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794. 
Jun 21 05:29:18.008668 kubelet[2681]: E0621 05:29:18.008295 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:18.015147 containerd[1532]: time="2025-06-21T05:29:18.015102401Z" level=info msg="CreateContainer within sandbox \"cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 05:29:18.054763 containerd[1532]: time="2025-06-21T05:29:18.054714155Z" level=info msg="Container 00482444b5597ddad504a35e9fe238a26745e840f45ed79a2c0ea870a61574eb: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:18.062491 containerd[1532]: time="2025-06-21T05:29:18.062387274Z" level=info msg="connecting to shim 0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b" address="unix:///run/containerd/s/16013bc7ed3b2c9b65fa064840c40e7711525cafa3792def327a40efc91b2226" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:18.063354 containerd[1532]: time="2025-06-21T05:29:18.063242623Z" level=info msg="CreateContainer within sandbox \"cc4a39f57142e63c288e7a6b4ab976a07d2c1cf663c3f097150dfc3c58f6d2bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00482444b5597ddad504a35e9fe238a26745e840f45ed79a2c0ea870a61574eb\"" Jun 21 05:29:18.064658 containerd[1532]: time="2025-06-21T05:29:18.064631459Z" level=info msg="StartContainer for \"00482444b5597ddad504a35e9fe238a26745e840f45ed79a2c0ea870a61574eb\"" Jun 21 05:29:18.066997 containerd[1532]: time="2025-06-21T05:29:18.066769007Z" level=info msg="connecting to shim 00482444b5597ddad504a35e9fe238a26745e840f45ed79a2c0ea870a61574eb" address="unix:///run/containerd/s/1d624d8d335265ab84a6437c78ffeed8ad1aecf7e38b583046963bb0f59e6bba" protocol=ttrpc version=3 Jun 21 05:29:18.129735 systemd[1]: Started cri-containerd-00482444b5597ddad504a35e9fe238a26745e840f45ed79a2c0ea870a61574eb.scope - libcontainer container 00482444b5597ddad504a35e9fe238a26745e840f45ed79a2c0ea870a61574eb. Jun 21 05:29:18.132969 systemd[1]: Started cri-containerd-0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b.scope - libcontainer container 0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b. 
Jun 21 05:29:18.171193 containerd[1532]: time="2025-06-21T05:29:18.171152234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d6895b5f-nmhc6,Uid:41978ebe-aee9-4eb1-b675-daa95daac0a7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794\"" Jun 21 05:29:18.225196 containerd[1532]: time="2025-06-21T05:29:18.224852272Z" level=info msg="StartContainer for \"00482444b5597ddad504a35e9fe238a26745e840f45ed79a2c0ea870a61574eb\" returns successfully" Jun 21 05:29:18.321779 containerd[1532]: time="2025-06-21T05:29:18.321733421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-tkkdb,Uid:561d093e-330b-43dd-ad62-1acd896769be,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b\"" Jun 21 05:29:18.496570 kubelet[2681]: E0621 05:29:18.496432 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:18.525835 kubelet[2681]: I0621 05:29:18.525744 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8lkqm" podStartSLOduration=43.525715835 podStartE2EDuration="43.525715835s" podCreationTimestamp="2025-06-21 05:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:29:18.524031118 +0000 UTC m=+46.582171361" watchObservedRunningTime="2025-06-21 05:29:18.525715835 +0000 UTC m=+46.583856077" Jun 21 05:29:19.120403 containerd[1532]: time="2025-06-21T05:29:19.120360330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845486d6f4-4qppc,Uid:6f880db5-867a-4143-aed9-1a92c814b9e8,Namespace:calico-system,Attempt:0,}" Jun 21 05:29:19.170729 systemd-networkd[1454]: cali46e90d9e58d: Gained IPv6LL Jun 21 05:29:19.362795 systemd-networkd[1454]: calica485d4b023: Gained IPv6LL Jun 21 05:29:19.411959 systemd-networkd[1454]: cali03d712eca8a: Link UP Jun 21 05:29:19.412307 systemd-networkd[1454]: cali03d712eca8a: Gained carrier Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.226 [INFO][4719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0 calico-kube-controllers-845486d6f4- calico-system 6f880db5-867a-4143-aed9-1a92c814b9e8 829 0 2025-06-21 05:28:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:845486d6f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.0-e-bb84d467cd calico-kube-controllers-845486d6f4-4qppc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali03d712eca8a [] [] }} ContainerID="ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" Namespace="calico-system" Pod="calico-kube-controllers-845486d6f4-4qppc" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.226 [INFO][4719] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" Namespace="calico-system" Pod="calico-kube-controllers-845486d6f4-4qppc" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.311 [INFO][4731] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" HandleID="k8s-pod-network.ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" Workload="ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.311 [INFO][4731] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" HandleID="k8s-pod-network.ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" Workload="ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e3e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-e-bb84d467cd", "pod":"calico-kube-controllers-845486d6f4-4qppc", "timestamp":"2025-06-21 05:29:19.310542017 +0000 UTC"}, Hostname:"ci-4372.0.0-e-bb84d467cd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.312 [INFO][4731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.312 [INFO][4731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.312 [INFO][4731] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-e-bb84d467cd' Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.326 [INFO][4731] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.337 [INFO][4731] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.350 [INFO][4731] ipam/ipam.go 511: Trying affinity for 192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.354 [INFO][4731] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.361 [INFO][4731] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.64/26 host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.361 [INFO][4731] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.64/26 handle="k8s-pod-network.ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.367 [INFO][4731] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.376 [INFO][4731] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.64/26 handle="k8s-pod-network.ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.397 [INFO][4731] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.72/26] block=192.168.46.64/26 handle="k8s-pod-network.ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.397 [INFO][4731] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.72/26] handle="k8s-pod-network.ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" host="ci-4372.0.0-e-bb84d467cd" Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.397 [INFO][4731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 05:29:19.445785 containerd[1532]: 2025-06-21 05:29:19.397 [INFO][4731] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.72/26] IPv6=[] ContainerID="ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" HandleID="k8s-pod-network.ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" Workload="ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0" Jun 21 05:29:19.450946 containerd[1532]: 2025-06-21 05:29:19.405 [INFO][4719] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" Namespace="calico-system" Pod="calico-kube-controllers-845486d6f4-4qppc" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0", GenerateName:"calico-kube-controllers-845486d6f4-", Namespace:"calico-system", SelfLink:"", UID:"6f880db5-867a-4143-aed9-1a92c814b9e8", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"845486d6f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"", Pod:"calico-kube-controllers-845486d6f4-4qppc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali03d712eca8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:19.450946 containerd[1532]: 2025-06-21 05:29:19.406 [INFO][4719] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.72/32] ContainerID="ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" Namespace="calico-system" Pod="calico-kube-controllers-845486d6f4-4qppc" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0" Jun 21 05:29:19.450946 containerd[1532]: 2025-06-21 05:29:19.406 [INFO][4719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03d712eca8a ContainerID="ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" Namespace="calico-system" Pod="calico-kube-controllers-845486d6f4-4qppc" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0" Jun 21 05:29:19.450946 containerd[1532]: 2025-06-21 05:29:19.412 [INFO][4719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" Namespace="calico-system" Pod="calico-kube-controllers-845486d6f4-4qppc" 
WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0" Jun 21 05:29:19.450946 containerd[1532]: 2025-06-21 05:29:19.412 [INFO][4719] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" Namespace="calico-system" Pod="calico-kube-controllers-845486d6f4-4qppc" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0", GenerateName:"calico-kube-controllers-845486d6f4-", Namespace:"calico-system", SelfLink:"", UID:"6f880db5-867a-4143-aed9-1a92c814b9e8", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"845486d6f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-e-bb84d467cd", ContainerID:"ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b", Pod:"calico-kube-controllers-845486d6f4-4qppc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali03d712eca8a", MAC:"e6:24:ff:8f:ce:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:29:19.450946 containerd[1532]: 2025-06-21 05:29:19.438 [INFO][4719] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" Namespace="calico-system" Pod="calico-kube-controllers-845486d6f4-4qppc" WorkloadEndpoint="ci--4372.0.0--e--bb84d467cd-k8s-calico--kube--controllers--845486d6f4--4qppc-eth0" Jun 21 05:29:19.504125 kubelet[2681]: E0621 05:29:19.502333 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:19.579972 containerd[1532]: time="2025-06-21T05:29:19.579910058Z" level=info msg="connecting to shim ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b" address="unix:///run/containerd/s/be92088241e9e7639455c31d7b2645d8ade34f05f70be8e1eebe95b7e8046fa5" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:19.649991 systemd[1]: Started cri-containerd-ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b.scope - libcontainer container ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b. 
Jun 21 05:29:19.750892 containerd[1532]: time="2025-06-21T05:29:19.750846016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845486d6f4-4qppc,Uid:6f880db5-867a-4143-aed9-1a92c814b9e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b\"" Jun 21 05:29:19.876640 systemd-networkd[1454]: calice52b1017af: Gained IPv6LL Jun 21 05:29:20.035354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235863485.mount: Deactivated successfully. Jun 21 05:29:20.054899 containerd[1532]: time="2025-06-21T05:29:20.054830630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:20.056793 containerd[1532]: time="2025-06-21T05:29:20.056727303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 21 05:29:20.057605 containerd[1532]: time="2025-06-21T05:29:20.057538373Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:20.065495 containerd[1532]: time="2025-06-21T05:29:20.065377790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:20.067518 containerd[1532]: time="2025-06-21T05:29:20.066470784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 3.020163197s" Jun 21 05:29:20.067518 containerd[1532]: time="2025-06-21T05:29:20.066563166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 21 05:29:20.069621 containerd[1532]: time="2025-06-21T05:29:20.069585591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 05:29:20.075641 containerd[1532]: time="2025-06-21T05:29:20.075357732Z" level=info msg="CreateContainer within sandbox \"4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 21 05:29:20.089792 containerd[1532]: time="2025-06-21T05:29:20.089732746Z" level=info msg="Container bdae4abf0baa665f4f6edebddf2b7a89db4d260eaa8f1b71f8b20932c5b701ce: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:20.106819 containerd[1532]: time="2025-06-21T05:29:20.106765845Z" level=info msg="CreateContainer within sandbox \"4f4a01c763dcd902cd91b7b020a989353ad2453fea3dfc238bf488a904b2d878\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"bdae4abf0baa665f4f6edebddf2b7a89db4d260eaa8f1b71f8b20932c5b701ce\"" Jun 21 05:29:20.107797 containerd[1532]: time="2025-06-21T05:29:20.107766374Z" level=info msg="StartContainer for \"bdae4abf0baa665f4f6edebddf2b7a89db4d260eaa8f1b71f8b20932c5b701ce\"" Jun 21 05:29:20.113080 containerd[1532]: time="2025-06-21T05:29:20.112980990Z" level=info msg="connecting to shim 
bdae4abf0baa665f4f6edebddf2b7a89db4d260eaa8f1b71f8b20932c5b701ce" address="unix:///run/containerd/s/433dca2ea8eb419f11e997e1703dcfa291f762c63d439a43176f55a8a1a1a60b" protocol=ttrpc version=3 Jun 21 05:29:20.159832 systemd[1]: Started cri-containerd-bdae4abf0baa665f4f6edebddf2b7a89db4d260eaa8f1b71f8b20932c5b701ce.scope - libcontainer container bdae4abf0baa665f4f6edebddf2b7a89db4d260eaa8f1b71f8b20932c5b701ce. Jun 21 05:29:20.259955 containerd[1532]: time="2025-06-21T05:29:20.259780821Z" level=info msg="StartContainer for \"bdae4abf0baa665f4f6edebddf2b7a89db4d260eaa8f1b71f8b20932c5b701ce\" returns successfully" Jun 21 05:29:20.520827 kubelet[2681]: E0621 05:29:20.520704 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:20.538615 kubelet[2681]: I0621 05:29:20.538099 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5d77cd896-njjgs" podStartSLOduration=2.212944191 podStartE2EDuration="8.538072704s" podCreationTimestamp="2025-06-21 05:29:12 +0000 UTC" firstStartedPulling="2025-06-21 05:29:13.744223022 +0000 UTC m=+41.802363245" lastFinishedPulling="2025-06-21 05:29:20.069351525 +0000 UTC m=+48.127491758" observedRunningTime="2025-06-21 05:29:20.537628802 +0000 UTC m=+48.595769043" watchObservedRunningTime="2025-06-21 05:29:20.538072704 +0000 UTC m=+48.596212946" Jun 21 05:29:20.578775 systemd-networkd[1454]: cali03d712eca8a: Gained IPv6LL Jun 21 05:29:23.804882 containerd[1532]: time="2025-06-21T05:29:23.804751518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:23.806964 containerd[1532]: time="2025-06-21T05:29:23.806897810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 21 05:29:23.808498 containerd[1532]: time="2025-06-21T05:29:23.807857014Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:23.811243 containerd[1532]: time="2025-06-21T05:29:23.811149698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:23.812761 containerd[1532]: time="2025-06-21T05:29:23.812221951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 3.742459703s" Jun 21 05:29:23.812761 containerd[1532]: time="2025-06-21T05:29:23.812270873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 21 05:29:23.815718 containerd[1532]: time="2025-06-21T05:29:23.815679060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 21 05:29:23.822521 containerd[1532]: time="2025-06-21T05:29:23.821961445Z" level=info msg="CreateContainer within sandbox 
\"0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 05:29:23.831961 containerd[1532]: time="2025-06-21T05:29:23.831885360Z" level=info msg="Container ba37a5b731dc695e11612d94dca76c081e65c6da31ac6af285bc651e491a4da7: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:23.849122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645414955.mount: Deactivated successfully. Jun 21 05:29:23.856958 containerd[1532]: time="2025-06-21T05:29:23.856875910Z" level=info msg="CreateContainer within sandbox \"0a15aefbf15bfc8dc01797e65ca216091f585e295b0b5ac5c98164ae4a699b4f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ba37a5b731dc695e11612d94dca76c081e65c6da31ac6af285bc651e491a4da7\"" Jun 21 05:29:23.860803 containerd[1532]: time="2025-06-21T05:29:23.860738872Z" level=info msg="StartContainer for \"ba37a5b731dc695e11612d94dca76c081e65c6da31ac6af285bc651e491a4da7\"" Jun 21 05:29:23.864223 containerd[1532]: time="2025-06-21T05:29:23.864112675Z" level=info msg="connecting to shim ba37a5b731dc695e11612d94dca76c081e65c6da31ac6af285bc651e491a4da7" address="unix:///run/containerd/s/9244cd47771a9fe2d6c72615aaed140c6d39a080fe283914ff9f424bc8eb1709" protocol=ttrpc version=3 Jun 21 05:29:23.916836 systemd[1]: Started cri-containerd-ba37a5b731dc695e11612d94dca76c081e65c6da31ac6af285bc651e491a4da7.scope - libcontainer container ba37a5b731dc695e11612d94dca76c081e65c6da31ac6af285bc651e491a4da7. Jun 21 05:29:23.997449 containerd[1532]: time="2025-06-21T05:29:23.997297742Z" level=info msg="StartContainer for \"ba37a5b731dc695e11612d94dca76c081e65c6da31ac6af285bc651e491a4da7\" returns successfully" Jun 21 05:29:25.613736 kubelet[2681]: I0621 05:29:25.613660 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:29:25.895115 systemd[1]: Started sshd@7-143.198.235.111:22-139.178.68.195:50762.service - OpenSSH per-connection server daemon (139.178.68.195:50762). Jun 21 05:29:26.130857 sshd[4899]: Accepted publickey for core from 139.178.68.195 port 50762 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:26.134379 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:26.153709 systemd-logind[1491]: New session 8 of user core. Jun 21 05:29:26.160833 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 21 05:29:26.744191 containerd[1532]: time="2025-06-21T05:29:26.744143633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:26.747332 containerd[1532]: time="2025-06-21T05:29:26.747285376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633" Jun 21 05:29:26.748480 containerd[1532]: time="2025-06-21T05:29:26.748149941Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:26.757319 containerd[1532]: time="2025-06-21T05:29:26.757248508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:26.760291 containerd[1532]: time="2025-06-21T05:29:26.760078269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 2.944022039s" Jun 21 05:29:26.760291 containerd[1532]: time="2025-06-21T05:29:26.760127796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\"" Jun 21 05:29:26.764546 containerd[1532]: time="2025-06-21T05:29:26.764376915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 05:29:26.773675 containerd[1532]: time="2025-06-21T05:29:26.771955426Z" level=info msg="CreateContainer within sandbox \"91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 21 05:29:26.805098 containerd[1532]: time="2025-06-21T05:29:26.804806598Z" level=info msg="Container f339ff6317a7693a2b23755d039f09fc69050e8ff2e8a062618e674dfcaa5a9a: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:26.824588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2947903449.mount: Deactivated successfully. 
Jun 21 05:29:26.836238 containerd[1532]: time="2025-06-21T05:29:26.836198366Z" level=info msg="CreateContainer within sandbox \"91ecf52030d37e8a7c6f80671cdc2b10a071a7362e19537c216b7b19eb3b54c8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f339ff6317a7693a2b23755d039f09fc69050e8ff2e8a062618e674dfcaa5a9a\"" Jun 21 05:29:26.837314 containerd[1532]: time="2025-06-21T05:29:26.837260760Z" level=info msg="StartContainer for \"f339ff6317a7693a2b23755d039f09fc69050e8ff2e8a062618e674dfcaa5a9a\"" Jun 21 05:29:26.839144 containerd[1532]: time="2025-06-21T05:29:26.839105673Z" level=info msg="connecting to shim f339ff6317a7693a2b23755d039f09fc69050e8ff2e8a062618e674dfcaa5a9a" address="unix:///run/containerd/s/ebaec65352a48b15ba9d42a151a2d413d6132dccbf142437e8b4fc91c8f9c542" protocol=ttrpc version=3 Jun 21 05:29:26.947026 systemd[1]: Started cri-containerd-f339ff6317a7693a2b23755d039f09fc69050e8ff2e8a062618e674dfcaa5a9a.scope - libcontainer container f339ff6317a7693a2b23755d039f09fc69050e8ff2e8a062618e674dfcaa5a9a. Jun 21 05:29:27.075309 containerd[1532]: time="2025-06-21T05:29:27.075256192Z" level=info msg="StartContainer for \"f339ff6317a7693a2b23755d039f09fc69050e8ff2e8a062618e674dfcaa5a9a\" returns successfully" Jun 21 05:29:27.133870 sshd[4901]: Connection closed by 139.178.68.195 port 50762 Jun 21 05:29:27.133882 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:27.142128 containerd[1532]: time="2025-06-21T05:29:27.141938755Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:27.144690 containerd[1532]: time="2025-06-21T05:29:27.144644951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 21 05:29:27.150084 systemd[1]: sshd@7-143.198.235.111:22-139.178.68.195:50762.service: Deactivated successfully. Jun 21 05:29:27.154896 systemd[1]: session-8.scope: Deactivated successfully. Jun 21 05:29:27.155576 containerd[1532]: time="2025-06-21T05:29:27.155046880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 388.549857ms" Jun 21 05:29:27.155576 containerd[1532]: time="2025-06-21T05:29:27.155094071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 21 05:29:27.158889 systemd-logind[1491]: Session 8 logged out. Waiting for processes to exit. Jun 21 05:29:27.161287 containerd[1532]: time="2025-06-21T05:29:27.161258190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 21 05:29:27.162047 systemd-logind[1491]: Removed session 8. 
Jun 21 05:29:27.164953 containerd[1532]: time="2025-06-21T05:29:27.164911667Z" level=info msg="CreateContainer within sandbox \"106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 05:29:27.175915 containerd[1532]: time="2025-06-21T05:29:27.175853709Z" level=info msg="Container ee88e265b366f5ad9a1605950594884451fa93c4a1d32db948585c2b4698249c: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:27.194125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853352193.mount: Deactivated successfully. Jun 21 05:29:27.197156 containerd[1532]: time="2025-06-21T05:29:27.197098703Z" level=info msg="CreateContainer within sandbox \"106ac9bd3794108ecb771acd5a41a4865beccf19bd2f3375c3a6786586072794\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ee88e265b366f5ad9a1605950594884451fa93c4a1d32db948585c2b4698249c\"" Jun 21 05:29:27.198073 containerd[1532]: time="2025-06-21T05:29:27.198026720Z" level=info msg="StartContainer for \"ee88e265b366f5ad9a1605950594884451fa93c4a1d32db948585c2b4698249c\"" Jun 21 05:29:27.203444 containerd[1532]: time="2025-06-21T05:29:27.203372809Z" level=info msg="connecting to shim ee88e265b366f5ad9a1605950594884451fa93c4a1d32db948585c2b4698249c" address="unix:///run/containerd/s/2e0b98b28bab66a03d2fbc5fc0b62b383d94f4181f3ce351b2eef36e41d66455" protocol=ttrpc version=3 Jun 21 05:29:27.236249 systemd[1]: Started cri-containerd-ee88e265b366f5ad9a1605950594884451fa93c4a1d32db948585c2b4698249c.scope - libcontainer container ee88e265b366f5ad9a1605950594884451fa93c4a1d32db948585c2b4698249c. Jun 21 05:29:27.341420 containerd[1532]: time="2025-06-21T05:29:27.341286663Z" level=info msg="StartContainer for \"ee88e265b366f5ad9a1605950594884451fa93c4a1d32db948585c2b4698249c\" returns successfully" Jun 21 05:29:27.510857 kubelet[2681]: I0621 05:29:27.510697 2681 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 21 05:29:27.515524 kubelet[2681]: I0621 05:29:27.515370 2681 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 21 05:29:27.681119 kubelet[2681]: I0621 05:29:27.680633 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-d42wl" podStartSLOduration=24.638844328 podStartE2EDuration="36.680609463s" podCreationTimestamp="2025-06-21 05:28:51 +0000 UTC" firstStartedPulling="2025-06-21 05:29:14.722307854 +0000 UTC m=+42.780448078" lastFinishedPulling="2025-06-21 05:29:26.764072974 +0000 UTC m=+54.822213213" observedRunningTime="2025-06-21 05:29:27.677247043 +0000 UTC m=+55.735387293" watchObservedRunningTime="2025-06-21 05:29:27.680609463 +0000 UTC m=+55.738749703" Jun 21 05:29:27.681119 kubelet[2681]: I0621 05:29:27.680789 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86d6895b5f-77tzw" podStartSLOduration=33.332207738 podStartE2EDuration="40.680784684s" podCreationTimestamp="2025-06-21 05:28:47 +0000 UTC" firstStartedPulling="2025-06-21 05:29:16.46572871 +0000 UTC m=+44.523868948" lastFinishedPulling="2025-06-21 05:29:23.814305659 +0000 UTC m=+51.872445894" observedRunningTime="2025-06-21 05:29:24.616926016 +0000 UTC m=+52.675066259" watchObservedRunningTime="2025-06-21 05:29:27.680784684 +0000 UTC m=+55.738924920" Jun 
21 05:29:28.643078 kubelet[2681]: I0621 05:29:28.643010 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:29:30.054683 kubelet[2681]: I0621 05:29:30.054303 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:29:30.146977 kubelet[2681]: I0621 05:29:30.146863 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86d6895b5f-nmhc6" podStartSLOduration=34.166523839 podStartE2EDuration="43.146834535s" podCreationTimestamp="2025-06-21 05:28:47 +0000 UTC" firstStartedPulling="2025-06-21 05:29:18.176880901 +0000 UTC m=+46.235021135" lastFinishedPulling="2025-06-21 05:29:27.157191597 +0000 UTC m=+55.215331831" observedRunningTime="2025-06-21 05:29:27.69997731 +0000 UTC m=+55.758117546" watchObservedRunningTime="2025-06-21 05:29:30.146834535 +0000 UTC m=+58.204974782" Jun 21 05:29:31.199530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140037419.mount: Deactivated successfully. Jun 21 05:29:31.989116 containerd[1532]: time="2025-06-21T05:29:31.989056991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:31.995155 containerd[1532]: time="2025-06-21T05:29:31.995066214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 21 05:29:32.006721 containerd[1532]: time="2025-06-21T05:29:32.006615110Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:32.010030 containerd[1532]: time="2025-06-21T05:29:32.009494062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:32.010684 containerd[1532]: time="2025-06-21T05:29:32.010553815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 4.849126508s" Jun 21 05:29:32.010684 containerd[1532]: time="2025-06-21T05:29:32.010587001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 21 05:29:32.015158 containerd[1532]: time="2025-06-21T05:29:32.014313144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 21 05:29:32.015518 containerd[1532]: time="2025-06-21T05:29:32.015198586Z" level=info msg="CreateContainer within sandbox \"0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 21 05:29:32.065655 containerd[1532]: time="2025-06-21T05:29:32.063651266Z" level=info msg="Container 2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:32.162931 systemd[1]: Started sshd@8-143.198.235.111:22-139.178.68.195:50768.service - OpenSSH per-connection server daemon (139.178.68.195:50768). 
Jun 21 05:29:32.262129 containerd[1532]: time="2025-06-21T05:29:32.261919778Z" level=info msg="CreateContainer within sandbox \"0ab582642b8d58530da07922ae95c8e854030c43ec011fd3b29770aa35b0f53b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f\"" Jun 21 05:29:32.322496 containerd[1532]: time="2025-06-21T05:29:32.322314881Z" level=info msg="StartContainer for \"2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f\"" Jun 21 05:29:32.325690 containerd[1532]: time="2025-06-21T05:29:32.324117098Z" level=info msg="connecting to shim 2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f" address="unix:///run/containerd/s/16013bc7ed3b2c9b65fa064840c40e7711525cafa3792def327a40efc91b2226" protocol=ttrpc version=3 Jun 21 05:29:32.464969 sshd[5007]: Accepted publickey for core from 139.178.68.195 port 50768 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:32.470859 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:32.492556 systemd-logind[1491]: New session 9 of user core. Jun 21 05:29:32.498944 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 21 05:29:32.554760 systemd[1]: Started cri-containerd-2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f.scope - libcontainer container 2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f. Jun 21 05:29:32.723331 containerd[1532]: time="2025-06-21T05:29:32.723226766Z" level=info msg="StartContainer for \"2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f\" returns successfully" Jun 21 05:29:33.209319 sshd[5020]: Connection closed by 139.178.68.195 port 50768 Jun 21 05:29:33.210564 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:33.218580 systemd-logind[1491]: Session 9 logged out. Waiting for processes to exit. Jun 21 05:29:33.219009 systemd[1]: sshd@8-143.198.235.111:22-139.178.68.195:50768.service: Deactivated successfully. Jun 21 05:29:33.224332 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 05:29:33.231661 systemd-logind[1491]: Removed session 9. 
Jun 21 05:29:33.739891 kubelet[2681]: I0621 05:29:33.739398 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-tkkdb" podStartSLOduration=30.055712118 podStartE2EDuration="43.739373539s" podCreationTimestamp="2025-06-21 05:28:50 +0000 UTC" firstStartedPulling="2025-06-21 05:29:18.329273545 +0000 UTC m=+46.387413779" lastFinishedPulling="2025-06-21 05:29:32.012934976 +0000 UTC m=+60.071075200" observedRunningTime="2025-06-21 05:29:33.732896937 +0000 UTC m=+61.791037197" watchObservedRunningTime="2025-06-21 05:29:33.739373539 +0000 UTC m=+61.797513782" Jun 21 05:29:33.996085 containerd[1532]: time="2025-06-21T05:29:33.995237141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f\" id:\"be3fa3883f0db0942f370424e632994bf58311a5ccd7b2c564b2aaf62fb0ec53\" pid:5076 exit_status:1 exited_at:{seconds:1750483773 nanos:929524632}" Jun 21 05:29:34.916150 containerd[1532]: time="2025-06-21T05:29:34.916024756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f\" id:\"854a4def276e7e7b0dd549c4e000b01ca4212ac9ddc6aeaa9c1c49216a07b241\" pid:5098 exit_status:1 exited_at:{seconds:1750483774 nanos:915611061}" Jun 21 05:29:35.963695 containerd[1532]: time="2025-06-21T05:29:35.963644223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f\" id:\"841468aa979acde74730d70246f793cc92901bd2eceb324b35793be94c19190c\" pid:5128 exit_status:1 exited_at:{seconds:1750483775 nanos:961093971}" Jun 21 05:29:36.899836 kubelet[2681]: I0621 05:29:36.899792 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:29:37.700149 containerd[1532]: time="2025-06-21T05:29:37.699916389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:37.701477 containerd[1532]: time="2025-06-21T05:29:37.701414765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 21 05:29:37.701983 containerd[1532]: time="2025-06-21T05:29:37.701944974Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:37.704895 containerd[1532]: time="2025-06-21T05:29:37.704808371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:37.705770 containerd[1532]: time="2025-06-21T05:29:37.705563928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 5.69015772s" Jun 21 05:29:37.705770 containerd[1532]: time="2025-06-21T05:29:37.705603572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 21 
05:29:37.795311 containerd[1532]: time="2025-06-21T05:29:37.795273527Z" level=info msg="CreateContainer within sandbox \"ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 21 05:29:37.822360 containerd[1532]: time="2025-06-21T05:29:37.822283947Z" level=info msg="Container f5ef4012b6fe4df5a70d721d3a787d13950cad4f0b4396e462f6ec34a3f50d33: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:37.837119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470549806.mount: Deactivated successfully. Jun 21 05:29:37.842585 containerd[1532]: time="2025-06-21T05:29:37.842525745Z" level=info msg="CreateContainer within sandbox \"ea5a950b1e772393aca4d6f32027dc59647a8af0dc1ea6ed578e34231d60a72b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f5ef4012b6fe4df5a70d721d3a787d13950cad4f0b4396e462f6ec34a3f50d33\"" Jun 21 05:29:37.845218 containerd[1532]: time="2025-06-21T05:29:37.845158291Z" level=info msg="StartContainer for \"f5ef4012b6fe4df5a70d721d3a787d13950cad4f0b4396e462f6ec34a3f50d33\"" Jun 21 05:29:37.867555 containerd[1532]: time="2025-06-21T05:29:37.867429112Z" level=info msg="connecting to shim f5ef4012b6fe4df5a70d721d3a787d13950cad4f0b4396e462f6ec34a3f50d33" address="unix:///run/containerd/s/be92088241e9e7639455c31d7b2645d8ade34f05f70be8e1eebe95b7e8046fa5" protocol=ttrpc version=3 Jun 21 05:29:37.916857 systemd[1]: Started cri-containerd-f5ef4012b6fe4df5a70d721d3a787d13950cad4f0b4396e462f6ec34a3f50d33.scope - libcontainer container f5ef4012b6fe4df5a70d721d3a787d13950cad4f0b4396e462f6ec34a3f50d33. Jun 21 05:29:38.019938 containerd[1532]: time="2025-06-21T05:29:38.019706853Z" level=info msg="StartContainer for \"f5ef4012b6fe4df5a70d721d3a787d13950cad4f0b4396e462f6ec34a3f50d33\" returns successfully" Jun 21 05:29:38.230763 systemd[1]: Started sshd@9-143.198.235.111:22-139.178.68.195:59024.service - OpenSSH per-connection server daemon (139.178.68.195:59024). Jun 21 05:29:38.395489 sshd[5191]: Accepted publickey for core from 139.178.68.195 port 59024 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:38.398614 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:38.407500 systemd-logind[1491]: New session 10 of user core. Jun 21 05:29:38.415786 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 21 05:29:39.040714 containerd[1532]: time="2025-06-21T05:29:39.040603351Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5ef4012b6fe4df5a70d721d3a787d13950cad4f0b4396e462f6ec34a3f50d33\" id:\"0ec3d88944fe34cbfb120782ef8633c1e63c483b5927788c9a5f70693072931b\" pid:5213 exited_at:{seconds:1750483779 nanos:39131090}" Jun 21 05:29:39.100767 kubelet[2681]: I0621 05:29:39.078444 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-845486d6f4-4qppc" podStartSLOduration=30.16981818 podStartE2EDuration="48.078414801s" podCreationTimestamp="2025-06-21 05:28:51 +0000 UTC" firstStartedPulling="2025-06-21 05:29:19.799399645 +0000 UTC m=+47.857539883" lastFinishedPulling="2025-06-21 05:29:37.707996269 +0000 UTC m=+65.766136504" observedRunningTime="2025-06-21 05:29:39.012918908 +0000 UTC m=+67.071059149" watchObservedRunningTime="2025-06-21 05:29:39.078414801 +0000 UTC m=+67.136555045" Jun 21 05:29:39.181980 sshd[5193]: Connection closed by 139.178.68.195 port 59024 Jun 21 05:29:39.181313 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:39.195149 systemd[1]: sshd@9-143.198.235.111:22-139.178.68.195:59024.service: Deactivated successfully. Jun 21 05:29:39.197782 systemd[1]: session-10.scope: Deactivated successfully. Jun 21 05:29:39.200148 systemd-logind[1491]: Session 10 logged out. Waiting for processes to exit. Jun 21 05:29:39.204180 systemd[1]: Started sshd@10-143.198.235.111:22-139.178.68.195:59026.service - OpenSSH per-connection server daemon (139.178.68.195:59026). Jun 21 05:29:39.205139 systemd-logind[1491]: Removed session 10. Jun 21 05:29:39.270352 sshd[5228]: Accepted publickey for core from 139.178.68.195 port 59026 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:39.273166 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:39.280427 systemd-logind[1491]: New session 11 of user core. Jun 21 05:29:39.293642 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 21 05:29:39.523086 sshd[5230]: Connection closed by 139.178.68.195 port 59026 Jun 21 05:29:39.524161 sshd-session[5228]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:39.540389 systemd[1]: sshd@10-143.198.235.111:22-139.178.68.195:59026.service: Deactivated successfully. Jun 21 05:29:39.545183 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 05:29:39.548148 systemd-logind[1491]: Session 11 logged out. Waiting for processes to exit. Jun 21 05:29:39.553829 systemd[1]: Started sshd@11-143.198.235.111:22-139.178.68.195:59036.service - OpenSSH per-connection server daemon (139.178.68.195:59036). Jun 21 05:29:39.556967 systemd-logind[1491]: Removed session 11. Jun 21 05:29:39.647464 sshd[5240]: Accepted publickey for core from 139.178.68.195 port 59036 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:39.649505 sshd-session[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:39.656398 systemd-logind[1491]: New session 12 of user core. Jun 21 05:29:39.661783 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 21 05:29:39.814471 sshd[5244]: Connection closed by 139.178.68.195 port 59036 Jun 21 05:29:39.815246 sshd-session[5240]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:39.820095 systemd[1]: sshd@11-143.198.235.111:22-139.178.68.195:59036.service: Deactivated successfully. 
Jun 21 05:29:39.822909 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 05:29:39.824813 systemd-logind[1491]: Session 12 logged out. Waiting for processes to exit. Jun 21 05:29:39.827060 systemd-logind[1491]: Removed session 12. Jun 21 05:29:43.764127 containerd[1532]: time="2025-06-21T05:29:43.763995488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66\" id:\"c4e491d252e6d9d31404252a50acb5a042f07e1af6a43d3d4c716658476cfb08\" pid:5272 exit_status:1 exited_at:{seconds:1750483783 nanos:763309212}" Jun 21 05:29:44.833548 systemd[1]: Started sshd@12-143.198.235.111:22-139.178.68.195:49046.service - OpenSSH per-connection server daemon (139.178.68.195:49046). Jun 21 05:29:44.980012 sshd[5286]: Accepted publickey for core from 139.178.68.195 port 49046 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:44.983735 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:44.993145 systemd-logind[1491]: New session 13 of user core. Jun 21 05:29:45.000768 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 21 05:29:45.153771 kubelet[2681]: E0621 05:29:45.153130 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:45.593239 sshd[5288]: Connection closed by 139.178.68.195 port 49046 Jun 21 05:29:45.593764 sshd-session[5286]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:45.601402 systemd[1]: sshd@12-143.198.235.111:22-139.178.68.195:49046.service: Deactivated successfully. Jun 21 05:29:45.604874 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 05:29:45.607734 systemd-logind[1491]: Session 13 logged out. Waiting for processes to exit. Jun 21 05:29:45.611856 systemd-logind[1491]: Removed session 13. Jun 21 05:29:50.613285 systemd[1]: Started sshd@13-143.198.235.111:22-139.178.68.195:49056.service - OpenSSH per-connection server daemon (139.178.68.195:49056). Jun 21 05:29:50.745882 sshd[5302]: Accepted publickey for core from 139.178.68.195 port 49056 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:50.748977 sshd-session[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:50.756446 systemd-logind[1491]: New session 14 of user core. Jun 21 05:29:50.761727 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 21 05:29:51.112293 sshd[5304]: Connection closed by 139.178.68.195 port 49056 Jun 21 05:29:51.112152 sshd-session[5302]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:51.118211 systemd[1]: sshd@13-143.198.235.111:22-139.178.68.195:49056.service: Deactivated successfully. Jun 21 05:29:51.122244 systemd[1]: session-14.scope: Deactivated successfully. Jun 21 05:29:51.125517 systemd-logind[1491]: Session 14 logged out. Waiting for processes to exit. Jun 21 05:29:51.131036 systemd-logind[1491]: Removed session 14. Jun 21 05:29:56.131598 systemd[1]: Started sshd@14-143.198.235.111:22-139.178.68.195:59078.service - OpenSSH per-connection server daemon (139.178.68.195:59078). 
Jun 21 05:29:56.315501 sshd[5322]: Accepted publickey for core from 139.178.68.195 port 59078 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:56.317213 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:56.325755 systemd-logind[1491]: New session 15 of user core. Jun 21 05:29:56.334787 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 21 05:29:56.958681 sshd[5324]: Connection closed by 139.178.68.195 port 59078 Jun 21 05:29:56.959477 sshd-session[5322]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:56.967364 systemd[1]: sshd@14-143.198.235.111:22-139.178.68.195:59078.service: Deactivated successfully. Jun 21 05:29:56.971383 systemd[1]: session-15.scope: Deactivated successfully. Jun 21 05:29:56.973039 systemd-logind[1491]: Session 15 logged out. Waiting for processes to exit. Jun 21 05:29:56.975997 systemd-logind[1491]: Removed session 15. Jun 21 05:29:58.122005 kubelet[2681]: E0621 05:29:58.121843 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:02.021354 systemd[1]: Started sshd@15-143.198.235.111:22-139.178.68.195:59094.service - OpenSSH per-connection server daemon (139.178.68.195:59094). Jun 21 05:30:02.127529 kubelet[2681]: E0621 05:30:02.127471 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:02.154998 sshd[5338]: Accepted publickey for core from 139.178.68.195 port 59094 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:02.158212 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:02.178368 systemd-logind[1491]: New session 16 of user core. Jun 21 05:30:02.188028 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 21 05:30:02.551128 sshd[5340]: Connection closed by 139.178.68.195 port 59094 Jun 21 05:30:02.554183 sshd-session[5338]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:02.572492 systemd[1]: sshd@15-143.198.235.111:22-139.178.68.195:59094.service: Deactivated successfully. Jun 21 05:30:02.578001 systemd[1]: session-16.scope: Deactivated successfully. Jun 21 05:30:02.581028 systemd-logind[1491]: Session 16 logged out. Waiting for processes to exit. Jun 21 05:30:02.587887 systemd[1]: Started sshd@16-143.198.235.111:22-139.178.68.195:59098.service - OpenSSH per-connection server daemon (139.178.68.195:59098). Jun 21 05:30:02.593771 systemd-logind[1491]: Removed session 16. Jun 21 05:30:02.717855 sshd[5352]: Accepted publickey for core from 139.178.68.195 port 59098 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:02.722242 sshd-session[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:02.739675 systemd-logind[1491]: New session 17 of user core. Jun 21 05:30:02.755333 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 21 05:30:03.617302 sshd[5354]: Connection closed by 139.178.68.195 port 59098 Jun 21 05:30:03.620011 sshd-session[5352]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:03.638146 systemd[1]: sshd@16-143.198.235.111:22-139.178.68.195:59098.service: Deactivated successfully. 
Jun 21 05:30:03.649286 systemd[1]: session-17.scope: Deactivated successfully.
Jun 21 05:30:03.668833 systemd-logind[1491]: Session 17 logged out. Waiting for processes to exit.
Jun 21 05:30:03.686303 systemd[1]: Started sshd@17-143.198.235.111:22-139.178.68.195:54442.service - OpenSSH per-connection server daemon (139.178.68.195:54442).
Jun 21 05:30:03.688388 systemd-logind[1491]: Removed session 17.
Jun 21 05:30:03.866176 sshd[5364]: Accepted publickey for core from 139.178.68.195 port 54442 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o
Jun 21 05:30:03.879463 sshd-session[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:30:03.896175 systemd-logind[1491]: New session 18 of user core.
Jun 21 05:30:03.907859 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 21 05:30:05.563845 sshd[5366]: Connection closed by 139.178.68.195 port 54442
Jun 21 05:30:05.565316 sshd-session[5364]: pam_unix(sshd:session): session closed for user core
Jun 21 05:30:05.590410 systemd[1]: sshd@17-143.198.235.111:22-139.178.68.195:54442.service: Deactivated successfully.
Jun 21 05:30:05.597359 systemd[1]: session-18.scope: Deactivated successfully.
Jun 21 05:30:05.601534 systemd-logind[1491]: Session 18 logged out. Waiting for processes to exit.
Jun 21 05:30:05.617031 systemd[1]: Started sshd@18-143.198.235.111:22-139.178.68.195:54446.service - OpenSSH per-connection server daemon (139.178.68.195:54446).
Jun 21 05:30:05.619436 systemd-logind[1491]: Removed session 18.
Jun 21 05:30:05.730967 sshd[5382]: Accepted publickey for core from 139.178.68.195 port 54446 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o
Jun 21 05:30:05.734905 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:30:05.747547 systemd-logind[1491]: New session 19 of user core.
Jun 21 05:30:05.755874 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 21 05:30:06.488212 containerd[1532]: time="2025-06-21T05:30:06.488096173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f\" id:\"7e39c9d211ec738f27eb25f18fc5961c45c806cabbdd17186e1dcdf25c172dba\" pid:5405 exited_at:{seconds:1750483806 nanos:301001168}"
Jun 21 05:30:06.964513 sshd[5386]: Connection closed by 139.178.68.195 port 54446
Jun 21 05:30:06.967099 sshd-session[5382]: pam_unix(sshd:session): session closed for user core
Jun 21 05:30:06.986601 systemd[1]: sshd@18-143.198.235.111:22-139.178.68.195:54446.service: Deactivated successfully.
Jun 21 05:30:06.994468 systemd[1]: session-19.scope: Deactivated successfully.
Jun 21 05:30:06.999800 systemd-logind[1491]: Session 19 logged out. Waiting for processes to exit.
Jun 21 05:30:07.008911 systemd[1]: Started sshd@19-143.198.235.111:22-139.178.68.195:54450.service - OpenSSH per-connection server daemon (139.178.68.195:54450).
Jun 21 05:30:07.012982 systemd-logind[1491]: Removed session 19.
Jun 21 05:30:07.170200 sshd[5422]: Accepted publickey for core from 139.178.68.195 port 54450 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o
Jun 21 05:30:07.173207 sshd-session[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:30:07.181258 systemd-logind[1491]: New session 20 of user core.
Jun 21 05:30:07.189181 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 21 05:30:07.439127 sshd[5424]: Connection closed by 139.178.68.195 port 54450
Jun 21 05:30:07.441809 sshd-session[5422]: pam_unix(sshd:session): session closed for user core
Jun 21 05:30:07.449551 systemd-logind[1491]: Session 20 logged out. Waiting for processes to exit.
Jun 21 05:30:07.449908 systemd[1]: sshd@19-143.198.235.111:22-139.178.68.195:54450.service: Deactivated successfully.
Jun 21 05:30:07.454063 systemd[1]: session-20.scope: Deactivated successfully.
Jun 21 05:30:07.458520 systemd-logind[1491]: Removed session 20.
Jun 21 05:30:09.016119 containerd[1532]: time="2025-06-21T05:30:09.016047107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5ef4012b6fe4df5a70d721d3a787d13950cad4f0b4396e462f6ec34a3f50d33\" id:\"74f2036843beb77a3c940d019d1b8c92e862d14b855a8f6d3b693fb690085ded\" pid:5449 exited_at:{seconds:1750483809 nanos:15043964}"
Jun 21 05:30:09.125182 kubelet[2681]: E0621 05:30:09.122306 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 21 05:30:12.455941 systemd[1]: Started sshd@20-143.198.235.111:22-139.178.68.195:54462.service - OpenSSH per-connection server daemon (139.178.68.195:54462).
Jun 21 05:30:12.568059 sshd[5462]: Accepted publickey for core from 139.178.68.195 port 54462 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o
Jun 21 05:30:12.570712 sshd-session[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:30:12.578779 systemd-logind[1491]: New session 21 of user core.
Jun 21 05:30:12.585818 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 21 05:30:13.014625 sshd[5464]: Connection closed by 139.178.68.195 port 54462
Jun 21 05:30:13.015485 sshd-session[5462]: pam_unix(sshd:session): session closed for user core
Jun 21 05:30:13.031504 systemd[1]: sshd@20-143.198.235.111:22-139.178.68.195:54462.service: Deactivated successfully.
Jun 21 05:30:13.035550 systemd[1]: session-21.scope: Deactivated successfully.
Jun 21 05:30:13.037533 systemd-logind[1491]: Session 21 logged out. Waiting for processes to exit.
Jun 21 05:30:13.040232 systemd-logind[1491]: Removed session 21.
Jun 21 05:30:13.647885 containerd[1532]: time="2025-06-21T05:30:13.646534873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6eb43f4ff0d3df7ea99fefd6922a0e1ba922c7b47f1270d865e67abce564d66\" id:\"9335720e3186eb0302928baab21833e513517847e7d2074cc86ddb7e5bf779c5\" pid:5487 exited_at:{seconds:1750483813 nanos:646063898}"
Jun 21 05:30:16.119533 kubelet[2681]: E0621 05:30:16.119053 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 21 05:30:18.032660 systemd[1]: Started sshd@21-143.198.235.111:22-139.178.68.195:41864.service - OpenSSH per-connection server daemon (139.178.68.195:41864).
Jun 21 05:30:18.180092 sshd[5502]: Accepted publickey for core from 139.178.68.195 port 41864 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o
Jun 21 05:30:18.182923 sshd-session[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:30:18.189532 systemd-logind[1491]: New session 22 of user core.
Jun 21 05:30:18.198774 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 21 05:30:18.912869 sshd[5504]: Connection closed by 139.178.68.195 port 41864
Jun 21 05:30:18.914522 sshd-session[5502]: pam_unix(sshd:session): session closed for user core
Jun 21 05:30:18.924242 systemd[1]: sshd@21-143.198.235.111:22-139.178.68.195:41864.service: Deactivated successfully.
Jun 21 05:30:18.929418 systemd[1]: session-22.scope: Deactivated successfully.
Jun 21 05:30:18.936067 systemd-logind[1491]: Session 22 logged out. Waiting for processes to exit.
Jun 21 05:30:18.938390 systemd-logind[1491]: Removed session 22.
Jun 21 05:30:21.233292 containerd[1532]: time="2025-06-21T05:30:21.233240118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2afff8918e5db2b350290e3b94d2b8d4d34cfe8c0dbebb9bef9b9afe91e1d09f\" id:\"66ea0b4c9c8c78a4635973e493f4ec118bfa7e333447f5bb98004e3fd50df410\" pid:5527 exited_at:{seconds:1750483821 nanos:232810153}"
Jun 21 05:30:23.118481 kubelet[2681]: E0621 05:30:23.118410 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 21 05:30:23.930675 systemd[1]: Started sshd@22-143.198.235.111:22-139.178.68.195:36654.service - OpenSSH per-connection server daemon (139.178.68.195:36654).
Jun 21 05:30:24.039381 sshd[5538]: Accepted publickey for core from 139.178.68.195 port 36654 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o
Jun 21 05:30:24.041738 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:30:24.051535 systemd-logind[1491]: New session 23 of user core.
Jun 21 05:30:24.056746 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 21 05:30:24.385555 sshd[5540]: Connection closed by 139.178.68.195 port 36654
Jun 21 05:30:24.386765 sshd-session[5538]: pam_unix(sshd:session): session closed for user core
Jun 21 05:30:24.397887 systemd[1]: sshd@22-143.198.235.111:22-139.178.68.195:36654.service: Deactivated successfully.
Jun 21 05:30:24.402275 systemd[1]: session-23.scope: Deactivated successfully.
Jun 21 05:30:24.408554 systemd-logind[1491]: Session 23 logged out. Waiting for processes to exit.
Jun 21 05:30:24.411828 systemd-logind[1491]: Removed session 23.
Jun 21 05:30:25.582438 containerd[1532]: time="2025-06-21T05:30:25.582380764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5ef4012b6fe4df5a70d721d3a787d13950cad4f0b4396e462f6ec34a3f50d33\" id:\"85d4b2433170522089b10f5a4e5930128940c90b370d5fff6a9a0d6a2d3e73a2\" pid:5562 exited_at:{seconds:1750483825 nanos:581767717}"