Jun 21 05:29:19.892480 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025 Jun 21 05:29:19.892511 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 05:29:19.892521 kernel: BIOS-provided physical RAM map: Jun 21 05:29:19.892528 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 21 05:29:19.892535 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 21 05:29:19.892541 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 21 05:29:19.892549 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jun 21 05:29:19.892559 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jun 21 05:29:19.892569 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 21 05:29:19.892576 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 21 05:29:19.892583 kernel: NX (Execute Disable) protection: active Jun 21 05:29:19.892590 kernel: APIC: Static calls initialized Jun 21 05:29:19.892597 kernel: SMBIOS 2.8 present. Jun 21 05:29:19.892604 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jun 21 05:29:19.892615 kernel: DMI: Memory slots populated: 1/1 Jun 21 05:29:19.892623 kernel: Hypervisor detected: KVM Jun 21 05:29:19.892634 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 21 05:29:19.892642 kernel: kvm-clock: using sched offset of 4422219564 cycles Jun 21 05:29:19.892650 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 21 05:29:19.892658 kernel: tsc: Detected 2494.140 MHz processor Jun 21 05:29:19.892666 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 21 05:29:19.892674 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 21 05:29:19.892682 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jun 21 05:29:19.892693 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 21 05:29:19.892701 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 21 05:29:19.892709 kernel: ACPI: Early table checksum verification disabled Jun 21 05:29:19.892717 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jun 21 05:29:19.892725 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:29:19.892733 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:29:19.892741 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:29:19.892749 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jun 21 05:29:19.892757 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:29:19.892768 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:29:19.892776 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 05:29:19.892784 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 
BXPC 00000001) Jun 21 05:29:19.892792 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Jun 21 05:29:19.892800 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jun 21 05:29:19.892808 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jun 21 05:29:19.892816 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jun 21 05:29:19.892824 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jun 21 05:29:19.892838 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jun 21 05:29:19.892846 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jun 21 05:29:19.892854 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 21 05:29:19.892863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 21 05:29:19.892871 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Jun 21 05:29:19.892879 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Jun 21 05:29:19.892890 kernel: Zone ranges: Jun 21 05:29:19.892898 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 21 05:29:19.892906 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jun 21 05:29:19.892915 kernel: Normal empty Jun 21 05:29:19.892923 kernel: Device empty Jun 21 05:29:19.892931 kernel: Movable zone start for each node Jun 21 05:29:19.892939 kernel: Early memory node ranges Jun 21 05:29:19.892947 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 21 05:29:19.892956 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jun 21 05:29:19.892966 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jun 21 05:29:19.892975 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 21 05:29:19.892983 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 21 05:29:19.892991 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jun 21 05:29:19.893000 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 21 05:29:19.893008 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 21 05:29:19.893019 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 21 05:29:19.893027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 21 05:29:19.893038 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 21 05:29:19.893050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 21 05:29:19.893061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 21 05:29:19.893069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 21 05:29:19.893077 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 21 05:29:19.893086 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 21 05:29:19.893094 kernel: TSC deadline timer available Jun 21 05:29:19.893102 kernel: CPU topo: Max. logical packages: 1 Jun 21 05:29:19.893111 kernel: CPU topo: Max. logical dies: 1 Jun 21 05:29:19.893141 kernel: CPU topo: Max. dies per package: 1 Jun 21 05:29:19.893150 kernel: CPU topo: Max. threads per core: 1 Jun 21 05:29:19.893161 kernel: CPU topo: Num. cores per package: 2 Jun 21 05:29:19.893170 kernel: CPU topo: Num. 
threads per package: 2 Jun 21 05:29:19.893178 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 21 05:29:19.893186 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 21 05:29:19.893195 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jun 21 05:29:19.893203 kernel: Booting paravirtualized kernel on KVM Jun 21 05:29:19.893212 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 21 05:29:19.893220 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 21 05:29:19.893229 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 21 05:29:19.893240 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 21 05:29:19.893248 kernel: pcpu-alloc: [0] 0 1 Jun 21 05:29:19.893256 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 21 05:29:19.893266 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 05:29:19.893275 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 21 05:29:19.893284 kernel: random: crng init done Jun 21 05:29:19.893292 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 21 05:29:19.893300 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 21 05:29:19.893312 kernel: Fallback order for Node 0: 0 Jun 21 05:29:19.893321 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Jun 21 05:29:19.893329 kernel: Policy zone: DMA32 Jun 21 05:29:19.893338 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 21 05:29:19.893346 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 21 05:29:19.893355 kernel: Kernel/User page tables isolation: enabled Jun 21 05:29:19.893364 kernel: ftrace: allocating 40093 entries in 157 pages Jun 21 05:29:19.893372 kernel: ftrace: allocated 157 pages with 5 groups Jun 21 05:29:19.893380 kernel: Dynamic Preempt: voluntary Jun 21 05:29:19.893392 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 21 05:29:19.893402 kernel: rcu: RCU event tracing is enabled. Jun 21 05:29:19.893411 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 21 05:29:19.893420 kernel: Trampoline variant of Tasks RCU enabled. Jun 21 05:29:19.893428 kernel: Rude variant of Tasks RCU enabled. Jun 21 05:29:19.893437 kernel: Tracing variant of Tasks RCU enabled. Jun 21 05:29:19.893445 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 21 05:29:19.893454 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 21 05:29:19.893462 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 05:29:19.893477 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 05:29:19.893485 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 21 05:29:19.893494 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 21 05:29:19.893503 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 21 05:29:19.893511 kernel: Console: colour VGA+ 80x25 Jun 21 05:29:19.893520 kernel: printk: legacy console [tty0] enabled Jun 21 05:29:19.893529 kernel: printk: legacy console [ttyS0] enabled Jun 21 05:29:19.893537 kernel: ACPI: Core revision 20240827 Jun 21 05:29:19.893546 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 21 05:29:19.893566 kernel: APIC: Switch to symmetric I/O mode setup Jun 21 05:29:19.893575 kernel: x2apic enabled Jun 21 05:29:19.893584 kernel: APIC: Switched APIC routing to: physical x2apic Jun 21 05:29:19.893597 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 21 05:29:19.893608 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jun 21 05:29:19.893618 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Jun 21 05:29:19.893627 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 21 05:29:19.893636 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 21 05:29:19.893645 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 21 05:29:19.893657 kernel: Spectre V2 : Mitigation: Retpolines Jun 21 05:29:19.893666 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 21 05:29:19.893675 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jun 21 05:29:19.893684 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 21 05:29:19.893693 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 21 05:29:19.893702 kernel: MDS: Mitigation: Clear CPU buffers Jun 21 05:29:19.893711 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 21 05:29:19.893723 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 21 05:29:19.893732 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 21 05:29:19.893741 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 21 05:29:19.893750 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 21 05:29:19.893759 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 21 05:29:19.893768 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jun 21 05:29:19.893777 kernel: Freeing SMP alternatives memory: 32K Jun 21 05:29:19.893786 kernel: pid_max: default: 32768 minimum: 301 Jun 21 05:29:19.893795 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 21 05:29:19.893807 kernel: landlock: Up and running. Jun 21 05:29:19.893816 kernel: SELinux: Initializing. Jun 21 05:29:19.893826 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 21 05:29:19.893835 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 21 05:29:19.893844 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jun 21 05:29:19.893853 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jun 21 05:29:19.893862 kernel: signal: max sigframe size: 1776 Jun 21 05:29:19.893872 kernel: rcu: Hierarchical SRCU implementation. Jun 21 05:29:19.893881 kernel: rcu: Max phase no-delay instances is 400. 
Jun 21 05:29:19.893893 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 21 05:29:19.893902 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 21 05:29:19.893911 kernel: smp: Bringing up secondary CPUs ... Jun 21 05:29:19.893920 kernel: smpboot: x86: Booting SMP configuration: Jun 21 05:29:19.893932 kernel: .... node #0, CPUs: #1 Jun 21 05:29:19.893942 kernel: smp: Brought up 1 node, 2 CPUs Jun 21 05:29:19.893951 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Jun 21 05:29:19.893961 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 125140K reserved, 0K cma-reserved) Jun 21 05:29:19.893969 kernel: devtmpfs: initialized Jun 21 05:29:19.893982 kernel: x86/mm: Memory block size: 128MB Jun 21 05:29:19.893991 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 21 05:29:19.894000 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 21 05:29:19.894009 kernel: pinctrl core: initialized pinctrl subsystem Jun 21 05:29:19.894018 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 21 05:29:19.894027 kernel: audit: initializing netlink subsys (disabled) Jun 21 05:29:19.894036 kernel: audit: type=2000 audit(1750483756.271:1): state=initialized audit_enabled=0 res=1 Jun 21 05:29:19.894045 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 21 05:29:19.894054 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 21 05:29:19.894066 kernel: cpuidle: using governor menu Jun 21 05:29:19.894075 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 21 05:29:19.894084 kernel: dca service started, version 1.12.1 Jun 21 05:29:19.894093 kernel: PCI: Using configuration type 1 for base access Jun 21 05:29:19.894102 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
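
A quick cross-check of the figures reported above (illustrative only; every input is a value already quoted in the log): the 2096612K total and 524153 pages follow directly from the "Early memory node ranges", and the BogoMIPS numbers follow from the 2494.140 MHz TSC.

# Arithmetic sketch (not part of the log); inputs are the values quoted in the entries above.
PAGE = 4096  # 4 KiB pages

# "Early memory node ranges" (end addresses are inclusive):
ranges = [
    (0x0000000000001000, 0x000000000009efff),
    (0x0000000000100000, 0x000000007ffdafff),
]
total_bytes = sum(end + 1 - start for start, end in ranges)
print(total_bytes // 1024)   # 2096612  -> "Memory: 1966908K/2096612K available"
print(total_bytes // PAGE)   # 524153   -> "Total pages: 524153"

# With a calibrated TSC the delay loop is preset from the TSC frequency, so BogoMIPS
# comes out at roughly twice the clock in MHz, summed over the two CPUs.
tsc_mhz = 2494.140                  # "tsc: Detected 2494.140 MHz processor"
print(round(tsc_mhz * 2, 2))        # 4988.28 -> "4988.28 BogoMIPS (lpj=2494140)"
print(round(tsc_mhz * 2 * 2, 2))    # 9976.56 -> "Total of 2 processors activated (9976.56 BogoMIPS)"
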
Jun 21 05:29:19.894111 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 21 05:29:19.894135 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 21 05:29:19.894144 kernel: ACPI: Added _OSI(Module Device) Jun 21 05:29:19.894153 kernel: ACPI: Added _OSI(Processor Device) Jun 21 05:29:19.894166 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 21 05:29:19.894175 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 21 05:29:19.894184 kernel: ACPI: Interpreter enabled Jun 21 05:29:19.894193 kernel: ACPI: PM: (supports S0 S5) Jun 21 05:29:19.894202 kernel: ACPI: Using IOAPIC for interrupt routing Jun 21 05:29:19.894211 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 21 05:29:19.894220 kernel: PCI: Using E820 reservations for host bridge windows Jun 21 05:29:19.894233 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 21 05:29:19.894250 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 21 05:29:19.894558 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 21 05:29:19.894717 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 21 05:29:19.894934 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 21 05:29:19.894960 kernel: acpiphp: Slot [3] registered Jun 21 05:29:19.894979 kernel: acpiphp: Slot [4] registered Jun 21 05:29:19.894996 kernel: acpiphp: Slot [5] registered Jun 21 05:29:19.895014 kernel: acpiphp: Slot [6] registered Jun 21 05:29:19.895032 kernel: acpiphp: Slot [7] registered Jun 21 05:29:19.895063 kernel: acpiphp: Slot [8] registered Jun 21 05:29:19.895081 kernel: acpiphp: Slot [9] registered Jun 21 05:29:19.895098 kernel: acpiphp: Slot [10] registered Jun 21 05:29:19.895115 kernel: acpiphp: Slot [11] registered Jun 21 05:29:19.895158 kernel: acpiphp: Slot [12] registered Jun 21 05:29:19.895173 kernel: acpiphp: Slot [13] registered Jun 21 05:29:19.895189 kernel: acpiphp: Slot [14] registered Jun 21 05:29:19.895208 kernel: acpiphp: Slot [15] registered Jun 21 05:29:19.895224 kernel: acpiphp: Slot [16] registered Jun 21 05:29:19.895249 kernel: acpiphp: Slot [17] registered Jun 21 05:29:19.895266 kernel: acpiphp: Slot [18] registered Jun 21 05:29:19.895286 kernel: acpiphp: Slot [19] registered Jun 21 05:29:19.895303 kernel: acpiphp: Slot [20] registered Jun 21 05:29:19.895319 kernel: acpiphp: Slot [21] registered Jun 21 05:29:19.895336 kernel: acpiphp: Slot [22] registered Jun 21 05:29:19.895352 kernel: acpiphp: Slot [23] registered Jun 21 05:29:19.895367 kernel: acpiphp: Slot [24] registered Jun 21 05:29:19.895379 kernel: acpiphp: Slot [25] registered Jun 21 05:29:19.895398 kernel: acpiphp: Slot [26] registered Jun 21 05:29:19.895409 kernel: acpiphp: Slot [27] registered Jun 21 05:29:19.895422 kernel: acpiphp: Slot [28] registered Jun 21 05:29:19.895436 kernel: acpiphp: Slot [29] registered Jun 21 05:29:19.895448 kernel: acpiphp: Slot [30] registered Jun 21 05:29:19.895460 kernel: acpiphp: Slot [31] registered Jun 21 05:29:19.895473 kernel: PCI host bridge to bus 0000:00 Jun 21 05:29:19.895673 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 21 05:29:19.895808 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 21 05:29:19.895938 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 21 05:29:19.896060 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Jun 21 05:29:19.896227 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 21 05:29:19.896362 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 21 05:29:19.896620 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jun 21 05:29:19.896799 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jun 21 05:29:19.897041 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Jun 21 05:29:19.897221 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Jun 21 05:29:19.897377 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jun 21 05:29:19.897533 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jun 21 05:29:19.897680 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jun 21 05:29:19.897836 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Jun 21 05:29:19.898059 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jun 21 05:29:19.898355 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Jun 21 05:29:19.898547 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jun 21 05:29:19.898709 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 21 05:29:19.898849 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 21 05:29:19.899012 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jun 21 05:29:19.899210 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Jun 21 05:29:19.899390 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Jun 21 05:29:19.899534 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Jun 21 05:29:19.901293 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Jun 21 05:29:19.901474 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 21 05:29:19.901643 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jun 21 05:29:19.901785 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Jun 21 05:29:19.901922 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Jun 21 05:29:19.902075 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Jun 21 05:29:19.903605 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jun 21 05:29:19.903807 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Jun 21 05:29:19.903966 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Jun 21 05:29:19.905194 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Jun 21 05:29:19.905444 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jun 21 05:29:19.905608 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Jun 21 05:29:19.905774 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Jun 21 05:29:19.905922 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Jun 21 05:29:19.906195 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jun 21 05:29:19.906357 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Jun 21 05:29:19.906499 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Jun 21 05:29:19.906667 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Jun 21 05:29:19.906821 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jun 21 05:29:19.906994 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Jun 21 05:29:19.907169 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Jun 21 05:29:19.907323 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Jun 21 05:29:19.909365 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Jun 21 05:29:19.909571 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Jun 21 05:29:19.909737 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Jun 21 05:29:19.909772 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 21 05:29:19.909788 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 21 05:29:19.909801 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 21 05:29:19.909815 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 21 05:29:19.909839 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 21 05:29:19.909853 kernel: iommu: Default domain type: Translated Jun 21 05:29:19.909868 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 21 05:29:19.909882 kernel: PCI: Using ACPI for IRQ routing Jun 21 05:29:19.909896 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 21 05:29:19.909917 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 21 05:29:19.909932 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jun 21 05:29:19.910101 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 21 05:29:19.912554 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 21 05:29:19.912736 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 21 05:29:19.912758 kernel: vgaarb: loaded Jun 21 05:29:19.912774 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 21 05:29:19.912789 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 21 05:29:19.912818 kernel: clocksource: Switched to clocksource kvm-clock Jun 21 05:29:19.912832 kernel: VFS: Disk quotas dquot_6.6.0 Jun 21 05:29:19.912848 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 21 05:29:19.912863 kernel: pnp: PnP ACPI init Jun 21 05:29:19.912878 kernel: pnp: PnP ACPI: found 4 devices Jun 21 05:29:19.912893 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 21 05:29:19.912907 kernel: NET: Registered PF_INET protocol family Jun 21 05:29:19.912931 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 21 05:29:19.912946 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 21 05:29:19.912965 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 21 05:29:19.912985 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 21 05:29:19.912999 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 21 05:29:19.913014 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 21 05:29:19.913028 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 21 05:29:19.913043 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 21 05:29:19.913057 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 21 05:29:19.913071 kernel: NET: Registered PF_XDP protocol family Jun 21 05:29:19.913255 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 21 05:29:19.913401 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Jun 21 05:29:19.913528 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 21 05:29:19.913664 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 21 05:29:19.913792 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 21 05:29:19.913945 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 21 05:29:19.914087 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 21 05:29:19.914106 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 21 05:29:19.915335 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 26332 usecs Jun 21 05:29:19.915368 kernel: PCI: CLS 0 bytes, default 64 Jun 21 05:29:19.915384 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 21 05:29:19.915400 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jun 21 05:29:19.915414 kernel: Initialise system trusted keyrings Jun 21 05:29:19.915429 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 21 05:29:19.915444 kernel: Key type asymmetric registered Jun 21 05:29:19.915460 kernel: Asymmetric key parser 'x509' registered Jun 21 05:29:19.915475 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 21 05:29:19.915494 kernel: io scheduler mq-deadline registered Jun 21 05:29:19.915509 kernel: io scheduler kyber registered Jun 21 05:29:19.915524 kernel: io scheduler bfq registered Jun 21 05:29:19.915539 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 21 05:29:19.915555 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 21 05:29:19.915571 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 21 05:29:19.915587 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 21 05:29:19.915602 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 21 05:29:19.915617 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 21 05:29:19.915633 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 21 05:29:19.915651 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 21 05:29:19.915666 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 21 05:29:19.915682 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 21 05:29:19.915852 kernel: rtc_cmos 00:03: RTC can wake from S4 Jun 21 05:29:19.915978 kernel: rtc_cmos 00:03: registered as rtc0 Jun 21 05:29:19.916104 kernel: rtc_cmos 00:03: setting system clock to 2025-06-21T05:29:19 UTC (1750483759) Jun 21 05:29:19.917300 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jun 21 05:29:19.917330 kernel: intel_pstate: CPU model not supported Jun 21 05:29:19.917346 kernel: NET: Registered PF_INET6 protocol family Jun 21 05:29:19.917361 kernel: Segment Routing with IPv6 Jun 21 05:29:19.917376 kernel: In-situ OAM (IOAM) with IPv6 Jun 21 05:29:19.917391 kernel: NET: Registered PF_PACKET protocol family Jun 21 05:29:19.917405 kernel: Key type dns_resolver registered Jun 21 05:29:19.917419 kernel: IPI shorthand broadcast: enabled Jun 21 05:29:19.917432 kernel: sched_clock: Marking stable (3246006324, 88705092)->(3435632197, -100920781) Jun 21 05:29:19.917448 kernel: registered taskstats version 1 Jun 21 05:29:19.917468 kernel: Loading compiled-in X.509 certificates Jun 21 05:29:19.917483 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7' Jun 
21 05:29:19.917498 kernel: Demotion targets for Node 0: null Jun 21 05:29:19.917513 kernel: Key type .fscrypt registered Jun 21 05:29:19.917528 kernel: Key type fscrypt-provisioning registered Jun 21 05:29:19.917546 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 21 05:29:19.917585 kernel: ima: Allocated hash algorithm: sha1 Jun 21 05:29:19.917604 kernel: ima: No architecture policies found Jun 21 05:29:19.917620 kernel: clk: Disabling unused clocks Jun 21 05:29:19.917639 kernel: Warning: unable to open an initial console. Jun 21 05:29:19.917655 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 21 05:29:19.917671 kernel: Write protecting the kernel read-only data: 24576k Jun 21 05:29:19.917688 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 21 05:29:19.917704 kernel: Run /init as init process Jun 21 05:29:19.917719 kernel: with arguments: Jun 21 05:29:19.917736 kernel: /init Jun 21 05:29:19.917751 kernel: with environment: Jun 21 05:29:19.917766 kernel: HOME=/ Jun 21 05:29:19.917785 kernel: TERM=linux Jun 21 05:29:19.917800 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 21 05:29:19.917819 systemd[1]: Successfully made /usr/ read-only. Jun 21 05:29:19.917841 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 05:29:19.917858 systemd[1]: Detected virtualization kvm. Jun 21 05:29:19.917874 systemd[1]: Detected architecture x86-64. Jun 21 05:29:19.917890 systemd[1]: Running in initrd. Jun 21 05:29:19.917910 systemd[1]: No hostname configured, using default hostname. Jun 21 05:29:19.917924 systemd[1]: Hostname set to . Jun 21 05:29:19.917938 systemd[1]: Initializing machine ID from VM UUID. Jun 21 05:29:19.917951 systemd[1]: Queued start job for default target initrd.target. Jun 21 05:29:19.917965 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 05:29:19.917978 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 05:29:19.917993 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 21 05:29:19.918008 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 05:29:19.918029 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 21 05:29:19.918048 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 21 05:29:19.918066 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 21 05:29:19.918086 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 21 05:29:19.918106 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 05:29:19.920184 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 05:29:19.920231 systemd[1]: Reached target paths.target - Path Units. Jun 21 05:29:19.920248 systemd[1]: Reached target slices.target - Slice Units. Jun 21 05:29:19.920265 systemd[1]: Reached target swap.target - Swaps. 
Jun 21 05:29:19.920287 systemd[1]: Reached target timers.target - Timer Units. Jun 21 05:29:19.920305 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 05:29:19.920321 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 05:29:19.920336 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 21 05:29:19.920363 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 21 05:29:19.920379 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 05:29:19.920393 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 05:29:19.920408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 05:29:19.920424 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 05:29:19.920438 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 21 05:29:19.920453 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 05:29:19.920467 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 21 05:29:19.920504 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 21 05:29:19.920521 systemd[1]: Starting systemd-fsck-usr.service... Jun 21 05:29:19.920536 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 05:29:19.920551 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 05:29:19.920567 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:29:19.920583 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 21 05:29:19.920605 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 05:29:19.920621 systemd[1]: Finished systemd-fsck-usr.service. Jun 21 05:29:19.920637 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 05:29:19.920727 systemd-journald[211]: Collecting audit messages is disabled. Jun 21 05:29:19.920772 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 05:29:19.920799 systemd-journald[211]: Journal started Jun 21 05:29:19.920838 systemd-journald[211]: Runtime Journal (/run/log/journal/8f82529953ea4fcd80b8dd3b41eebf9f) is 4.9M, max 39.5M, 34.6M free. Jun 21 05:29:19.882185 systemd-modules-load[212]: Inserted module 'overlay' Jun 21 05:29:19.955403 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 21 05:29:19.955443 kernel: Bridge firewalling registered Jun 21 05:29:19.955461 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 05:29:19.931425 systemd-modules-load[212]: Inserted module 'br_netfilter' Jun 21 05:29:19.956199 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 05:29:19.957056 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:29:19.962597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 21 05:29:19.965315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 05:29:19.970387 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jun 21 05:29:19.975335 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 05:29:19.992696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:29:19.996774 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:29:20.004409 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 21 05:29:20.008018 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 05:29:20.009996 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 21 05:29:20.014057 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 05:29:20.017431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 05:29:20.048958 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 05:29:20.074082 systemd-resolved[251]: Positive Trust Anchors: Jun 21 05:29:20.074743 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 05:29:20.074797 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 05:29:20.081035 systemd-resolved[251]: Defaulting to hostname 'linux'. Jun 21 05:29:20.083838 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 05:29:20.084717 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 05:29:20.177154 kernel: SCSI subsystem initialized Jun 21 05:29:20.188165 kernel: Loading iSCSI transport class v2.0-870. Jun 21 05:29:20.200172 kernel: iscsi: registered transport (tcp) Jun 21 05:29:20.228244 kernel: iscsi: registered transport (qla4xxx) Jun 21 05:29:20.228328 kernel: QLogic iSCSI HBA Driver Jun 21 05:29:20.255237 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 05:29:20.277362 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 05:29:20.281010 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 05:29:20.345486 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 05:29:20.348345 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 21 05:29:20.411183 kernel: raid6: avx2x4 gen() 16905 MB/s Jun 21 05:29:20.428185 kernel: raid6: avx2x2 gen() 16820 MB/s Jun 21 05:29:20.445409 kernel: raid6: avx2x1 gen() 12552 MB/s Jun 21 05:29:20.445508 kernel: raid6: using algorithm avx2x4 gen() 16905 MB/s Jun 21 05:29:20.463238 kernel: raid6: .... xor() 6528 MB/s, rmw enabled Jun 21 05:29:20.463322 kernel: raid6: using avx2x2 recovery algorithm Jun 21 05:29:20.488172 kernel: xor: automatically using best checksumming function avx Jun 21 05:29:20.700203 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 05:29:20.711709 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 05:29:20.714814 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:29:20.750542 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jun 21 05:29:20.759327 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:29:20.763860 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 05:29:20.806674 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Jun 21 05:29:20.849635 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 05:29:20.853402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 05:29:20.934740 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:29:20.940053 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 05:29:21.022166 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Jun 21 05:29:21.030156 kernel: scsi host0: Virtio SCSI HBA Jun 21 05:29:21.075215 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jun 21 05:29:21.081162 kernel: ACPI: bus type USB registered Jun 21 05:29:21.083948 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jun 21 05:29:21.084452 kernel: usbcore: registered new interface driver usbfs Jun 21 05:29:21.084473 kernel: usbcore: registered new interface driver hub Jun 21 05:29:21.086178 kernel: cryptd: max_cpu_qlen set to 1000 Jun 21 05:29:21.093197 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jun 21 05:29:21.096151 kernel: usbcore: registered new device driver usb Jun 21 05:29:21.111668 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 21 05:29:21.111729 kernel: GPT:9289727 != 125829119 Jun 21 05:29:21.111743 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 21 05:29:21.111755 kernel: GPT:9289727 != 125829119 Jun 21 05:29:21.113379 kernel: GPT: Use GNU Parted to correct GPT errors. 
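
The GPT complaints above are arithmetically consistent with a small disk image having been written to a much larger virtual disk, so the backup GPT header still sits where the image originally ended rather than at the end of /dev/vda. A sketch of that arithmetic (illustrative only, using the sector counts quoted in the log):

# 512-byte logical sectors; numbers taken from "GPT:9289727 != 125829119" and the vda size line above.
SECTOR = 512
disk_last_lba = 125829119   # actual last LBA of vda (125829120 sectors in total)
image_alt_lba = 9289727     # where the primary header expects the backup header to be
print((disk_last_lba + 1) * SECTOR / 2**30)             # 60.0  -> "[vda] ... (64.4 GB/60.0 GiB)"
print(round((image_alt_lba + 1) * SECTOR / 2**30, 2))   # ~4.43 GiB: apparent size of the written image
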
Jun 21 05:29:21.113447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:29:21.124159 kernel: AES CTR mode by8 optimization enabled Jun 21 05:29:21.142223 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jun 21 05:29:21.144141 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jun 21 05:29:21.145161 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jun 21 05:29:21.148185 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jun 21 05:29:21.153167 kernel: hub 1-0:1.0: USB hub found Jun 21 05:29:21.153460 kernel: hub 1-0:1.0: 2 ports detected Jun 21 05:29:21.169271 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jun 21 05:29:21.169566 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Jun 21 05:29:21.171157 kernel: libata version 3.00 loaded. Jun 21 05:29:21.180804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:29:21.181013 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:29:21.183231 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:29:21.185573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:29:21.186722 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:29:21.202626 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 21 05:29:21.212155 kernel: scsi host1: ata_piix Jun 21 05:29:21.212518 kernel: scsi host2: ata_piix Jun 21 05:29:21.212697 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Jun 21 05:29:21.212718 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Jun 21 05:29:21.280637 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 21 05:29:21.291000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:29:21.306136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 05:29:21.320647 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 21 05:29:21.329141 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 21 05:29:21.329744 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 21 05:29:21.331922 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 05:29:21.350796 disk-uuid[611]: Primary Header is updated. Jun 21 05:29:21.350796 disk-uuid[611]: Secondary Entries is updated. Jun 21 05:29:21.350796 disk-uuid[611]: Secondary Header is updated. Jun 21 05:29:21.360343 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:29:21.368951 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:29:21.545318 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 05:29:21.575465 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 05:29:21.576069 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 05:29:21.577281 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 05:29:21.579681 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 05:29:21.625396 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jun 21 05:29:22.366324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:29:22.368265 disk-uuid[612]: The operation has completed successfully. Jun 21 05:29:22.441669 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 05:29:22.441828 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 05:29:22.470954 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 05:29:22.488084 sh[636]: Success Jun 21 05:29:22.511186 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 05:29:22.511316 kernel: device-mapper: uevent: version 1.0.3 Jun 21 05:29:22.511335 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 05:29:22.524352 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jun 21 05:29:22.589953 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 05:29:22.594235 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 21 05:29:22.613093 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 05:29:22.625455 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 05:29:22.625564 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (648) Jun 21 05:29:22.627350 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9 Jun 21 05:29:22.629340 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:29:22.629415 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 05:29:22.637740 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 05:29:22.638882 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 05:29:22.639431 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 05:29:22.640634 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 05:29:22.643945 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 05:29:22.679314 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (679) Jun 21 05:29:22.682518 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:29:22.682615 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:29:22.683485 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:29:22.693476 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:29:22.693540 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 05:29:22.697606 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 21 05:29:22.799061 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 05:29:22.805397 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jun 21 05:29:22.862396 systemd-networkd[819]: lo: Link UP Jun 21 05:29:22.862415 systemd-networkd[819]: lo: Gained carrier Jun 21 05:29:22.864968 systemd-networkd[819]: Enumeration completed Jun 21 05:29:22.866472 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 21 05:29:22.866479 systemd-networkd[819]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jun 21 05:29:22.868352 systemd-networkd[819]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 05:29:22.868365 systemd-networkd[819]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 05:29:22.868549 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 05:29:22.870523 systemd-networkd[819]: eth0: Link UP Jun 21 05:29:22.870529 systemd-networkd[819]: eth0: Gained carrier Jun 21 05:29:22.870548 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 21 05:29:22.870632 systemd[1]: Reached target network.target - Network. Jun 21 05:29:22.877580 systemd-networkd[819]: eth1: Link UP Jun 21 05:29:22.877587 systemd-networkd[819]: eth1: Gained carrier Jun 21 05:29:22.877608 systemd-networkd[819]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 05:29:22.899261 systemd-networkd[819]: eth0: DHCPv4 address 164.92.73.218/20, gateway 164.92.64.1 acquired from 169.254.169.253 Jun 21 05:29:22.905315 systemd-networkd[819]: eth1: DHCPv4 address 10.124.0.15/20 acquired from 169.254.169.253 Jun 21 05:29:22.921266 ignition[724]: Ignition 2.21.0 Jun 21 05:29:22.921281 ignition[724]: Stage: fetch-offline Jun 21 05:29:22.921325 ignition[724]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:29:22.921334 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:29:22.921428 ignition[724]: parsed url from cmdline: "" Jun 21 05:29:22.921433 ignition[724]: no config URL provided Jun 21 05:29:22.921440 ignition[724]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 05:29:22.921450 ignition[724]: no config at "/usr/lib/ignition/user.ign" Jun 21 05:29:22.925602 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 05:29:22.921463 ignition[724]: failed to fetch config: resource requires networking Jun 21 05:29:22.922728 ignition[724]: Ignition finished successfully Jun 21 05:29:22.929359 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 21 05:29:22.967201 ignition[829]: Ignition 2.21.0 Jun 21 05:29:22.967216 ignition[829]: Stage: fetch Jun 21 05:29:22.967373 ignition[829]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:29:22.967384 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:29:22.967474 ignition[829]: parsed url from cmdline: "" Jun 21 05:29:22.967478 ignition[829]: no config URL provided Jun 21 05:29:22.967483 ignition[829]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 05:29:22.967491 ignition[829]: no config at "/usr/lib/ignition/user.ign" Jun 21 05:29:22.967531 ignition[829]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jun 21 05:29:22.998014 ignition[829]: GET result: OK Jun 21 05:29:22.998166 ignition[829]: parsing config with SHA512: 01307ae4b5b95a811eb12388ff603cd053a5af7119ac465b1492fd51446c7d2723a579f0bf18fc72375d2ee9a09ceae7072c25504a34f993f3880c653c65fc31 Jun 21 05:29:23.005324 unknown[829]: fetched base config from "system" Jun 21 05:29:23.005336 unknown[829]: fetched base config from "system" Jun 21 05:29:23.005670 ignition[829]: fetch: fetch complete Jun 21 05:29:23.005342 unknown[829]: fetched user config from "digitalocean" Jun 21 05:29:23.005676 ignition[829]: fetch: fetch passed Jun 21 05:29:23.008682 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 21 05:29:23.005740 ignition[829]: Ignition finished successfully Jun 21 05:29:23.012340 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 05:29:23.064527 ignition[835]: Ignition 2.21.0 Jun 21 05:29:23.065168 ignition[835]: Stage: kargs Jun 21 05:29:23.065352 ignition[835]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:29:23.065363 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:29:23.066767 ignition[835]: kargs: kargs passed Jun 21 05:29:23.066866 ignition[835]: Ignition finished successfully Jun 21 05:29:23.070217 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 05:29:23.073071 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 21 05:29:23.115023 ignition[842]: Ignition 2.21.0 Jun 21 05:29:23.115040 ignition[842]: Stage: disks Jun 21 05:29:23.115264 ignition[842]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:29:23.115278 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:29:23.117903 ignition[842]: disks: disks passed Jun 21 05:29:23.118372 ignition[842]: Ignition finished successfully Jun 21 05:29:23.120760 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 05:29:23.121884 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 05:29:23.122743 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 05:29:23.123641 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 05:29:23.124337 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 05:29:23.125235 systemd[1]: Reached target basic.target - Basic System. Jun 21 05:29:23.126669 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 21 05:29:23.158692 systemd-fsck[852]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 21 05:29:23.161066 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 05:29:23.163849 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jun 21 05:29:23.284169 kernel: EXT4-fs (vda9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none. Jun 21 05:29:23.286874 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 05:29:23.288022 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 05:29:23.290767 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 05:29:23.292838 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 05:29:23.302300 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jun 21 05:29:23.304950 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 21 05:29:23.307456 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 05:29:23.307572 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 05:29:23.315160 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 05:29:23.320567 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (860) Jun 21 05:29:23.326409 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:29:23.326499 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:29:23.326515 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:29:23.326507 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 21 05:29:23.339155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 05:29:23.444001 coreos-metadata[863]: Jun 21 05:29:23.443 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:29:23.457761 coreos-metadata[863]: Jun 21 05:29:23.457 INFO Fetch successful Jun 21 05:29:23.473293 coreos-metadata[863]: Jun 21 05:29:23.473 INFO wrote hostname ci-4372.0.0-0-a0fa6d352b to /sysroot/etc/hostname Jun 21 05:29:23.476561 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 21 05:29:23.478414 initrd-setup-root[891]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 05:29:23.482477 coreos-metadata[862]: Jun 21 05:29:23.482 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:29:23.486303 initrd-setup-root[898]: cut: /sysroot/etc/group: No such file or directory Jun 21 05:29:23.491587 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 05:29:23.495203 coreos-metadata[862]: Jun 21 05:29:23.494 INFO Fetch successful Jun 21 05:29:23.499355 initrd-setup-root[912]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 05:29:23.502885 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jun 21 05:29:23.503315 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jun 21 05:29:23.613670 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 05:29:23.615904 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 05:29:23.617609 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 05:29:23.641492 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 05:29:23.643442 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:29:23.665728 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 21 05:29:23.692401 ignition[980]: INFO : Ignition 2.21.0 Jun 21 05:29:23.692401 ignition[980]: INFO : Stage: mount Jun 21 05:29:23.693365 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:29:23.693365 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:29:23.694267 ignition[980]: INFO : mount: mount passed Jun 21 05:29:23.694267 ignition[980]: INFO : Ignition finished successfully Jun 21 05:29:23.695701 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 05:29:23.698051 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 05:29:23.720981 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 05:29:23.748171 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (993) Jun 21 05:29:23.750517 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:29:23.750582 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:29:23.750596 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:29:23.755643 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 05:29:23.792841 ignition[1009]: INFO : Ignition 2.21.0 Jun 21 05:29:23.792841 ignition[1009]: INFO : Stage: files Jun 21 05:29:23.792841 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:29:23.792841 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:29:23.794856 ignition[1009]: DEBUG : files: compiled without relabeling support, skipping Jun 21 05:29:23.794856 ignition[1009]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 05:29:23.794856 ignition[1009]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 05:29:23.797163 ignition[1009]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 05:29:23.797698 ignition[1009]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 05:29:23.797698 ignition[1009]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 05:29:23.797623 unknown[1009]: wrote ssh authorized keys file for user: core Jun 21 05:29:23.799343 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 21 05:29:23.799343 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jun 21 05:29:23.853873 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 05:29:23.942059 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 21 05:29:23.943087 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 21 05:29:23.943087 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 21 05:29:23.943087 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 05:29:23.943087 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 05:29:23.943087 
ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 05:29:23.943087 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 05:29:23.943087 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 05:29:23.943087 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 05:29:23.951478 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 05:29:23.951478 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 05:29:23.951478 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 21 05:29:23.951478 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 21 05:29:23.951478 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 21 05:29:23.951478 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jun 21 05:29:24.328676 systemd-networkd[819]: eth0: Gained IPv6LL Jun 21 05:29:24.456464 systemd-networkd[819]: eth1: Gained IPv6LL Jun 21 05:29:24.706081 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 21 05:29:26.050730 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 21 05:29:26.050730 ignition[1009]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 21 05:29:26.053305 ignition[1009]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 05:29:26.054020 ignition[1009]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 05:29:26.054020 ignition[1009]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 21 05:29:26.054020 ignition[1009]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 21 05:29:26.056861 ignition[1009]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 21 05:29:26.056861 ignition[1009]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 21 05:29:26.056861 ignition[1009]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 21 05:29:26.056861 ignition[1009]: INFO : files: files passed Jun 21 05:29:26.056861 ignition[1009]: INFO : Ignition finished successfully Jun 21 05:29:26.056504 systemd[1]: Finished ignition-files.service - Ignition (files). 
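[Editor's note] Besides the downloads, the files stage above performs two small filesystem operations: op(9) links /etc/extensions/kubernetes.raw to the sysext image under /opt, and op(d) marks prepare-helm.service as preset-enabled. A hedged Go sketch of just those two steps is below; the preset file name is an assumption, not something the log confirms:

```go
// Hypothetical sketch of the non-download operations logged by the
// files stage: the sysext symlink and an "enable" preset drop-in.
package main

import "os"

func main() {
	const root = "/sysroot"

	// op(9): /etc/extensions/kubernetes.raw -> the image under /opt.
	if err := os.MkdirAll(root+"/etc/extensions", 0o755); err != nil {
		panic(err)
	}
	if err := os.Symlink(
		"/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
		root+"/etc/extensions/kubernetes.raw",
	); err != nil {
		panic(err)
	}

	// op(d): "setting preset to enabled" amounts to a preset file with an
	// `enable` directive for the unit (file name below is assumed).
	preset := "enable prepare-helm.service\n"
	if err := os.WriteFile(
		root+"/etc/systemd/system-preset/20-ignition.preset",
		[]byte(preset), 0o644,
	); err != nil {
		panic(err)
	}
}
```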
Jun 21 05:29:26.060432 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 21 05:29:26.064336 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 21 05:29:26.081339 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 21 05:29:26.081456 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 21 05:29:26.090674 initrd-setup-root-after-ignition[1040]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 05:29:26.090674 initrd-setup-root-after-ignition[1040]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 21 05:29:26.093705 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 05:29:26.094449 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 05:29:26.095611 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 21 05:29:26.097239 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 21 05:29:26.150824 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 21 05:29:26.151514 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 21 05:29:26.152610 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 21 05:29:26.153097 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 21 05:29:26.153990 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 21 05:29:26.155021 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 21 05:29:26.197788 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 05:29:26.200185 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 21 05:29:26.227939 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 21 05:29:26.229059 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 05:29:26.230021 systemd[1]: Stopped target timers.target - Timer Units. Jun 21 05:29:26.230427 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 21 05:29:26.230562 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 05:29:26.231605 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 21 05:29:26.232058 systemd[1]: Stopped target basic.target - Basic System. Jun 21 05:29:26.232801 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 21 05:29:26.233354 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 05:29:26.233963 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 21 05:29:26.234658 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 21 05:29:26.235344 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 21 05:29:26.235991 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 05:29:26.236851 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 21 05:29:26.237427 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 21 05:29:26.238170 systemd[1]: Stopped target swap.target - Swaps. 
Jun 21 05:29:26.238771 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 21 05:29:26.238908 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 21 05:29:26.239734 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 21 05:29:26.240541 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 05:29:26.241196 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 21 05:29:26.241327 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 05:29:26.241827 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 21 05:29:26.241978 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 21 05:29:26.242932 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 21 05:29:26.243058 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 05:29:26.243744 systemd[1]: ignition-files.service: Deactivated successfully. Jun 21 05:29:26.243879 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 21 05:29:26.244416 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 21 05:29:26.244548 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 21 05:29:26.247271 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 21 05:29:26.247804 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 21 05:29:26.247989 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 05:29:26.251227 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 21 05:29:26.251615 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 21 05:29:26.251779 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:29:26.253972 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 21 05:29:26.254214 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 05:29:26.264987 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 21 05:29:26.265089 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 21 05:29:26.285427 ignition[1064]: INFO : Ignition 2.21.0 Jun 21 05:29:26.285427 ignition[1064]: INFO : Stage: umount Jun 21 05:29:26.286658 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:29:26.286658 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 21 05:29:26.290081 ignition[1064]: INFO : umount: umount passed Jun 21 05:29:26.286756 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 21 05:29:26.291131 ignition[1064]: INFO : Ignition finished successfully Jun 21 05:29:26.291801 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 21 05:29:26.291915 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 21 05:29:26.292895 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 21 05:29:26.292980 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 21 05:29:26.294321 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 21 05:29:26.294399 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 21 05:29:26.295052 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 21 05:29:26.295104 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jun 21 05:29:26.295779 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 21 05:29:26.295848 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 21 05:29:26.296436 systemd[1]: Stopped target network.target - Network. Jun 21 05:29:26.296977 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 21 05:29:26.297030 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 05:29:26.297732 systemd[1]: Stopped target paths.target - Path Units. Jun 21 05:29:26.298451 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 21 05:29:26.302210 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 05:29:26.302643 systemd[1]: Stopped target slices.target - Slice Units. Jun 21 05:29:26.303554 systemd[1]: Stopped target sockets.target - Socket Units. Jun 21 05:29:26.304407 systemd[1]: iscsid.socket: Deactivated successfully. Jun 21 05:29:26.304464 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 05:29:26.305083 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 21 05:29:26.305145 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 05:29:26.305627 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 21 05:29:26.305704 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 21 05:29:26.306279 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 21 05:29:26.306342 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 21 05:29:26.306839 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 21 05:29:26.306921 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 21 05:29:26.307955 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 21 05:29:26.308553 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 21 05:29:26.314838 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 21 05:29:26.315299 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 21 05:29:26.318761 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 21 05:29:26.319828 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 21 05:29:26.319932 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 05:29:26.321620 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:29:26.325098 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 21 05:29:26.325288 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 21 05:29:26.327034 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 21 05:29:26.327294 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 21 05:29:26.327972 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 21 05:29:26.328012 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 21 05:29:26.329640 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 21 05:29:26.330390 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 21 05:29:26.330461 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 05:29:26.330976 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jun 21 05:29:26.331029 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:29:26.331506 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 21 05:29:26.331552 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 21 05:29:26.331922 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:29:26.335544 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 05:29:26.351524 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 21 05:29:26.351725 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:29:26.353943 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 21 05:29:26.354037 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 21 05:29:26.354822 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 21 05:29:26.354856 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 05:29:26.357157 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 21 05:29:26.357220 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 21 05:29:26.357745 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 21 05:29:26.357792 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 21 05:29:26.358229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 21 05:29:26.358275 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 05:29:26.360652 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 21 05:29:26.363094 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 21 05:29:26.363785 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 05:29:26.365752 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 21 05:29:26.365838 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:29:26.367279 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:29:26.367353 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:29:26.369802 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 21 05:29:26.371462 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 21 05:29:26.381067 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 21 05:29:26.381286 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 21 05:29:26.382985 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 21 05:29:26.385220 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 21 05:29:26.401566 systemd[1]: Switching root. Jun 21 05:29:26.447853 systemd-journald[211]: Journal stopped Jun 21 05:29:27.750463 systemd-journald[211]: Received SIGTERM from PID 1 (systemd). 
Jun 21 05:29:27.750538 kernel: SELinux: policy capability network_peer_controls=1 Jun 21 05:29:27.750563 kernel: SELinux: policy capability open_perms=1 Jun 21 05:29:27.750575 kernel: SELinux: policy capability extended_socket_class=1 Jun 21 05:29:27.750587 kernel: SELinux: policy capability always_check_network=0 Jun 21 05:29:27.750601 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 21 05:29:27.750617 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 21 05:29:27.750629 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 21 05:29:27.750641 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 21 05:29:27.750652 kernel: SELinux: policy capability userspace_initial_context=0 Jun 21 05:29:27.750664 kernel: audit: type=1403 audit(1750483766.621:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 21 05:29:27.750677 systemd[1]: Successfully loaded SELinux policy in 55.440ms. Jun 21 05:29:27.750703 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.102ms. Jun 21 05:29:27.750717 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 05:29:27.750741 systemd[1]: Detected virtualization kvm. Jun 21 05:29:27.750754 systemd[1]: Detected architecture x86-64. Jun 21 05:29:27.750766 systemd[1]: Detected first boot. Jun 21 05:29:27.750779 systemd[1]: Hostname set to . Jun 21 05:29:27.750793 systemd[1]: Initializing machine ID from VM UUID. Jun 21 05:29:27.750806 zram_generator::config[1111]: No configuration found. Jun 21 05:29:27.750822 kernel: Guest personality initialized and is inactive Jun 21 05:29:27.750846 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 21 05:29:27.750863 kernel: Initialized host personality Jun 21 05:29:27.750880 kernel: NET: Registered PF_VSOCK protocol family Jun 21 05:29:27.750898 systemd[1]: Populated /etc with preset unit settings. Jun 21 05:29:27.750915 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 21 05:29:27.750928 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 21 05:29:27.750943 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 21 05:29:27.750960 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 21 05:29:27.750973 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 21 05:29:27.750985 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 21 05:29:27.751002 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 21 05:29:27.751014 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 21 05:29:27.751026 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 21 05:29:27.751038 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 21 05:29:27.751051 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 21 05:29:27.751063 systemd[1]: Created slice user.slice - User and Session Slice. Jun 21 05:29:27.751075 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jun 21 05:29:27.751088 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 05:29:27.751100 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 21 05:29:27.751115 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 21 05:29:27.751142 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 21 05:29:27.751154 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 05:29:27.751168 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 21 05:29:27.751184 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 05:29:27.751197 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 05:29:27.751213 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 21 05:29:27.751228 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 21 05:29:27.751242 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 21 05:29:27.751260 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 21 05:29:27.751273 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 05:29:27.751287 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 05:29:27.751299 systemd[1]: Reached target slices.target - Slice Units. Jun 21 05:29:27.751312 systemd[1]: Reached target swap.target - Swaps. Jun 21 05:29:27.751324 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 21 05:29:27.751339 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 21 05:29:27.751352 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 21 05:29:27.751366 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 05:29:27.751382 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 05:29:27.751399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 05:29:27.751412 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 21 05:29:27.751424 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 21 05:29:27.751436 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 21 05:29:27.751448 systemd[1]: Mounting media.mount - External Media Directory... Jun 21 05:29:27.751463 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:29:27.751479 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 21 05:29:27.751492 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 21 05:29:27.751505 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 21 05:29:27.751518 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 21 05:29:27.751530 systemd[1]: Reached target machines.target - Containers. Jun 21 05:29:27.751543 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jun 21 05:29:27.751555 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:29:27.751571 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 05:29:27.751586 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 21 05:29:27.751600 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 05:29:27.751613 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 05:29:27.751625 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 05:29:27.751638 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 21 05:29:27.751665 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 05:29:27.751678 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 21 05:29:27.751692 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 21 05:29:27.751707 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 21 05:29:27.751719 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 21 05:29:27.751733 systemd[1]: Stopped systemd-fsck-usr.service. Jun 21 05:29:27.751747 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:29:27.751759 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 05:29:27.751772 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 05:29:27.751784 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 05:29:27.751797 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 21 05:29:27.751814 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 21 05:29:27.751827 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 05:29:27.751841 systemd[1]: verity-setup.service: Deactivated successfully. Jun 21 05:29:27.751853 systemd[1]: Stopped verity-setup.service. Jun 21 05:29:27.751866 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:29:27.751879 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 21 05:29:27.751895 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 21 05:29:27.751909 systemd[1]: Mounted media.mount - External Media Directory. Jun 21 05:29:27.751926 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 21 05:29:27.751939 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 21 05:29:27.751967 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 21 05:29:27.751982 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 05:29:27.751995 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 21 05:29:27.752371 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 21 05:29:27.752395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jun 21 05:29:27.752414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 05:29:27.752431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 05:29:27.752444 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 05:29:27.752508 systemd-journald[1178]: Collecting audit messages is disabled. Jun 21 05:29:27.752553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 05:29:27.752566 kernel: loop: module loaded Jun 21 05:29:27.752579 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 05:29:27.752593 systemd-journald[1178]: Journal started Jun 21 05:29:27.752618 systemd-journald[1178]: Runtime Journal (/run/log/journal/8f82529953ea4fcd80b8dd3b41eebf9f) is 4.9M, max 39.5M, 34.6M free. Jun 21 05:29:27.410951 systemd[1]: Queued start job for default target multi-user.target. Jun 21 05:29:27.434314 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 21 05:29:27.435066 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 21 05:29:27.757170 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 05:29:27.760846 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 05:29:27.761194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 05:29:27.789632 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 21 05:29:27.805611 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 05:29:27.810226 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 21 05:29:27.810812 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 21 05:29:27.810868 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 05:29:27.814183 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 21 05:29:27.818516 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 21 05:29:27.819735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:29:27.827525 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 21 05:29:27.834071 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 21 05:29:27.836349 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 05:29:27.846428 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 21 05:29:27.848323 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 05:29:27.852546 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 05:29:27.857558 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 05:29:27.860642 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 21 05:29:27.864237 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 05:29:27.890264 kernel: fuse: init (API version 7.41) Jun 21 05:29:27.884546 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jun 21 05:29:27.890777 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 21 05:29:27.903471 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 21 05:29:27.903858 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 21 05:29:27.908847 systemd-journald[1178]: Time spent on flushing to /var/log/journal/8f82529953ea4fcd80b8dd3b41eebf9f is 165.899ms for 1000 entries. Jun 21 05:29:27.908847 systemd-journald[1178]: System Journal (/var/log/journal/8f82529953ea4fcd80b8dd3b41eebf9f) is 8M, max 195.6M, 187.6M free. Jun 21 05:29:28.115318 systemd-journald[1178]: Received client request to flush runtime journal. Jun 21 05:29:28.115407 kernel: loop0: detected capacity change from 0 to 229808 Jun 21 05:29:28.115444 kernel: ACPI: bus type drm_connector registered Jun 21 05:29:28.115494 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 05:29:28.115534 kernel: loop1: detected capacity change from 0 to 113872 Jun 21 05:29:27.949214 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 05:29:27.951728 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 05:29:27.973807 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 21 05:29:27.987500 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 05:29:27.993723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 05:29:28.059246 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:29:28.075051 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 05:29:28.080897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 05:29:28.082510 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 05:29:28.122454 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 05:29:28.138147 kernel: loop2: detected capacity change from 0 to 8 Jun 21 05:29:28.177188 kernel: loop3: detected capacity change from 0 to 146240 Jun 21 05:29:28.210731 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jun 21 05:29:28.210761 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jun 21 05:29:28.238462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:29:28.251659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:29:28.270689 kernel: loop4: detected capacity change from 0 to 229808 Jun 21 05:29:28.296372 kernel: loop5: detected capacity change from 0 to 113872 Jun 21 05:29:28.311201 kernel: loop6: detected capacity change from 0 to 8 Jun 21 05:29:28.318363 kernel: loop7: detected capacity change from 0 to 146240 Jun 21 05:29:28.371807 (sd-merge)[1256]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jun 21 05:29:28.373614 (sd-merge)[1256]: Merged extensions into '/usr'. Jun 21 05:29:28.398808 systemd[1]: Reload requested from client PID 1228 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 05:29:28.400855 systemd[1]: Reloading... Jun 21 05:29:28.573209 zram_generator::config[1282]: No configuration found. Jun 21 05:29:28.791803 ldconfig[1223]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jun 21 05:29:28.828175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:29:28.970923 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 05:29:28.972134 systemd[1]: Reloading finished in 570 ms. Jun 21 05:29:28.987043 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 05:29:28.989585 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 05:29:28.999023 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 21 05:29:29.013154 systemd[1]: Starting ensure-sysext.service... Jun 21 05:29:29.017151 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 05:29:29.036953 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 05:29:29.079890 systemd[1]: Reload requested from client PID 1326 ('systemctl') (unit ensure-sysext.service)... Jun 21 05:29:29.079929 systemd[1]: Reloading... Jun 21 05:29:29.098898 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 05:29:29.098985 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 05:29:29.099489 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 05:29:29.099888 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 05:29:29.103332 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 05:29:29.103808 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Jun 21 05:29:29.103896 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Jun 21 05:29:29.115877 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 05:29:29.115898 systemd-tmpfiles[1327]: Skipping /boot Jun 21 05:29:29.150485 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 05:29:29.150503 systemd-tmpfiles[1327]: Skipping /boot Jun 21 05:29:29.212223 zram_generator::config[1355]: No configuration found. Jun 21 05:29:29.374491 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:29:29.495396 systemd[1]: Reloading finished in 414 ms. Jun 21 05:29:29.511976 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 05:29:29.525193 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 05:29:29.536399 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 05:29:29.541520 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 05:29:29.544112 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 05:29:29.550160 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 05:29:29.556907 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 21 05:29:29.561638 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 05:29:29.569333 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:29:29.569697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:29:29.573555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 05:29:29.588189 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 05:29:29.609725 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 05:29:29.611106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:29:29.611354 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:29:29.611505 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:29:29.617486 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:29:29.617797 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:29:29.618070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:29:29.619389 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:29:29.627058 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 05:29:29.628270 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:29:29.633749 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:29:29.635178 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:29:29.641039 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 05:29:29.642317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:29:29.642543 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:29:29.642754 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:29:29.653218 systemd[1]: Finished ensure-sysext.service. Jun 21 05:29:29.671397 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 21 05:29:29.672816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jun 21 05:29:29.673201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 05:29:29.679871 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 05:29:29.685306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 05:29:29.686981 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 05:29:29.688382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 05:29:29.699506 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 05:29:29.703279 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 05:29:29.711591 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 05:29:29.712701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 05:29:29.714607 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 05:29:29.722394 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 05:29:29.724321 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 05:29:29.766254 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 05:29:29.768706 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 05:29:29.771540 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 05:29:29.774521 systemd-udevd[1404]: Using default interface naming scheme 'v255'. Jun 21 05:29:29.776165 augenrules[1440]: No rules Jun 21 05:29:29.778440 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 05:29:29.778734 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 05:29:29.803349 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 05:29:29.810927 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:29:29.817319 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 05:29:29.975115 systemd-resolved[1403]: Positive Trust Anchors: Jun 21 05:29:29.975151 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 05:29:29.975193 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 05:29:29.982687 systemd-resolved[1403]: Using system hostname 'ci-4372.0.0-0-a0fa6d352b'. Jun 21 05:29:29.985066 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 05:29:29.992401 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jun 21 05:29:30.089331 systemd-networkd[1456]: lo: Link UP Jun 21 05:29:30.089786 systemd-networkd[1456]: lo: Gained carrier Jun 21 05:29:30.091359 systemd-networkd[1456]: Enumeration completed Jun 21 05:29:30.091673 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 05:29:30.093558 systemd[1]: Reached target network.target - Network. Jun 21 05:29:30.096808 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 05:29:30.103468 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 05:29:30.149296 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 05:29:30.169738 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Jun 21 05:29:30.176620 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jun 21 05:29:30.179187 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:29:30.179713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:29:30.185498 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 05:29:30.193968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 05:29:30.210438 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 05:29:30.211028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:29:30.211077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:29:30.211129 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 05:29:30.211150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:29:30.211502 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 21 05:29:30.212231 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 05:29:30.233106 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 05:29:30.234218 systemd-networkd[1456]: eth1: Configuring with /run/systemd/network/10-a6:57:d8:54:e8:e9.network. Jun 21 05:29:30.235418 systemd-networkd[1456]: eth1: Link UP Jun 21 05:29:30.235857 systemd-networkd[1456]: eth1: Gained carrier Jun 21 05:29:30.238156 kernel: ISO 9660 Extensions: RRIP_1991A Jun 21 05:29:30.241112 systemd-timesyncd[1420]: Network configuration changed, trying to establish connection. Jun 21 05:29:30.250336 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jun 21 05:29:30.259685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 05:29:30.260253 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 05:29:30.265216 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
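[Editor's note] systemd-networkd picks up eth1 here from a generated per-interface unit, /run/systemd/network/10-a6:57:d8:54:e8:e9.network. The log does not show the file's contents; the snippet below is purely illustrative of what such a MAC-matched unit commonly looks like (the DHCP setting is an assumption, as the platform may instead inject a static address):

```ini
# Illustrative only -- the actual generated file is not shown in the log.
[Match]
MACAddress=a6:57:d8:54:e8:e9

[Network]
DHCP=ipv4
```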
Jun 21 05:29:30.265518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 05:29:30.266916 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 05:29:30.268737 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 05:29:30.272677 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 05:29:30.273430 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 05:29:30.274156 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 05:29:30.275383 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 05:29:30.276654 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 05:29:30.277400 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 05:29:30.278058 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 05:29:30.278657 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 05:29:30.280266 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 05:29:30.280325 systemd[1]: Reached target paths.target - Path Units. Jun 21 05:29:30.284829 systemd[1]: Reached target timers.target - Timer Units. Jun 21 05:29:30.286998 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 05:29:30.290931 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 05:29:30.300275 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 05:29:30.301264 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 05:29:30.301834 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 05:29:30.313653 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 05:29:30.315842 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 05:29:30.316970 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 05:29:30.318051 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 05:29:30.320967 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 05:29:30.321664 systemd[1]: Reached target basic.target - Basic System. Jun 21 05:29:30.322210 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 05:29:30.322254 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 05:29:30.323879 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 05:29:30.330491 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 21 05:29:30.334476 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 05:29:30.338560 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 05:29:30.346525 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 05:29:30.354363 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jun 21 05:29:30.354992 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 05:29:30.367758 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 21 05:29:30.376510 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 05:29:30.380763 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 05:29:30.388551 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 05:29:30.399594 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 05:29:30.407892 jq[1505]: false Jun 21 05:29:30.410057 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 05:29:30.413732 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 05:29:30.414570 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 05:29:30.421305 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 05:29:30.426624 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 05:29:30.436738 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 05:29:30.438695 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 05:29:30.439020 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 05:29:30.480234 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing passwd entry cache Jun 21 05:29:30.479656 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 05:29:30.472914 oslogin_cache_refresh[1507]: Refreshing passwd entry cache Jun 21 05:29:30.480051 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 05:29:30.484199 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting users, quitting Jun 21 05:29:30.484199 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 05:29:30.484199 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing group entry cache Jun 21 05:29:30.482598 oslogin_cache_refresh[1507]: Failure getting users, quitting Jun 21 05:29:30.482625 oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 05:29:30.484470 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting groups, quitting Jun 21 05:29:30.484470 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 05:29:30.482688 oslogin_cache_refresh[1507]: Refreshing group entry cache Jun 21 05:29:30.484425 oslogin_cache_refresh[1507]: Failure getting groups, quitting Jun 21 05:29:30.484444 oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 05:29:30.488382 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 05:29:30.488727 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jun 21 05:29:30.522076 update_engine[1516]: I20250621 05:29:30.520596 1516 main.cc:92] Flatcar Update Engine starting Jun 21 05:29:30.524531 extend-filesystems[1506]: Found /dev/vda6 Jun 21 05:29:30.541158 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 05:29:30.540376 dbus-daemon[1503]: [system] SELinux support is enabled Jun 21 05:29:30.543864 jq[1517]: true Jun 21 05:29:30.554151 update_engine[1516]: I20250621 05:29:30.545984 1516 update_check_scheduler.cc:74] Next update check in 8m51s Jun 21 05:29:30.554300 extend-filesystems[1506]: Found /dev/vda9 Jun 21 05:29:30.566862 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 05:29:30.572303 jq[1537]: true Jun 21 05:29:30.566981 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 05:29:30.567628 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 05:29:30.567812 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jun 21 05:29:30.567866 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 05:29:30.569669 systemd[1]: Started update-engine.service - Update Engine. Jun 21 05:29:30.583215 extend-filesystems[1506]: Checking size of /dev/vda9 Jun 21 05:29:30.608055 bash[1564]: Updated "/home/core/.ssh/authorized_keys" Jun 21 05:29:30.627724 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 05:29:30.650694 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 05:29:30.652582 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 05:29:30.652887 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 05:29:30.653946 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 05:29:30.663602 tar[1519]: linux-amd64/LICENSE Jun 21 05:29:30.663602 tar[1519]: linux-amd64/helm Jun 21 05:29:30.664020 extend-filesystems[1506]: Resized partition /dev/vda9 Jun 21 05:29:30.672073 coreos-metadata[1502]: Jun 21 05:29:30.670 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:29:30.672073 coreos-metadata[1502]: Jun 21 05:29:30.672 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) Jun 21 05:29:30.674587 extend-filesystems[1572]: resize2fs 1.47.2 (1-Jan-2025) Jun 21 05:29:30.679541 systemd[1]: Starting sshkeys.service... Jun 21 05:29:30.696799 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jun 21 05:29:30.759040 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 21 05:29:30.767690 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jun 21 05:29:30.838988 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jun 21 05:29:30.876560 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 21 05:29:30.876560 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 8 Jun 21 05:29:30.876560 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jun 21 05:29:30.875513 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 05:29:30.892684 extend-filesystems[1506]: Resized filesystem in /dev/vda9 Jun 21 05:29:30.875883 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 05:29:30.891282 systemd-networkd[1456]: eth0: Configuring with /run/systemd/network/10-62:6c:19:d8:fa:d2.network. Jun 21 05:29:30.896803 systemd-networkd[1456]: eth0: Link UP Jun 21 05:29:30.899484 systemd-networkd[1456]: eth0: Gained carrier Jun 21 05:29:30.949556 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 05:29:31.017153 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 21 05:29:31.026225 kernel: ACPI: button: Power Button [PWRF] Jun 21 05:29:31.028621 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 05:29:31.033136 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 05:29:31.042059 coreos-metadata[1575]: Jun 21 05:29:31.041 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 21 05:29:31.059838 coreos-metadata[1575]: Jun 21 05:29:31.057 INFO Fetch successful Jun 21 05:29:31.078749 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 05:29:31.101246 unknown[1575]: wrote ssh authorized keys file for user: core Jun 21 05:29:31.143650 locksmithd[1547]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 05:29:31.149289 systemd-logind[1514]: New seat seat0. Jun 21 05:29:31.152990 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 05:29:31.157148 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 21 05:29:31.159147 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 21 05:29:31.167084 update-ssh-keys[1591]: Updated "/home/core/.ssh/authorized_keys" Jun 21 05:29:31.172303 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 21 05:29:31.184860 systemd[1]: Finished sshkeys.service. Jun 21 05:29:31.290766 sshd_keygen[1533]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 05:29:31.372225 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 05:29:31.380352 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 05:29:31.420888 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 05:29:31.422234 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 05:29:31.427150 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
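Note: the extend-filesystems / resize2fs entries above report /dev/vda9 growing online from 553472 to 15121403 ext4 blocks at a 4k block size. A quick, hypothetical sanity check of what that means in GiB, using only the numbers that appear in the log (not part of the boot output itself):

# Arithmetic check on the resize recorded above; block counts and 4k block size are taken from the log.
OLD_BLOCKS, NEW_BLOCKS, BLOCK_SIZE = 553_472, 15_121_403, 4096

def gib(blocks: int) -> float:
    """Convert an ext4 block count at the given block size to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")
# Expected output: before: 2.11 GiB, after: 57.68 GiB (the root filesystem expanded to roughly the full disk)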
Jun 21 05:29:31.495600 containerd[1542]: time="2025-06-21T05:29:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 05:29:31.496585 containerd[1542]: time="2025-06-21T05:29:31.496526020Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 05:29:31.513217 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 05:29:31.517568 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 05:29:31.524396 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 21 05:29:31.525235 systemd[1]: Reached target getty.target - Login Prompts. Jun 21 05:29:31.554166 containerd[1542]: time="2025-06-21T05:29:31.553989119Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.53µs" Jun 21 05:29:31.554166 containerd[1542]: time="2025-06-21T05:29:31.554049322Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 05:29:31.554166 containerd[1542]: time="2025-06-21T05:29:31.554076418Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 05:29:31.555143 containerd[1542]: time="2025-06-21T05:29:31.554399477Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 05:29:31.555143 containerd[1542]: time="2025-06-21T05:29:31.554438605Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 05:29:31.555143 containerd[1542]: time="2025-06-21T05:29:31.554474968Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 05:29:31.555143 containerd[1542]: time="2025-06-21T05:29:31.554544560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 05:29:31.555143 containerd[1542]: time="2025-06-21T05:29:31.554559878Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 05:29:31.555143 containerd[1542]: time="2025-06-21T05:29:31.554875968Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 05:29:31.555143 containerd[1542]: time="2025-06-21T05:29:31.554894973Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 05:29:31.555143 containerd[1542]: time="2025-06-21T05:29:31.554909651Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 05:29:31.555143 containerd[1542]: time="2025-06-21T05:29:31.554922093Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 05:29:31.555143 containerd[1542]: time="2025-06-21T05:29:31.555010745Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 05:29:31.558799 containerd[1542]: time="2025-06-21T05:29:31.558732205Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 05:29:31.558957 containerd[1542]: time="2025-06-21T05:29:31.558833725Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 05:29:31.558957 containerd[1542]: time="2025-06-21T05:29:31.558857025Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 05:29:31.560443 containerd[1542]: time="2025-06-21T05:29:31.560363745Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 05:29:31.560650 systemd-networkd[1456]: eth1: Gained IPv6LL Jun 21 05:29:31.561001 containerd[1542]: time="2025-06-21T05:29:31.560863858Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 05:29:31.561054 containerd[1542]: time="2025-06-21T05:29:31.561019217Z" level=info msg="metadata content store policy set" policy=shared Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.566229002Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569080425Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569202088Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569219891Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569232609Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569243313Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569257273Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569268880Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569280877Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569291036Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569300750Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569313461Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 05:29:31.571178 containerd[1542]: time="2025-06-21T05:29:31.569465310Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 05:29:31.571178 containerd[1542]: 
time="2025-06-21T05:29:31.569492352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 05:29:31.567710 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569508662Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569520086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569536130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569546389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569556184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569567759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569579080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569589241Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569601058Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569708640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569732042Z" level=info msg="Start snapshots syncer" Jun 21 05:29:31.571837 containerd[1542]: time="2025-06-21T05:29:31.569785835Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 05:29:31.569043 systemd[1]: Reached target network-online.target - Network is Online. 
Jun 21 05:29:31.575498 containerd[1542]: time="2025-06-21T05:29:31.570074185Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 05:29:31.575498 containerd[1542]: time="2025-06-21T05:29:31.570147110Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.570241130Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571509822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571550301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571562300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571575674Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571590352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571613107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571628160Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571665535Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571680303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571692072Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571745390Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571762316Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 05:29:31.575688 containerd[1542]: time="2025-06-21T05:29:31.571771380Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 05:29:31.576425 containerd[1542]: time="2025-06-21T05:29:31.571783047Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 05:29:31.576425 containerd[1542]: time="2025-06-21T05:29:31.571833343Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 05:29:31.576425 containerd[1542]: time="2025-06-21T05:29:31.571844206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 05:29:31.576425 containerd[1542]: time="2025-06-21T05:29:31.571854527Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 05:29:31.576425 containerd[1542]: time="2025-06-21T05:29:31.571873141Z" level=info msg="runtime interface created" Jun 21 05:29:31.576425 containerd[1542]: time="2025-06-21T05:29:31.571878193Z" level=info msg="created NRI interface" Jun 21 05:29:31.576425 containerd[1542]: time="2025-06-21T05:29:31.571886608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 05:29:31.576425 containerd[1542]: time="2025-06-21T05:29:31.571900920Z" level=info msg="Connect containerd service" Jun 21 05:29:31.576425 containerd[1542]: time="2025-06-21T05:29:31.571927471Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 05:29:31.576425 containerd[1542]: time="2025-06-21T05:29:31.574732668Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 05:29:31.577040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:29:31.580631 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 05:29:31.682775 coreos-metadata[1502]: Jun 21 05:29:31.682 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Jun 21 05:29:31.714185 coreos-metadata[1502]: Jun 21 05:29:31.709 INFO Fetch successful Jun 21 05:29:31.737228 kernel: EDAC MC: Ver: 3.0.0 Jun 21 05:29:31.746426 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 21 05:29:31.839973 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
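Note: the coreos-metadata entries above show the metadata fetch failing on attempt #1 (before eth0 is configured) and succeeding on a later attempt. The following is a minimal, hypothetical sketch of polling the same link-local endpoint with retries; it is not the Flatcar agent's implementation. Only the URL is taken from the log, and the "hostname" field read at the end is assumed purely for illustration.

# Hypothetical retry loop against the DigitalOcean metadata endpoint seen in the log.
import json
import time
import urllib.error
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # URL as it appears in the log

def fetch_metadata(max_attempts: int = 5, delay_s: float = 2.0) -> dict:
    """Poll the link-local metadata endpoint until it answers or attempts run out."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
                return json.load(resp)
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc
            print(f"Fetching {METADATA_URL}: Attempt #{attempt} failed: {exc}")
            time.sleep(delay_s)
    raise RuntimeError(f"metadata endpoint never became reachable: {last_error}")

if __name__ == "__main__":
    meta = fetch_metadata()
    print(meta.get("hostname", "<no hostname field>"))  # field assumed for illustration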
Jun 21 05:29:31.841240 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 21 05:29:31.880049 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 21 05:29:31.880240 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 21 05:29:31.880659 kernel: Console: switching to colour dummy device 80x25 Jun 21 05:29:31.881333 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 21 05:29:31.881426 kernel: [drm] features: -context_init Jun 21 05:29:31.883205 kernel: [drm] number of scanouts: 1 Jun 21 05:29:31.883273 kernel: [drm] number of cap sets: 0 Jun 21 05:29:31.886097 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jun 21 05:29:31.915455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:29:31.942964 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 05:29:31.981616 systemd-logind[1514]: Watching system buttons on /dev/input/event2 (Power Button) Jun 21 05:29:32.071630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:29:32.071985 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:29:32.079500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:29:32.082986 containerd[1542]: time="2025-06-21T05:29:32.082925757Z" level=info msg="Start subscribing containerd event" Jun 21 05:29:32.084346 containerd[1542]: time="2025-06-21T05:29:32.084260213Z" level=info msg="Start recovering state" Jun 21 05:29:32.085790 containerd[1542]: time="2025-06-21T05:29:32.085696650Z" level=info msg="Start event monitor" Jun 21 05:29:32.087025 containerd[1542]: time="2025-06-21T05:29:32.086986787Z" level=info msg="Start cni network conf syncer for default" Jun 21 05:29:32.088340 containerd[1542]: time="2025-06-21T05:29:32.083287304Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 21 05:29:32.088340 containerd[1542]: time="2025-06-21T05:29:32.088239962Z" level=info msg="Start streaming server" Jun 21 05:29:32.088340 containerd[1542]: time="2025-06-21T05:29:32.088320921Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 05:29:32.088340 containerd[1542]: time="2025-06-21T05:29:32.088332109Z" level=info msg="runtime interface starting up..." Jun 21 05:29:32.088340 containerd[1542]: time="2025-06-21T05:29:32.088341880Z" level=info msg="starting plugins..." Jun 21 05:29:32.090035 containerd[1542]: time="2025-06-21T05:29:32.088904918Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 05:29:32.090035 containerd[1542]: time="2025-06-21T05:29:32.089676471Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 21 05:29:32.090035 containerd[1542]: time="2025-06-21T05:29:32.089781086Z" level=info msg="containerd successfully booted in 0.595284s" Jun 21 05:29:32.089928 systemd[1]: Started containerd.service - containerd container runtime. Jun 21 05:29:32.264581 systemd-networkd[1456]: eth0: Gained IPv6LL Jun 21 05:29:32.265262 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:29:32.430176 tar[1519]: linux-amd64/README.md Jun 21 05:29:32.457916 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 21 05:29:33.084938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 21 05:29:33.087457 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 21 05:29:33.088494 systemd[1]: Startup finished in 3.315s (kernel) + 6.988s (initrd) + 6.520s (userspace) = 16.824s. Jun 21 05:29:33.092094 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 05:29:33.554281 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 05:29:33.557990 systemd[1]: Started sshd@0-164.92.73.218:22-139.178.68.195:54152.service - OpenSSH per-connection server daemon (139.178.68.195:54152). Jun 21 05:29:33.650208 sshd[1686]: Accepted publickey for core from 139.178.68.195 port 54152 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:33.653060 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:33.667616 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 05:29:33.671138 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 21 05:29:33.680429 systemd-logind[1514]: New session 1 of user core. Jun 21 05:29:33.711890 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 21 05:29:33.717439 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 21 05:29:33.729083 (systemd)[1691]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 05:29:33.734160 systemd-logind[1514]: New session c1 of user core. Jun 21 05:29:33.758809 kubelet[1676]: E0621 05:29:33.758760 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 05:29:33.762891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 05:29:33.763110 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 05:29:33.763981 systemd[1]: kubelet.service: Consumed 1.259s CPU time, 266.8M memory peak. Jun 21 05:29:33.888852 systemd[1691]: Queued start job for default target default.target. Jun 21 05:29:33.897551 systemd[1691]: Created slice app.slice - User Application Slice. Jun 21 05:29:33.897591 systemd[1691]: Reached target paths.target - Paths. Jun 21 05:29:33.897640 systemd[1691]: Reached target timers.target - Timers. Jun 21 05:29:33.899398 systemd[1691]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 05:29:33.914229 systemd[1691]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 05:29:33.914372 systemd[1691]: Reached target sockets.target - Sockets. Jun 21 05:29:33.914468 systemd[1691]: Reached target basic.target - Basic System. Jun 21 05:29:33.914528 systemd[1691]: Reached target default.target - Main User Target. Jun 21 05:29:33.914575 systemd[1691]: Startup finished in 170ms. Jun 21 05:29:33.914928 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 05:29:33.927701 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 05:29:33.998515 systemd[1]: Started sshd@1-164.92.73.218:22-139.178.68.195:54160.service - OpenSSH per-connection server daemon (139.178.68.195:54160). 
Jun 21 05:29:34.061666 sshd[1703]: Accepted publickey for core from 139.178.68.195 port 54160 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:34.063576 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:34.070208 systemd-logind[1514]: New session 2 of user core. Jun 21 05:29:34.077453 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 21 05:29:34.141002 sshd[1705]: Connection closed by 139.178.68.195 port 54160 Jun 21 05:29:34.142295 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:34.154854 systemd[1]: sshd@1-164.92.73.218:22-139.178.68.195:54160.service: Deactivated successfully. Jun 21 05:29:34.158081 systemd[1]: session-2.scope: Deactivated successfully. Jun 21 05:29:34.159826 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit. Jun 21 05:29:34.164838 systemd[1]: Started sshd@2-164.92.73.218:22-139.178.68.195:54164.service - OpenSSH per-connection server daemon (139.178.68.195:54164). Jun 21 05:29:34.166240 systemd-logind[1514]: Removed session 2. Jun 21 05:29:34.227747 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 54164 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:34.229479 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:34.234660 systemd-logind[1514]: New session 3 of user core. Jun 21 05:29:34.242411 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 21 05:29:34.299560 sshd[1713]: Connection closed by 139.178.68.195 port 54164 Jun 21 05:29:34.300597 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:34.313967 systemd[1]: sshd@2-164.92.73.218:22-139.178.68.195:54164.service: Deactivated successfully. Jun 21 05:29:34.316464 systemd[1]: session-3.scope: Deactivated successfully. Jun 21 05:29:34.317718 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit. Jun 21 05:29:34.321498 systemd[1]: Started sshd@3-164.92.73.218:22-139.178.68.195:54176.service - OpenSSH per-connection server daemon (139.178.68.195:54176). Jun 21 05:29:34.323388 systemd-logind[1514]: Removed session 3. Jun 21 05:29:34.385313 sshd[1719]: Accepted publickey for core from 139.178.68.195 port 54176 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:34.386935 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:34.394235 systemd-logind[1514]: New session 4 of user core. Jun 21 05:29:34.407455 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 05:29:34.472031 sshd[1721]: Connection closed by 139.178.68.195 port 54176 Jun 21 05:29:34.472570 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:34.484803 systemd[1]: sshd@3-164.92.73.218:22-139.178.68.195:54176.service: Deactivated successfully. Jun 21 05:29:34.486912 systemd[1]: session-4.scope: Deactivated successfully. Jun 21 05:29:34.487853 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit. Jun 21 05:29:34.491518 systemd[1]: Started sshd@4-164.92.73.218:22-139.178.68.195:54184.service - OpenSSH per-connection server daemon (139.178.68.195:54184). Jun 21 05:29:34.492862 systemd-logind[1514]: Removed session 4. 
Jun 21 05:29:34.551917 sshd[1727]: Accepted publickey for core from 139.178.68.195 port 54184 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:34.554264 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:34.561215 systemd-logind[1514]: New session 5 of user core. Jun 21 05:29:34.570452 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 21 05:29:34.645021 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 05:29:34.646025 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:29:34.664619 sudo[1730]: pam_unix(sudo:session): session closed for user root Jun 21 05:29:34.669553 sshd[1729]: Connection closed by 139.178.68.195 port 54184 Jun 21 05:29:34.669327 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:34.685867 systemd[1]: sshd@4-164.92.73.218:22-139.178.68.195:54184.service: Deactivated successfully. Jun 21 05:29:34.688349 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 05:29:34.690469 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit. Jun 21 05:29:34.694040 systemd-logind[1514]: Removed session 5. Jun 21 05:29:34.697501 systemd[1]: Started sshd@5-164.92.73.218:22-139.178.68.195:54190.service - OpenSSH per-connection server daemon (139.178.68.195:54190). Jun 21 05:29:34.770549 sshd[1736]: Accepted publickey for core from 139.178.68.195 port 54190 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:34.772514 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:34.778189 systemd-logind[1514]: New session 6 of user core. Jun 21 05:29:34.786488 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 21 05:29:34.848591 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 05:29:34.849007 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:29:34.856584 sudo[1740]: pam_unix(sudo:session): session closed for user root Jun 21 05:29:34.863804 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 05:29:34.864676 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:29:34.877092 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 05:29:34.940666 augenrules[1762]: No rules Jun 21 05:29:34.942712 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 05:29:34.943081 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 05:29:34.945070 sudo[1739]: pam_unix(sudo:session): session closed for user root Jun 21 05:29:34.948678 sshd[1738]: Connection closed by 139.178.68.195 port 54190 Jun 21 05:29:34.949592 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Jun 21 05:29:34.963555 systemd[1]: sshd@5-164.92.73.218:22-139.178.68.195:54190.service: Deactivated successfully. Jun 21 05:29:34.965811 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 05:29:34.967856 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit. Jun 21 05:29:34.971506 systemd[1]: Started sshd@6-164.92.73.218:22-139.178.68.195:54204.service - OpenSSH per-connection server daemon (139.178.68.195:54204). Jun 21 05:29:34.974286 systemd-logind[1514]: Removed session 6. 
Jun 21 05:29:35.044813 sshd[1771]: Accepted publickey for core from 139.178.68.195 port 54204 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:29:35.046765 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:29:35.052946 systemd-logind[1514]: New session 7 of user core. Jun 21 05:29:35.068505 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 21 05:29:35.131667 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 05:29:35.131997 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:29:35.637763 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 21 05:29:35.654963 (dockerd)[1794]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 05:29:36.023587 dockerd[1794]: time="2025-06-21T05:29:36.022684680Z" level=info msg="Starting up" Jun 21 05:29:36.025754 dockerd[1794]: time="2025-06-21T05:29:36.025712753Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 05:29:36.140096 dockerd[1794]: time="2025-06-21T05:29:36.139856628Z" level=info msg="Loading containers: start." Jun 21 05:29:36.157153 kernel: Initializing XFRM netlink socket Jun 21 05:29:36.471573 systemd-networkd[1456]: docker0: Link UP Jun 21 05:29:36.475306 dockerd[1794]: time="2025-06-21T05:29:36.475242604Z" level=info msg="Loading containers: done." Jun 21 05:29:36.494381 dockerd[1794]: time="2025-06-21T05:29:36.494324637Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 05:29:36.494588 dockerd[1794]: time="2025-06-21T05:29:36.494421463Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 05:29:36.494588 dockerd[1794]: time="2025-06-21T05:29:36.494527169Z" level=info msg="Initializing buildkit" Jun 21 05:29:36.495255 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1718081840-merged.mount: Deactivated successfully. Jun 21 05:29:36.523311 dockerd[1794]: time="2025-06-21T05:29:36.523253616Z" level=info msg="Completed buildkit initialization" Jun 21 05:29:36.534736 dockerd[1794]: time="2025-06-21T05:29:36.534657236Z" level=info msg="Daemon has completed initialization" Jun 21 05:29:36.535569 dockerd[1794]: time="2025-06-21T05:29:36.534765163Z" level=info msg="API listen on /run/docker.sock" Jun 21 05:29:36.535144 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 05:29:36.929353 systemd-timesyncd[1420]: Contacted time server 104.131.155.175:123 (1.flatcar.pool.ntp.org). Jun 21 05:29:36.930073 systemd-timesyncd[1420]: Initial clock synchronization to Sat 2025-06-21 05:29:37.035272 UTC. Jun 21 05:29:37.268944 containerd[1542]: time="2025-06-21T05:29:37.268810007Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jun 21 05:29:37.843600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939302943.mount: Deactivated successfully. 
Jun 21 05:29:39.117160 containerd[1542]: time="2025-06-21T05:29:39.116733557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:39.118456 containerd[1542]: time="2025-06-21T05:29:39.118399572Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jun 21 05:29:39.119170 containerd[1542]: time="2025-06-21T05:29:39.119113274Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:39.127034 containerd[1542]: time="2025-06-21T05:29:39.125554429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:39.127034 containerd[1542]: time="2025-06-21T05:29:39.126823649Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.857952285s" Jun 21 05:29:39.127034 containerd[1542]: time="2025-06-21T05:29:39.126873271Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jun 21 05:29:39.127846 containerd[1542]: time="2025-06-21T05:29:39.127816533Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jun 21 05:29:40.499721 containerd[1542]: time="2025-06-21T05:29:40.499658154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:40.501179 containerd[1542]: time="2025-06-21T05:29:40.501111605Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jun 21 05:29:40.501999 containerd[1542]: time="2025-06-21T05:29:40.501959766Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:40.504938 containerd[1542]: time="2025-06-21T05:29:40.504882717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:40.506340 containerd[1542]: time="2025-06-21T05:29:40.506291842Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.378324782s" Jun 21 05:29:40.506340 containerd[1542]: time="2025-06-21T05:29:40.506340924Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jun 21 05:29:40.506959 
containerd[1542]: time="2025-06-21T05:29:40.506923999Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jun 21 05:29:41.792024 containerd[1542]: time="2025-06-21T05:29:41.791935891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:41.793101 containerd[1542]: time="2025-06-21T05:29:41.793059220Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jun 21 05:29:41.794094 containerd[1542]: time="2025-06-21T05:29:41.793680756Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:41.796309 containerd[1542]: time="2025-06-21T05:29:41.796269069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:41.797912 containerd[1542]: time="2025-06-21T05:29:41.797465127Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.290505582s" Jun 21 05:29:41.797912 containerd[1542]: time="2025-06-21T05:29:41.797776885Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jun 21 05:29:41.800239 containerd[1542]: time="2025-06-21T05:29:41.800202881Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jun 21 05:29:42.867764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183573950.mount: Deactivated successfully. 
Jun 21 05:29:43.446564 containerd[1542]: time="2025-06-21T05:29:43.446511625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:43.447504 containerd[1542]: time="2025-06-21T05:29:43.447377302Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jun 21 05:29:43.448060 containerd[1542]: time="2025-06-21T05:29:43.448028725Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:43.450038 containerd[1542]: time="2025-06-21T05:29:43.449978589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:43.450822 containerd[1542]: time="2025-06-21T05:29:43.450699783Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.650458642s" Jun 21 05:29:43.450822 containerd[1542]: time="2025-06-21T05:29:43.450732597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jun 21 05:29:43.451378 containerd[1542]: time="2025-06-21T05:29:43.451349914Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jun 21 05:29:43.452677 systemd-resolved[1403]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jun 21 05:29:43.930767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 21 05:29:43.934794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:29:43.954869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252674836.mount: Deactivated successfully. Jun 21 05:29:44.128243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:29:44.137583 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 05:29:44.209254 kubelet[2096]: E0621 05:29:44.208536 2096 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 05:29:44.217978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 05:29:44.218273 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 05:29:44.221295 systemd[1]: kubelet.service: Consumed 197ms CPU time, 111M memory peak. 
Jun 21 05:29:44.879574 containerd[1542]: time="2025-06-21T05:29:44.879512436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:44.880889 containerd[1542]: time="2025-06-21T05:29:44.880804286Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jun 21 05:29:44.881663 containerd[1542]: time="2025-06-21T05:29:44.881589048Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:44.884872 containerd[1542]: time="2025-06-21T05:29:44.884792696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:44.886300 containerd[1542]: time="2025-06-21T05:29:44.886153099Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.434771185s" Jun 21 05:29:44.886300 containerd[1542]: time="2025-06-21T05:29:44.886194253Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jun 21 05:29:44.887101 containerd[1542]: time="2025-06-21T05:29:44.886972961Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 05:29:45.399731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063858588.mount: Deactivated successfully. 
Jun 21 05:29:45.401888 containerd[1542]: time="2025-06-21T05:29:45.401818368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:29:45.403568 containerd[1542]: time="2025-06-21T05:29:45.403509105Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 21 05:29:45.404260 containerd[1542]: time="2025-06-21T05:29:45.404185248Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:29:45.407451 containerd[1542]: time="2025-06-21T05:29:45.406528401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:29:45.407451 containerd[1542]: time="2025-06-21T05:29:45.407263962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 520.246361ms" Jun 21 05:29:45.407451 containerd[1542]: time="2025-06-21T05:29:45.407298902Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 05:29:45.408080 containerd[1542]: time="2025-06-21T05:29:45.407991396Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jun 21 05:29:45.880516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2648159496.mount: Deactivated successfully. Jun 21 05:29:46.536334 systemd-resolved[1403]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jun 21 05:29:47.664550 containerd[1542]: time="2025-06-21T05:29:47.664474650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:47.666331 containerd[1542]: time="2025-06-21T05:29:47.665847944Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jun 21 05:29:47.667282 containerd[1542]: time="2025-06-21T05:29:47.667239608Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:47.670718 containerd[1542]: time="2025-06-21T05:29:47.670658085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:29:47.672672 containerd[1542]: time="2025-06-21T05:29:47.672611130Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.264363223s" Jun 21 05:29:47.672672 containerd[1542]: time="2025-06-21T05:29:47.672665713Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jun 21 05:29:52.530347 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:29:52.530701 systemd[1]: kubelet.service: Consumed 197ms CPU time, 111M memory peak. Jun 21 05:29:52.533671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:29:52.574530 systemd[1]: Reload requested from client PID 2231 ('systemctl') (unit session-7.scope)... Jun 21 05:29:52.574550 systemd[1]: Reloading... Jun 21 05:29:52.765183 zram_generator::config[2283]: No configuration found. Jun 21 05:29:52.898883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:29:53.063229 systemd[1]: Reloading finished in 488 ms. Jun 21 05:29:53.146774 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 05:29:53.147244 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 05:29:53.147800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:29:53.147876 systemd[1]: kubelet.service: Consumed 144ms CPU time, 98.3M memory peak. Jun 21 05:29:53.152337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:29:53.342094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:29:53.352776 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 05:29:53.428842 kubelet[2329]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:29:53.428842 kubelet[2329]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jun 21 05:29:53.428842 kubelet[2329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:29:53.428842 kubelet[2329]: I0621 05:29:53.428213 2329 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 05:29:53.888453 kubelet[2329]: I0621 05:29:53.886983 2329 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 21 05:29:53.888453 kubelet[2329]: I0621 05:29:53.887045 2329 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 05:29:53.888453 kubelet[2329]: I0621 05:29:53.887380 2329 server.go:956] "Client rotation is on, will bootstrap in background" Jun 21 05:29:53.926426 kubelet[2329]: E0621 05:29:53.926346 2329 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://164.92.73.218:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.92.73.218:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 21 05:29:53.928057 kubelet[2329]: I0621 05:29:53.928003 2329 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 05:29:53.950729 kubelet[2329]: I0621 05:29:53.950691 2329 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 05:29:53.958731 kubelet[2329]: I0621 05:29:53.958685 2329 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 05:29:53.963966 kubelet[2329]: I0621 05:29:53.963890 2329 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 05:29:53.967644 kubelet[2329]: I0621 05:29:53.964203 2329 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.0-0-a0fa6d352b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 05:29:53.967931 kubelet[2329]: I0621 05:29:53.967912 2329 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 05:29:53.968020 kubelet[2329]: I0621 05:29:53.968009 2329 container_manager_linux.go:303] "Creating device plugin manager" Jun 21 05:29:53.968253 kubelet[2329]: I0621 05:29:53.968235 2329 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:29:53.971771 kubelet[2329]: I0621 05:29:53.971724 2329 kubelet.go:480] "Attempting to sync node with API server" Jun 21 05:29:53.971978 kubelet[2329]: I0621 05:29:53.971958 2329 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 05:29:53.972142 kubelet[2329]: I0621 05:29:53.972113 2329 kubelet.go:386] "Adding apiserver pod source" Jun 21 05:29:53.974562 kubelet[2329]: I0621 05:29:53.974512 2329 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 05:29:53.979940 kubelet[2329]: E0621 05:29:53.979590 2329 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.92.73.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-0-a0fa6d352b&limit=500&resourceVersion=0\": dial tcp 164.92.73.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 21 05:29:53.983595 kubelet[2329]: I0621 05:29:53.983558 2329 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 05:29:53.984777 kubelet[2329]: I0621 05:29:53.984741 2329 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection 
featuregate is disabled" Jun 21 05:29:53.985801 kubelet[2329]: W0621 05:29:53.985774 2329 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 21 05:29:53.990038 kubelet[2329]: I0621 05:29:53.989954 2329 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 05:29:53.990339 kubelet[2329]: I0621 05:29:53.990322 2329 server.go:1289] "Started kubelet" Jun 21 05:29:53.990710 kubelet[2329]: E0621 05:29:53.990678 2329 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.92.73.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.73.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 21 05:29:53.992217 kubelet[2329]: I0621 05:29:53.992160 2329 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 05:29:53.993471 kubelet[2329]: I0621 05:29:53.993269 2329 server.go:317] "Adding debug handlers to kubelet server" Jun 21 05:29:53.999160 kubelet[2329]: I0621 05:29:53.997249 2329 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 05:29:53.999510 kubelet[2329]: I0621 05:29:53.999483 2329 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 05:29:54.000044 kubelet[2329]: I0621 05:29:54.000016 2329 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 05:29:54.003253 kubelet[2329]: E0621 05:29:53.999796 2329 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.73.218:6443/api/v1/namespaces/default/events\": dial tcp 164.92.73.218:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.0-0-a0fa6d352b.184af7bc75de2b52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.0-0-a0fa6d352b,UID:ci-4372.0.0-0-a0fa6d352b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.0-0-a0fa6d352b,},FirstTimestamp:2025-06-21 05:29:53.990265682 +0000 UTC m=+0.631480551,LastTimestamp:2025-06-21 05:29:53.990265682 +0000 UTC m=+0.631480551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.0-0-a0fa6d352b,}" Jun 21 05:29:54.003638 kubelet[2329]: I0621 05:29:54.003616 2329 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 05:29:54.010953 kubelet[2329]: E0621 05:29:54.009965 2329 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" Jun 21 05:29:54.010953 kubelet[2329]: I0621 05:29:54.010018 2329 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 05:29:54.012792 kubelet[2329]: I0621 05:29:54.012213 2329 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 05:29:54.012792 kubelet[2329]: I0621 05:29:54.012297 2329 reconciler.go:26] "Reconciler: start to sync state" Jun 21 05:29:54.012792 kubelet[2329]: E0621 05:29:54.012674 2329 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.92.73.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": 
dial tcp 164.92.73.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 21 05:29:54.014635 kubelet[2329]: E0621 05:29:54.014311 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.73.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-0-a0fa6d352b?timeout=10s\": dial tcp 164.92.73.218:6443: connect: connection refused" interval="200ms" Jun 21 05:29:54.022991 kubelet[2329]: I0621 05:29:54.022865 2329 factory.go:223] Registration of the containerd container factory successfully Jun 21 05:29:54.022991 kubelet[2329]: I0621 05:29:54.022896 2329 factory.go:223] Registration of the systemd container factory successfully Jun 21 05:29:54.022991 kubelet[2329]: I0621 05:29:54.023000 2329 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 05:29:54.023624 kubelet[2329]: E0621 05:29:54.023543 2329 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 05:29:54.056965 kubelet[2329]: I0621 05:29:54.056666 2329 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 21 05:29:54.059331 kubelet[2329]: I0621 05:29:54.058944 2329 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 21 05:29:54.059331 kubelet[2329]: I0621 05:29:54.058974 2329 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 21 05:29:54.059331 kubelet[2329]: I0621 05:29:54.059004 2329 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 21 05:29:54.059331 kubelet[2329]: I0621 05:29:54.059013 2329 kubelet.go:2436] "Starting kubelet main sync loop" Jun 21 05:29:54.059331 kubelet[2329]: E0621 05:29:54.059064 2329 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 05:29:54.061740 kubelet[2329]: I0621 05:29:54.061708 2329 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 05:29:54.061740 kubelet[2329]: I0621 05:29:54.061727 2329 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 05:29:54.061740 kubelet[2329]: I0621 05:29:54.061750 2329 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:29:54.064772 kubelet[2329]: I0621 05:29:54.064724 2329 policy_none.go:49] "None policy: Start" Jun 21 05:29:54.064772 kubelet[2329]: I0621 05:29:54.064765 2329 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 05:29:54.064955 kubelet[2329]: I0621 05:29:54.064788 2329 state_mem.go:35] "Initializing new in-memory state store" Jun 21 05:29:54.068328 kubelet[2329]: E0621 05:29:54.067925 2329 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.92.73.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.73.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 21 05:29:54.073782 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 05:29:54.092107 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 21 05:29:54.097364 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 21 05:29:54.109364 kubelet[2329]: E0621 05:29:54.109335 2329 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 21 05:29:54.109923 kubelet[2329]: I0621 05:29:54.109903 2329 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 05:29:54.111602 kubelet[2329]: E0621 05:29:54.110236 2329 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" Jun 21 05:29:54.111602 kubelet[2329]: I0621 05:29:54.110168 2329 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 05:29:54.112112 kubelet[2329]: I0621 05:29:54.112093 2329 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 05:29:54.113220 kubelet[2329]: E0621 05:29:54.113172 2329 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 21 05:29:54.113291 kubelet[2329]: E0621 05:29:54.113252 2329 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.0-0-a0fa6d352b\" not found" Jun 21 05:29:54.176263 systemd[1]: Created slice kubepods-burstable-pod6a8183a52acf1b10dd3dff7a659f93b6.slice - libcontainer container kubepods-burstable-pod6a8183a52acf1b10dd3dff7a659f93b6.slice. Jun 21 05:29:54.187896 kubelet[2329]: E0621 05:29:54.187762 2329 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.194110 systemd[1]: Created slice kubepods-burstable-pod03811de30ef6cb1a375a1329dc4cfb9e.slice - libcontainer container kubepods-burstable-pod03811de30ef6cb1a375a1329dc4cfb9e.slice. 
Jun 21 05:29:54.208914 kubelet[2329]: E0621 05:29:54.208662 2329 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.212293 kubelet[2329]: I0621 05:29:54.212262 2329 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.213294 kubelet[2329]: I0621 05:29:54.212905 2329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03811de30ef6cb1a375a1329dc4cfb9e-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.0-0-a0fa6d352b\" (UID: \"03811de30ef6cb1a375a1329dc4cfb9e\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.213542 kubelet[2329]: I0621 05:29:54.213325 2329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03811de30ef6cb1a375a1329dc4cfb9e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-0-a0fa6d352b\" (UID: \"03811de30ef6cb1a375a1329dc4cfb9e\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.213542 kubelet[2329]: I0621 05:29:54.213352 2329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03811de30ef6cb1a375a1329dc4cfb9e-ca-certs\") pod \"kube-controller-manager-ci-4372.0.0-0-a0fa6d352b\" (UID: \"03811de30ef6cb1a375a1329dc4cfb9e\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.213542 kubelet[2329]: I0621 05:29:54.213370 2329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03811de30ef6cb1a375a1329dc4cfb9e-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-0-a0fa6d352b\" (UID: \"03811de30ef6cb1a375a1329dc4cfb9e\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.213542 kubelet[2329]: I0621 05:29:54.213387 2329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03811de30ef6cb1a375a1329dc4cfb9e-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-0-a0fa6d352b\" (UID: \"03811de30ef6cb1a375a1329dc4cfb9e\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.213542 kubelet[2329]: I0621 05:29:54.213408 2329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74b6c10e88b75f833e2c87d8b63dfc3d-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-0-a0fa6d352b\" (UID: \"74b6c10e88b75f833e2c87d8b63dfc3d\") " pod="kube-system/kube-scheduler-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.213723 kubelet[2329]: I0621 05:29:54.213423 2329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a8183a52acf1b10dd3dff7a659f93b6-ca-certs\") pod \"kube-apiserver-ci-4372.0.0-0-a0fa6d352b\" (UID: \"6a8183a52acf1b10dd3dff7a659f93b6\") " pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.213723 kubelet[2329]: I0621 05:29:54.213437 2329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a8183a52acf1b10dd3dff7a659f93b6-k8s-certs\") pod \"kube-apiserver-ci-4372.0.0-0-a0fa6d352b\" (UID: \"6a8183a52acf1b10dd3dff7a659f93b6\") " pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.213723 kubelet[2329]: I0621 05:29:54.213453 2329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a8183a52acf1b10dd3dff7a659f93b6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-0-a0fa6d352b\" (UID: \"6a8183a52acf1b10dd3dff7a659f93b6\") " pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.213622 systemd[1]: Created slice kubepods-burstable-pod74b6c10e88b75f833e2c87d8b63dfc3d.slice - libcontainer container kubepods-burstable-pod74b6c10e88b75f833e2c87d8b63dfc3d.slice. Jun 21 05:29:54.214504 kubelet[2329]: E0621 05:29:54.214430 2329 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.73.218:6443/api/v1/nodes\": dial tcp 164.92.73.218:6443: connect: connection refused" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.215345 kubelet[2329]: E0621 05:29:54.215281 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.73.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-0-a0fa6d352b?timeout=10s\": dial tcp 164.92.73.218:6443: connect: connection refused" interval="400ms" Jun 21 05:29:54.217303 kubelet[2329]: E0621 05:29:54.217263 2329 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.416664 kubelet[2329]: I0621 05:29:54.416622 2329 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.417241 kubelet[2329]: E0621 05:29:54.417212 2329 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.73.218:6443/api/v1/nodes\": dial tcp 164.92.73.218:6443: connect: connection refused" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.490315 kubelet[2329]: E0621 05:29:54.490160 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:54.493195 containerd[1542]: time="2025-06-21T05:29:54.493113384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-0-a0fa6d352b,Uid:6a8183a52acf1b10dd3dff7a659f93b6,Namespace:kube-system,Attempt:0,}" Jun 21 05:29:54.509349 kubelet[2329]: E0621 05:29:54.509296 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:54.518440 kubelet[2329]: E0621 05:29:54.518273 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:54.521152 containerd[1542]: time="2025-06-21T05:29:54.520870873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-0-a0fa6d352b,Uid:03811de30ef6cb1a375a1329dc4cfb9e,Namespace:kube-system,Attempt:0,}" Jun 21 05:29:54.531267 containerd[1542]: time="2025-06-21T05:29:54.531203217Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-0-a0fa6d352b,Uid:74b6c10e88b75f833e2c87d8b63dfc3d,Namespace:kube-system,Attempt:0,}" Jun 21 05:29:54.616180 kubelet[2329]: E0621 05:29:54.616048 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.73.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-0-a0fa6d352b?timeout=10s\": dial tcp 164.92.73.218:6443: connect: connection refused" interval="800ms" Jun 21 05:29:54.641167 containerd[1542]: time="2025-06-21T05:29:54.640521159Z" level=info msg="connecting to shim 10c8312e97d0d99ba6f304b5834df6b5bc5654b654ce6343d9258fa6d558f175" address="unix:///run/containerd/s/4daefa321c54ae17ab67c68b21cf718d24efb142de3779d42205bcd7c610ecc5" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:54.646526 containerd[1542]: time="2025-06-21T05:29:54.646456457Z" level=info msg="connecting to shim efcc2b414b594249076468fc0b0eb6a34d104508b6b8fa584ed02921f9bc6054" address="unix:///run/containerd/s/a79134f83e30232819a450a2f51371a9510b17d172a12c3998ef04a5d8a890aa" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:54.648665 containerd[1542]: time="2025-06-21T05:29:54.648574376Z" level=info msg="connecting to shim 8a537cb0008971c6da77e977656a060d5250b9126876ccb504bd7e36f089f0e7" address="unix:///run/containerd/s/39506f54ca867eda70aca5c103ccd5e7f4ed53c2a4860af3d66f9b4e586e2b20" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:29:54.783375 systemd[1]: Started cri-containerd-10c8312e97d0d99ba6f304b5834df6b5bc5654b654ce6343d9258fa6d558f175.scope - libcontainer container 10c8312e97d0d99ba6f304b5834df6b5bc5654b654ce6343d9258fa6d558f175. Jun 21 05:29:54.805956 systemd[1]: Started cri-containerd-8a537cb0008971c6da77e977656a060d5250b9126876ccb504bd7e36f089f0e7.scope - libcontainer container 8a537cb0008971c6da77e977656a060d5250b9126876ccb504bd7e36f089f0e7. Jun 21 05:29:54.807939 systemd[1]: Started cri-containerd-efcc2b414b594249076468fc0b0eb6a34d104508b6b8fa584ed02921f9bc6054.scope - libcontainer container efcc2b414b594249076468fc0b0eb6a34d104508b6b8fa584ed02921f9bc6054. 
Jun 21 05:29:54.819615 kubelet[2329]: I0621 05:29:54.819354 2329 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.820080 kubelet[2329]: E0621 05:29:54.820041 2329 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.73.218:6443/api/v1/nodes\": dial tcp 164.92.73.218:6443: connect: connection refused" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:54.899367 kubelet[2329]: E0621 05:29:54.899311 2329 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.92.73.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.73.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 21 05:29:54.903063 containerd[1542]: time="2025-06-21T05:29:54.902993135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-0-a0fa6d352b,Uid:03811de30ef6cb1a375a1329dc4cfb9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"10c8312e97d0d99ba6f304b5834df6b5bc5654b654ce6343d9258fa6d558f175\"" Jun 21 05:29:54.907557 kubelet[2329]: E0621 05:29:54.907184 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:54.914443 containerd[1542]: time="2025-06-21T05:29:54.914394672Z" level=info msg="CreateContainer within sandbox \"10c8312e97d0d99ba6f304b5834df6b5bc5654b654ce6343d9258fa6d558f175\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 05:29:54.927832 containerd[1542]: time="2025-06-21T05:29:54.927445651Z" level=info msg="Container 8710ed2cc43066cc09c0e56167e130e017c50ef9c550b6ae6465360edd2f7bbf: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:54.958534 containerd[1542]: time="2025-06-21T05:29:54.958207101Z" level=info msg="CreateContainer within sandbox \"10c8312e97d0d99ba6f304b5834df6b5bc5654b654ce6343d9258fa6d558f175\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8710ed2cc43066cc09c0e56167e130e017c50ef9c550b6ae6465360edd2f7bbf\"" Jun 21 05:29:54.959522 containerd[1542]: time="2025-06-21T05:29:54.959485081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-0-a0fa6d352b,Uid:74b6c10e88b75f833e2c87d8b63dfc3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a537cb0008971c6da77e977656a060d5250b9126876ccb504bd7e36f089f0e7\"" Jun 21 05:29:54.960680 kubelet[2329]: E0621 05:29:54.960468 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:54.968782 containerd[1542]: time="2025-06-21T05:29:54.968652732Z" level=info msg="StartContainer for \"8710ed2cc43066cc09c0e56167e130e017c50ef9c550b6ae6465360edd2f7bbf\"" Jun 21 05:29:54.970130 containerd[1542]: time="2025-06-21T05:29:54.970033483Z" level=info msg="connecting to shim 8710ed2cc43066cc09c0e56167e130e017c50ef9c550b6ae6465360edd2f7bbf" address="unix:///run/containerd/s/4daefa321c54ae17ab67c68b21cf718d24efb142de3779d42205bcd7c610ecc5" protocol=ttrpc version=3 Jun 21 05:29:54.970417 containerd[1542]: time="2025-06-21T05:29:54.970360279Z" level=info msg="CreateContainer within sandbox \"8a537cb0008971c6da77e977656a060d5250b9126876ccb504bd7e36f089f0e7\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 05:29:54.982842 containerd[1542]: time="2025-06-21T05:29:54.982747568Z" level=info msg="Container bbf0dcb8ef61f8cfd234acc0cda0e96f10ed1e84d9918ef7cb109fd214500247: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:54.998446 systemd[1]: Started cri-containerd-8710ed2cc43066cc09c0e56167e130e017c50ef9c550b6ae6465360edd2f7bbf.scope - libcontainer container 8710ed2cc43066cc09c0e56167e130e017c50ef9c550b6ae6465360edd2f7bbf. Jun 21 05:29:55.000207 containerd[1542]: time="2025-06-21T05:29:55.000154611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-0-a0fa6d352b,Uid:6a8183a52acf1b10dd3dff7a659f93b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"efcc2b414b594249076468fc0b0eb6a34d104508b6b8fa584ed02921f9bc6054\"" Jun 21 05:29:55.004038 kubelet[2329]: E0621 05:29:55.003881 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:55.009029 containerd[1542]: time="2025-06-21T05:29:55.008918636Z" level=info msg="CreateContainer within sandbox \"8a537cb0008971c6da77e977656a060d5250b9126876ccb504bd7e36f089f0e7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bbf0dcb8ef61f8cfd234acc0cda0e96f10ed1e84d9918ef7cb109fd214500247\"" Jun 21 05:29:55.009640 containerd[1542]: time="2025-06-21T05:29:55.009506494Z" level=info msg="StartContainer for \"bbf0dcb8ef61f8cfd234acc0cda0e96f10ed1e84d9918ef7cb109fd214500247\"" Jun 21 05:29:55.010746 containerd[1542]: time="2025-06-21T05:29:55.010707321Z" level=info msg="CreateContainer within sandbox \"efcc2b414b594249076468fc0b0eb6a34d104508b6b8fa584ed02921f9bc6054\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 05:29:55.013806 containerd[1542]: time="2025-06-21T05:29:55.013728191Z" level=info msg="connecting to shim bbf0dcb8ef61f8cfd234acc0cda0e96f10ed1e84d9918ef7cb109fd214500247" address="unix:///run/containerd/s/39506f54ca867eda70aca5c103ccd5e7f4ed53c2a4860af3d66f9b4e586e2b20" protocol=ttrpc version=3 Jun 21 05:29:55.021163 containerd[1542]: time="2025-06-21T05:29:55.019282151Z" level=info msg="Container 3fc2fec299faa2e4d76d6a5afd877579a86288aa9d9db5a35bd3701cecd627f1: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:29:55.034538 containerd[1542]: time="2025-06-21T05:29:55.034389439Z" level=info msg="CreateContainer within sandbox \"efcc2b414b594249076468fc0b0eb6a34d104508b6b8fa584ed02921f9bc6054\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3fc2fec299faa2e4d76d6a5afd877579a86288aa9d9db5a35bd3701cecd627f1\"" Jun 21 05:29:55.037932 containerd[1542]: time="2025-06-21T05:29:55.037882363Z" level=info msg="StartContainer for \"3fc2fec299faa2e4d76d6a5afd877579a86288aa9d9db5a35bd3701cecd627f1\"" Jun 21 05:29:55.040755 containerd[1542]: time="2025-06-21T05:29:55.040189165Z" level=info msg="connecting to shim 3fc2fec299faa2e4d76d6a5afd877579a86288aa9d9db5a35bd3701cecd627f1" address="unix:///run/containerd/s/a79134f83e30232819a450a2f51371a9510b17d172a12c3998ef04a5d8a890aa" protocol=ttrpc version=3 Jun 21 05:29:55.065402 systemd[1]: Started cri-containerd-bbf0dcb8ef61f8cfd234acc0cda0e96f10ed1e84d9918ef7cb109fd214500247.scope - libcontainer container bbf0dcb8ef61f8cfd234acc0cda0e96f10ed1e84d9918ef7cb109fd214500247. 
Jun 21 05:29:55.076674 systemd[1]: Started cri-containerd-3fc2fec299faa2e4d76d6a5afd877579a86288aa9d9db5a35bd3701cecd627f1.scope - libcontainer container 3fc2fec299faa2e4d76d6a5afd877579a86288aa9d9db5a35bd3701cecd627f1. Jun 21 05:29:55.078562 kubelet[2329]: E0621 05:29:55.078514 2329 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.92.73.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-0-a0fa6d352b&limit=500&resourceVersion=0\": dial tcp 164.92.73.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 21 05:29:55.127269 containerd[1542]: time="2025-06-21T05:29:55.127218974Z" level=info msg="StartContainer for \"8710ed2cc43066cc09c0e56167e130e017c50ef9c550b6ae6465360edd2f7bbf\" returns successfully" Jun 21 05:29:55.210877 containerd[1542]: time="2025-06-21T05:29:55.210833596Z" level=info msg="StartContainer for \"bbf0dcb8ef61f8cfd234acc0cda0e96f10ed1e84d9918ef7cb109fd214500247\" returns successfully" Jun 21 05:29:55.222478 containerd[1542]: time="2025-06-21T05:29:55.222433702Z" level=info msg="StartContainer for \"3fc2fec299faa2e4d76d6a5afd877579a86288aa9d9db5a35bd3701cecd627f1\" returns successfully" Jun 21 05:29:55.304591 kubelet[2329]: E0621 05:29:55.304411 2329 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.92.73.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.73.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 21 05:29:55.418159 kubelet[2329]: E0621 05:29:55.417919 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.73.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-0-a0fa6d352b?timeout=10s\": dial tcp 164.92.73.218:6443: connect: connection refused" interval="1.6s" Jun 21 05:29:55.418647 kubelet[2329]: E0621 05:29:55.418607 2329 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.92.73.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.73.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 21 05:29:55.623021 kubelet[2329]: I0621 05:29:55.621997 2329 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:56.111390 kubelet[2329]: E0621 05:29:56.110770 2329 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:56.111390 kubelet[2329]: E0621 05:29:56.110978 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:56.117081 kubelet[2329]: E0621 05:29:56.117049 2329 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:56.118086 kubelet[2329]: E0621 05:29:56.118010 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:56.121164 
kubelet[2329]: E0621 05:29:56.120561 2329 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:56.122636 kubelet[2329]: E0621 05:29:56.122529 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:57.124592 kubelet[2329]: E0621 05:29:57.124456 2329 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:57.127943 kubelet[2329]: E0621 05:29:57.127877 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:57.129458 kubelet[2329]: E0621 05:29:57.129412 2329 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:57.129852 kubelet[2329]: E0621 05:29:57.129655 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:57.129963 kubelet[2329]: E0621 05:29:57.129855 2329 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:57.130369 kubelet[2329]: E0621 05:29:57.130031 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:57.896246 kubelet[2329]: E0621 05:29:57.896189 2329 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.0-0-a0fa6d352b\" not found" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:57.981248 kubelet[2329]: I0621 05:29:57.981196 2329 apiserver.go:52] "Watching apiserver" Jun 21 05:29:57.998300 kubelet[2329]: I0621 05:29:57.998246 2329 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:58.012439 kubelet[2329]: I0621 05:29:58.012196 2329 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:58.012696 kubelet[2329]: I0621 05:29:58.012658 2329 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 05:29:58.072395 kubelet[2329]: E0621 05:29:58.072357 2329 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.0-0-a0fa6d352b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:58.072395 kubelet[2329]: I0621 05:29:58.072388 2329 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:58.075139 kubelet[2329]: E0621 05:29:58.075066 2329 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.0-0-a0fa6d352b\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:58.075456 kubelet[2329]: I0621 05:29:58.075318 2329 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:58.077430 kubelet[2329]: E0621 05:29:58.077389 2329 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.0-0-a0fa6d352b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:58.125145 kubelet[2329]: I0621 05:29:58.125068 2329 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:58.125623 kubelet[2329]: I0621 05:29:58.125454 2329 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:58.129708 kubelet[2329]: E0621 05:29:58.129559 2329 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.0-0-a0fa6d352b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:58.130527 kubelet[2329]: E0621 05:29:58.130149 2329 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.0-0-a0fa6d352b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:29:58.131383 kubelet[2329]: E0621 05:29:58.131316 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:29:58.131383 kubelet[2329]: E0621 05:29:58.131360 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:00.327923 systemd[1]: Reload requested from client PID 2611 ('systemctl') (unit session-7.scope)... Jun 21 05:30:00.328533 systemd[1]: Reloading... Jun 21 05:30:00.526187 zram_generator::config[2652]: No configuration found. Jun 21 05:30:00.744939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:30:00.953853 systemd[1]: Reloading finished in 624 ms. Jun 21 05:30:01.004779 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:30:01.022046 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 05:30:01.022543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:30:01.022634 systemd[1]: kubelet.service: Consumed 1.180s CPU time, 126.5M memory peak. Jun 21 05:30:01.028980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:30:01.239035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:30:01.256233 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 05:30:01.356996 kubelet[2705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 21 05:30:01.356996 kubelet[2705]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 21 05:30:01.356996 kubelet[2705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:30:01.357838 kubelet[2705]: I0621 05:30:01.356994 2705 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 05:30:01.384891 kubelet[2705]: I0621 05:30:01.384398 2705 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 21 05:30:01.384891 kubelet[2705]: I0621 05:30:01.384438 2705 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 05:30:01.384891 kubelet[2705]: I0621 05:30:01.384720 2705 server.go:956] "Client rotation is on, will bootstrap in background" Jun 21 05:30:01.387335 kubelet[2705]: I0621 05:30:01.387303 2705 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 21 05:30:01.398238 kubelet[2705]: I0621 05:30:01.398185 2705 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 05:30:01.416527 kubelet[2705]: I0621 05:30:01.416471 2705 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 05:30:01.424268 kubelet[2705]: I0621 05:30:01.423974 2705 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 21 05:30:01.425095 kubelet[2705]: I0621 05:30:01.425036 2705 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 05:30:01.425580 kubelet[2705]: I0621 05:30:01.425309 2705 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4372.0.0-0-a0fa6d352b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 05:30:01.425869 kubelet[2705]: I0621 05:30:01.425844 2705 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 05:30:01.425979 kubelet[2705]: I0621 05:30:01.425966 2705 container_manager_linux.go:303] "Creating device plugin manager" Jun 21 05:30:01.426187 kubelet[2705]: I0621 05:30:01.426166 2705 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:30:01.426724 kubelet[2705]: I0621 05:30:01.426570 2705 kubelet.go:480] "Attempting to sync node with API server" Jun 21 05:30:01.426724 kubelet[2705]: I0621 05:30:01.426602 2705 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 05:30:01.426724 kubelet[2705]: I0621 05:30:01.426633 2705 kubelet.go:386] "Adding apiserver pod source" Jun 21 05:30:01.426724 kubelet[2705]: I0621 05:30:01.426649 2705 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 05:30:01.432405 kubelet[2705]: I0621 05:30:01.432086 2705 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 05:30:01.435436 kubelet[2705]: I0621 05:30:01.433609 2705 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 21 05:30:01.455529 kubelet[2705]: I0621 05:30:01.454983 2705 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 05:30:01.456059 kubelet[2705]: I0621 05:30:01.455951 2705 server.go:1289] "Started kubelet" Jun 21 05:30:01.462552 kubelet[2705]: I0621 05:30:01.462520 2705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 05:30:01.476197 kubelet[2705]: I0621 05:30:01.476101 2705 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 05:30:01.479037 kubelet[2705]: I0621 05:30:01.478991 2705 server.go:317] "Adding debug handlers to kubelet server" Jun 21 05:30:01.495983 kubelet[2705]: I0621 05:30:01.487820 2705 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 05:30:01.497152 kubelet[2705]: I0621 05:30:01.496685 2705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 05:30:01.498307 kubelet[2705]: I0621 05:30:01.497854 2705 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 05:30:01.498851 kubelet[2705]: E0621 05:30:01.492086 2705 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-0-a0fa6d352b\" not found" Jun 21 05:30:01.499883 kubelet[2705]: I0621 05:30:01.491083 2705 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 05:30:01.501789 kubelet[2705]: I0621 05:30:01.491823 2705 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 05:30:01.502204 kubelet[2705]: I0621 05:30:01.502097 2705 reconciler.go:26] "Reconciler: start to sync state" Jun 21 05:30:01.503045 kubelet[2705]: I0621 05:30:01.503007 2705 factory.go:223] Registration of the systemd container factory successfully Jun 21 05:30:01.503505 kubelet[2705]: I0621 05:30:01.503451 2705 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 05:30:01.538751 kubelet[2705]: E0621 05:30:01.538689 2705 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 05:30:01.540378 kubelet[2705]: I0621 05:30:01.540338 2705 factory.go:223] Registration of the containerd container factory successfully Jun 21 05:30:01.552889 kubelet[2705]: I0621 05:30:01.552750 2705 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 21 05:30:01.558454 kubelet[2705]: I0621 05:30:01.558341 2705 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 21 05:30:01.558454 kubelet[2705]: I0621 05:30:01.558411 2705 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 21 05:30:01.558936 kubelet[2705]: I0621 05:30:01.558818 2705 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 21 05:30:01.561918 kubelet[2705]: I0621 05:30:01.561850 2705 kubelet.go:2436] "Starting kubelet main sync loop" Jun 21 05:30:01.562434 kubelet[2705]: E0621 05:30:01.562279 2705 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 05:30:01.664965 kubelet[2705]: E0621 05:30:01.663504 2705 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 21 05:30:01.702430 kubelet[2705]: I0621 05:30:01.701971 2705 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 05:30:01.702430 kubelet[2705]: I0621 05:30:01.701997 2705 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 05:30:01.702430 kubelet[2705]: I0621 05:30:01.702035 2705 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:30:01.702430 kubelet[2705]: I0621 05:30:01.702257 2705 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 05:30:01.702430 kubelet[2705]: I0621 05:30:01.702270 2705 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 05:30:01.702430 kubelet[2705]: I0621 05:30:01.702295 2705 policy_none.go:49] "None policy: Start" Jun 21 05:30:01.702430 kubelet[2705]: I0621 05:30:01.702308 2705 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 05:30:01.702430 kubelet[2705]: I0621 05:30:01.702320 2705 state_mem.go:35] "Initializing new in-memory state store" Jun 21 05:30:01.702843 kubelet[2705]: I0621 05:30:01.702592 2705 state_mem.go:75] "Updated machine memory state" Jun 21 05:30:01.712547 kubelet[2705]: E0621 05:30:01.710107 2705 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 21 05:30:01.713293 kubelet[2705]: I0621 05:30:01.713263 2705 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 05:30:01.714615 kubelet[2705]: I0621 05:30:01.713428 2705 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 05:30:01.716830 kubelet[2705]: I0621 05:30:01.716583 2705 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 05:30:01.725941 kubelet[2705]: E0621 05:30:01.724630 2705 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 21 05:30:01.836706 kubelet[2705]: I0621 05:30:01.836249 2705 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.862286 kubelet[2705]: I0621 05:30:01.860781 2705 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.862286 kubelet[2705]: I0621 05:30:01.860946 2705 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.868104 kubelet[2705]: I0621 05:30:01.868049 2705 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.869526 kubelet[2705]: I0621 05:30:01.868772 2705 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.869526 kubelet[2705]: I0621 05:30:01.869161 2705 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.892862 kubelet[2705]: I0621 05:30:01.891672 2705 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 21 05:30:01.896558 kubelet[2705]: I0621 05:30:01.896508 2705 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 21 05:30:01.899790 kubelet[2705]: I0621 05:30:01.899071 2705 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 21 05:30:01.906749 kubelet[2705]: I0621 05:30:01.906273 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03811de30ef6cb1a375a1329dc4cfb9e-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-0-a0fa6d352b\" (UID: \"03811de30ef6cb1a375a1329dc4cfb9e\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.906749 kubelet[2705]: I0621 05:30:01.906333 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03811de30ef6cb1a375a1329dc4cfb9e-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-0-a0fa6d352b\" (UID: \"03811de30ef6cb1a375a1329dc4cfb9e\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.906749 kubelet[2705]: I0621 05:30:01.906362 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03811de30ef6cb1a375a1329dc4cfb9e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-0-a0fa6d352b\" (UID: \"03811de30ef6cb1a375a1329dc4cfb9e\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.906749 kubelet[2705]: I0621 05:30:01.906396 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a8183a52acf1b10dd3dff7a659f93b6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-0-a0fa6d352b\" (UID: \"6a8183a52acf1b10dd3dff7a659f93b6\") " pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 
05:30:01.906749 kubelet[2705]: I0621 05:30:01.906439 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03811de30ef6cb1a375a1329dc4cfb9e-ca-certs\") pod \"kube-controller-manager-ci-4372.0.0-0-a0fa6d352b\" (UID: \"03811de30ef6cb1a375a1329dc4cfb9e\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.907112 kubelet[2705]: I0621 05:30:01.906480 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74b6c10e88b75f833e2c87d8b63dfc3d-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-0-a0fa6d352b\" (UID: \"74b6c10e88b75f833e2c87d8b63dfc3d\") " pod="kube-system/kube-scheduler-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.907112 kubelet[2705]: I0621 05:30:01.906510 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a8183a52acf1b10dd3dff7a659f93b6-ca-certs\") pod \"kube-apiserver-ci-4372.0.0-0-a0fa6d352b\" (UID: \"6a8183a52acf1b10dd3dff7a659f93b6\") " pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.907112 kubelet[2705]: I0621 05:30:01.906535 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a8183a52acf1b10dd3dff7a659f93b6-k8s-certs\") pod \"kube-apiserver-ci-4372.0.0-0-a0fa6d352b\" (UID: \"6a8183a52acf1b10dd3dff7a659f93b6\") " pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:01.907112 kubelet[2705]: I0621 05:30:01.906573 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03811de30ef6cb1a375a1329dc4cfb9e-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.0-0-a0fa6d352b\" (UID: \"03811de30ef6cb1a375a1329dc4cfb9e\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:02.195696 kubelet[2705]: E0621 05:30:02.193038 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:02.198274 kubelet[2705]: E0621 05:30:02.197327 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:02.201182 kubelet[2705]: E0621 05:30:02.201021 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:02.428800 kubelet[2705]: I0621 05:30:02.428737 2705 apiserver.go:52] "Watching apiserver" Jun 21 05:30:02.501965 kubelet[2705]: I0621 05:30:02.501790 2705 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 05:30:02.629791 kubelet[2705]: E0621 05:30:02.629498 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:02.633156 kubelet[2705]: I0621 05:30:02.630429 2705 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:02.637566 
kubelet[2705]: E0621 05:30:02.635654 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:02.665490 kubelet[2705]: I0621 05:30:02.665441 2705 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 21 05:30:02.666483 kubelet[2705]: E0621 05:30:02.665842 2705 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.0-0-a0fa6d352b\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:02.666483 kubelet[2705]: E0621 05:30:02.666084 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:02.709319 kubelet[2705]: I0621 05:30:02.709229 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.0-0-a0fa6d352b" podStartSLOduration=1.7091913349999999 podStartE2EDuration="1.709191335s" podCreationTimestamp="2025-06-21 05:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:30:02.707258031 +0000 UTC m=+1.439055853" watchObservedRunningTime="2025-06-21 05:30:02.709191335 +0000 UTC m=+1.440989149" Jun 21 05:30:02.767712 kubelet[2705]: I0621 05:30:02.767440 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.0-0-a0fa6d352b" podStartSLOduration=1.76740012 podStartE2EDuration="1.76740012s" podCreationTimestamp="2025-06-21 05:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:30:02.741390715 +0000 UTC m=+1.473188538" watchObservedRunningTime="2025-06-21 05:30:02.76740012 +0000 UTC m=+1.499197940" Jun 21 05:30:02.791762 kubelet[2705]: I0621 05:30:02.791677 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.0-0-a0fa6d352b" podStartSLOduration=1.791655976 podStartE2EDuration="1.791655976s" podCreationTimestamp="2025-06-21 05:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:30:02.767849277 +0000 UTC m=+1.499647100" watchObservedRunningTime="2025-06-21 05:30:02.791655976 +0000 UTC m=+1.523453799" Jun 21 05:30:03.631892 kubelet[2705]: E0621 05:30:03.631775 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:03.631892 kubelet[2705]: E0621 05:30:03.631776 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:04.632866 kubelet[2705]: E0621 05:30:04.632821 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:04.633633 kubelet[2705]: E0621 05:30:04.633602 2705 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:05.422748 kubelet[2705]: I0621 05:30:05.422665 2705 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 21 05:30:05.423890 containerd[1542]: time="2025-06-21T05:30:05.423788391Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 21 05:30:05.424862 kubelet[2705]: I0621 05:30:05.424521 2705 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 21 05:30:06.272242 systemd[1]: Created slice kubepods-besteffort-pod2797791c_6ea6_45ac_9589_f9a7968b64fd.slice - libcontainer container kubepods-besteffort-pod2797791c_6ea6_45ac_9589_f9a7968b64fd.slice. Jun 21 05:30:06.336632 kubelet[2705]: I0621 05:30:06.336441 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2797791c-6ea6-45ac-9589-f9a7968b64fd-kube-proxy\") pod \"kube-proxy-mm4mc\" (UID: \"2797791c-6ea6-45ac-9589-f9a7968b64fd\") " pod="kube-system/kube-proxy-mm4mc" Jun 21 05:30:06.336632 kubelet[2705]: I0621 05:30:06.336493 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qpsc\" (UniqueName: \"kubernetes.io/projected/2797791c-6ea6-45ac-9589-f9a7968b64fd-kube-api-access-9qpsc\") pod \"kube-proxy-mm4mc\" (UID: \"2797791c-6ea6-45ac-9589-f9a7968b64fd\") " pod="kube-system/kube-proxy-mm4mc" Jun 21 05:30:06.336632 kubelet[2705]: I0621 05:30:06.336515 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2797791c-6ea6-45ac-9589-f9a7968b64fd-xtables-lock\") pod \"kube-proxy-mm4mc\" (UID: \"2797791c-6ea6-45ac-9589-f9a7968b64fd\") " pod="kube-system/kube-proxy-mm4mc" Jun 21 05:30:06.336632 kubelet[2705]: I0621 05:30:06.336532 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2797791c-6ea6-45ac-9589-f9a7968b64fd-lib-modules\") pod \"kube-proxy-mm4mc\" (UID: \"2797791c-6ea6-45ac-9589-f9a7968b64fd\") " pod="kube-system/kube-proxy-mm4mc" Jun 21 05:30:06.582017 kubelet[2705]: E0621 05:30:06.581084 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:06.583634 containerd[1542]: time="2025-06-21T05:30:06.583472254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mm4mc,Uid:2797791c-6ea6-45ac-9589-f9a7968b64fd,Namespace:kube-system,Attempt:0,}" Jun 21 05:30:06.624568 systemd[1]: Created slice kubepods-besteffort-pod3b312e88_0334_46e5_9494_2f978cec4c28.slice - libcontainer container kubepods-besteffort-pod3b312e88_0334_46e5_9494_2f978cec4c28.slice. 
Jun 21 05:30:06.638865 containerd[1542]: time="2025-06-21T05:30:06.638363499Z" level=info msg="connecting to shim 087cae2cfab04589820ca7a6ba63fdb85d1c5fab48dd68cb23c55cb14e59afbe" address="unix:///run/containerd/s/f8ae008dddfbea33241bf2b5d38ab37e540c8541f971ca8bc2f233c605579932" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:06.640097 kubelet[2705]: I0621 05:30:06.639972 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3b312e88-0334-46e5-9494-2f978cec4c28-var-lib-calico\") pod \"tigera-operator-68f7c7984d-8stk7\" (UID: \"3b312e88-0334-46e5-9494-2f978cec4c28\") " pod="tigera-operator/tigera-operator-68f7c7984d-8stk7" Jun 21 05:30:06.640660 kubelet[2705]: I0621 05:30:06.640635 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84x5t\" (UniqueName: \"kubernetes.io/projected/3b312e88-0334-46e5-9494-2f978cec4c28-kube-api-access-84x5t\") pod \"tigera-operator-68f7c7984d-8stk7\" (UID: \"3b312e88-0334-46e5-9494-2f978cec4c28\") " pod="tigera-operator/tigera-operator-68f7c7984d-8stk7" Jun 21 05:30:06.681485 systemd[1]: Started cri-containerd-087cae2cfab04589820ca7a6ba63fdb85d1c5fab48dd68cb23c55cb14e59afbe.scope - libcontainer container 087cae2cfab04589820ca7a6ba63fdb85d1c5fab48dd68cb23c55cb14e59afbe. Jun 21 05:30:06.727106 containerd[1542]: time="2025-06-21T05:30:06.726786370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mm4mc,Uid:2797791c-6ea6-45ac-9589-f9a7968b64fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"087cae2cfab04589820ca7a6ba63fdb85d1c5fab48dd68cb23c55cb14e59afbe\"" Jun 21 05:30:06.728887 kubelet[2705]: E0621 05:30:06.728850 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:06.737026 containerd[1542]: time="2025-06-21T05:30:06.736885756Z" level=info msg="CreateContainer within sandbox \"087cae2cfab04589820ca7a6ba63fdb85d1c5fab48dd68cb23c55cb14e59afbe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 05:30:06.762821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590710434.mount: Deactivated successfully. 
Jun 21 05:30:06.765172 containerd[1542]: time="2025-06-21T05:30:06.764869905Z" level=info msg="Container 3be5e2f6360bc81b5eaeea64f7f2a77a63c2a19a66ea617a835d2afcce0cbb1f: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:06.785521 containerd[1542]: time="2025-06-21T05:30:06.785429051Z" level=info msg="CreateContainer within sandbox \"087cae2cfab04589820ca7a6ba63fdb85d1c5fab48dd68cb23c55cb14e59afbe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3be5e2f6360bc81b5eaeea64f7f2a77a63c2a19a66ea617a835d2afcce0cbb1f\"" Jun 21 05:30:06.786606 containerd[1542]: time="2025-06-21T05:30:06.786548130Z" level=info msg="StartContainer for \"3be5e2f6360bc81b5eaeea64f7f2a77a63c2a19a66ea617a835d2afcce0cbb1f\"" Jun 21 05:30:06.792157 containerd[1542]: time="2025-06-21T05:30:06.791585315Z" level=info msg="connecting to shim 3be5e2f6360bc81b5eaeea64f7f2a77a63c2a19a66ea617a835d2afcce0cbb1f" address="unix:///run/containerd/s/f8ae008dddfbea33241bf2b5d38ab37e540c8541f971ca8bc2f233c605579932" protocol=ttrpc version=3 Jun 21 05:30:06.822401 systemd[1]: Started cri-containerd-3be5e2f6360bc81b5eaeea64f7f2a77a63c2a19a66ea617a835d2afcce0cbb1f.scope - libcontainer container 3be5e2f6360bc81b5eaeea64f7f2a77a63c2a19a66ea617a835d2afcce0cbb1f. Jun 21 05:30:06.888487 containerd[1542]: time="2025-06-21T05:30:06.887627482Z" level=info msg="StartContainer for \"3be5e2f6360bc81b5eaeea64f7f2a77a63c2a19a66ea617a835d2afcce0cbb1f\" returns successfully" Jun 21 05:30:06.933272 containerd[1542]: time="2025-06-21T05:30:06.933210250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-8stk7,Uid:3b312e88-0334-46e5-9494-2f978cec4c28,Namespace:tigera-operator,Attempt:0,}" Jun 21 05:30:06.971433 containerd[1542]: time="2025-06-21T05:30:06.971277430Z" level=info msg="connecting to shim 7dc1945e912c42d13a75fdf0ab9bff3473bdf9406186b23babf6effb1ab21540" address="unix:///run/containerd/s/bdf64d949d55c7917b63dd1369aa1eb4601957260423b374a792e26b1796f206" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:07.018719 systemd[1]: Started cri-containerd-7dc1945e912c42d13a75fdf0ab9bff3473bdf9406186b23babf6effb1ab21540.scope - libcontainer container 7dc1945e912c42d13a75fdf0ab9bff3473bdf9406186b23babf6effb1ab21540. Jun 21 05:30:07.106637 containerd[1542]: time="2025-06-21T05:30:07.106571512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-8stk7,Uid:3b312e88-0334-46e5-9494-2f978cec4c28,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7dc1945e912c42d13a75fdf0ab9bff3473bdf9406186b23babf6effb1ab21540\"" Jun 21 05:30:07.112556 containerd[1542]: time="2025-06-21T05:30:07.112509797Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 21 05:30:07.116086 systemd-resolved[1403]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jun 21 05:30:07.374512 kubelet[2705]: E0621 05:30:07.374473 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:07.645281 kubelet[2705]: E0621 05:30:07.642949 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:07.645281 kubelet[2705]: E0621 05:30:07.643143 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:07.786428 kubelet[2705]: I0621 05:30:07.786236 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mm4mc" podStartSLOduration=1.7862135879999999 podStartE2EDuration="1.786213588s" podCreationTimestamp="2025-06-21 05:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:30:07.784927212 +0000 UTC m=+6.516725035" watchObservedRunningTime="2025-06-21 05:30:07.786213588 +0000 UTC m=+6.518011410" Jun 21 05:30:09.048681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327836695.mount: Deactivated successfully. Jun 21 05:30:10.204152 containerd[1542]: time="2025-06-21T05:30:10.204058519Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:10.205876 containerd[1542]: time="2025-06-21T05:30:10.205822579Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 21 05:30:10.207147 containerd[1542]: time="2025-06-21T05:30:10.207066054Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:10.209275 containerd[1542]: time="2025-06-21T05:30:10.209201087Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:10.210114 containerd[1542]: time="2025-06-21T05:30:10.209932109Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 3.097378291s" Jun 21 05:30:10.210114 containerd[1542]: time="2025-06-21T05:30:10.209988915Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 21 05:30:10.215819 containerd[1542]: time="2025-06-21T05:30:10.215759981Z" level=info msg="CreateContainer within sandbox \"7dc1945e912c42d13a75fdf0ab9bff3473bdf9406186b23babf6effb1ab21540\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 21 05:30:10.227222 containerd[1542]: time="2025-06-21T05:30:10.227113987Z" level=info msg="Container d8255876ec28fc9dae3d5596ea34a8eddcd2970ebc4b3e2babf1ec15447d9f45: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:10.237582 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount797721669.mount: Deactivated successfully. Jun 21 05:30:10.248893 containerd[1542]: time="2025-06-21T05:30:10.248783021Z" level=info msg="CreateContainer within sandbox \"7dc1945e912c42d13a75fdf0ab9bff3473bdf9406186b23babf6effb1ab21540\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d8255876ec28fc9dae3d5596ea34a8eddcd2970ebc4b3e2babf1ec15447d9f45\"" Jun 21 05:30:10.252289 containerd[1542]: time="2025-06-21T05:30:10.252226789Z" level=info msg="StartContainer for \"d8255876ec28fc9dae3d5596ea34a8eddcd2970ebc4b3e2babf1ec15447d9f45\"" Jun 21 05:30:10.253942 containerd[1542]: time="2025-06-21T05:30:10.253834782Z" level=info msg="connecting to shim d8255876ec28fc9dae3d5596ea34a8eddcd2970ebc4b3e2babf1ec15447d9f45" address="unix:///run/containerd/s/bdf64d949d55c7917b63dd1369aa1eb4601957260423b374a792e26b1796f206" protocol=ttrpc version=3 Jun 21 05:30:10.290479 systemd[1]: Started cri-containerd-d8255876ec28fc9dae3d5596ea34a8eddcd2970ebc4b3e2babf1ec15447d9f45.scope - libcontainer container d8255876ec28fc9dae3d5596ea34a8eddcd2970ebc4b3e2babf1ec15447d9f45. Jun 21 05:30:10.337693 containerd[1542]: time="2025-06-21T05:30:10.337593636Z" level=info msg="StartContainer for \"d8255876ec28fc9dae3d5596ea34a8eddcd2970ebc4b3e2babf1ec15447d9f45\" returns successfully" Jun 21 05:30:13.114603 kubelet[2705]: E0621 05:30:13.114507 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:13.141292 kubelet[2705]: I0621 05:30:13.141190 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-8stk7" podStartSLOduration=4.040580953 podStartE2EDuration="7.141172058s" podCreationTimestamp="2025-06-21 05:30:06 +0000 UTC" firstStartedPulling="2025-06-21 05:30:07.110864711 +0000 UTC m=+5.842662517" lastFinishedPulling="2025-06-21 05:30:10.211455821 +0000 UTC m=+8.943253622" observedRunningTime="2025-06-21 05:30:10.671784252 +0000 UTC m=+9.403582083" watchObservedRunningTime="2025-06-21 05:30:13.141172058 +0000 UTC m=+11.872969880" Jun 21 05:30:13.444226 kubelet[2705]: E0621 05:30:13.444182 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:13.668930 kubelet[2705]: E0621 05:30:13.668509 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:15.926056 sudo[1774]: pam_unix(sudo:session): session closed for user root Jun 21 05:30:15.930698 sshd[1773]: Connection closed by 139.178.68.195 port 54204 Jun 21 05:30:15.933152 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:15.939992 systemd[1]: sshd@6-164.92.73.218:22-139.178.68.195:54204.service: Deactivated successfully. Jun 21 05:30:15.944626 systemd[1]: session-7.scope: Deactivated successfully. Jun 21 05:30:15.945006 systemd[1]: session-7.scope: Consumed 7.784s CPU time, 157.5M memory peak. Jun 21 05:30:15.951944 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit. Jun 21 05:30:15.954000 systemd-logind[1514]: Removed session 7. 
Jun 21 05:30:16.264945 update_engine[1516]: I20250621 05:30:16.264173 1516 update_attempter.cc:509] Updating boot flags... Jun 21 05:30:20.578327 systemd[1]: Created slice kubepods-besteffort-pod1dda97d4_db4d_4a32_b1ac_fcf295fb242b.slice - libcontainer container kubepods-besteffort-pod1dda97d4_db4d_4a32_b1ac_fcf295fb242b.slice. Jun 21 05:30:20.644626 kubelet[2705]: I0621 05:30:20.644525 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dda97d4-db4d-4a32-b1ac-fcf295fb242b-tigera-ca-bundle\") pod \"calico-typha-74d8ccc645-jnlqd\" (UID: \"1dda97d4-db4d-4a32-b1ac-fcf295fb242b\") " pod="calico-system/calico-typha-74d8ccc645-jnlqd" Jun 21 05:30:20.645760 kubelet[2705]: I0621 05:30:20.644685 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1dda97d4-db4d-4a32-b1ac-fcf295fb242b-typha-certs\") pod \"calico-typha-74d8ccc645-jnlqd\" (UID: \"1dda97d4-db4d-4a32-b1ac-fcf295fb242b\") " pod="calico-system/calico-typha-74d8ccc645-jnlqd" Jun 21 05:30:20.645760 kubelet[2705]: I0621 05:30:20.644713 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ddld\" (UniqueName: \"kubernetes.io/projected/1dda97d4-db4d-4a32-b1ac-fcf295fb242b-kube-api-access-6ddld\") pod \"calico-typha-74d8ccc645-jnlqd\" (UID: \"1dda97d4-db4d-4a32-b1ac-fcf295fb242b\") " pod="calico-system/calico-typha-74d8ccc645-jnlqd" Jun 21 05:30:20.885261 kubelet[2705]: E0621 05:30:20.883960 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:20.886832 containerd[1542]: time="2025-06-21T05:30:20.886492886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74d8ccc645-jnlqd,Uid:1dda97d4-db4d-4a32-b1ac-fcf295fb242b,Namespace:calico-system,Attempt:0,}" Jun 21 05:30:20.896806 systemd[1]: Created slice kubepods-besteffort-podcfbbdde8_f41e_42a4_9e5e_1c7fb8721f7f.slice - libcontainer container kubepods-besteffort-podcfbbdde8_f41e_42a4_9e5e_1c7fb8721f7f.slice. 
Jun 21 05:30:20.948695 kubelet[2705]: I0621 05:30:20.947081 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-flexvol-driver-host\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.948695 kubelet[2705]: I0621 05:30:20.947157 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-cni-bin-dir\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.948695 kubelet[2705]: I0621 05:30:20.947190 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-cni-net-dir\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.948695 kubelet[2705]: I0621 05:30:20.947214 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-node-certs\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.948695 kubelet[2705]: I0621 05:30:20.947234 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-xtables-lock\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.949149 kubelet[2705]: I0621 05:30:20.947255 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-policysync\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.949149 kubelet[2705]: I0621 05:30:20.947277 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-var-lib-calico\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.949149 kubelet[2705]: I0621 05:30:20.947300 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-var-run-calico\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.949149 kubelet[2705]: I0621 05:30:20.947314 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-cni-log-dir\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.949149 kubelet[2705]: I0621 05:30:20.947329 2705 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-lib-modules\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.949587 kubelet[2705]: I0621 05:30:20.947358 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-tigera-ca-bundle\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.949587 kubelet[2705]: I0621 05:30:20.947373 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzgv4\" (UniqueName: \"kubernetes.io/projected/cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f-kube-api-access-nzgv4\") pod \"calico-node-87snn\" (UID: \"cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f\") " pod="calico-system/calico-node-87snn" Jun 21 05:30:20.983070 containerd[1542]: time="2025-06-21T05:30:20.982940228Z" level=info msg="connecting to shim 3f7d463a9739c1a431aca9510435c984bd23598b3dfec6e1ff0be783377a412f" address="unix:///run/containerd/s/589f3e1c88a41bb7e4edac299a059fe422d09ca244efd5dd1f928ff4a9cec9c1" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:21.049610 systemd[1]: Started cri-containerd-3f7d463a9739c1a431aca9510435c984bd23598b3dfec6e1ff0be783377a412f.scope - libcontainer container 3f7d463a9739c1a431aca9510435c984bd23598b3dfec6e1ff0be783377a412f. Jun 21 05:30:21.054874 kubelet[2705]: E0621 05:30:21.054685 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.054874 kubelet[2705]: W0621 05:30:21.054712 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.054874 kubelet[2705]: E0621 05:30:21.054737 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.055427 kubelet[2705]: E0621 05:30:21.055373 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.055427 kubelet[2705]: W0621 05:30:21.055394 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.055642 kubelet[2705]: E0621 05:30:21.055413 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.055940 kubelet[2705]: E0621 05:30:21.055908 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.055940 kubelet[2705]: W0621 05:30:21.055922 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.056217 kubelet[2705]: E0621 05:30:21.056081 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.056486 kubelet[2705]: E0621 05:30:21.056471 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.056597 kubelet[2705]: W0621 05:30:21.056574 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.056713 kubelet[2705]: E0621 05:30:21.056643 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.060969 kubelet[2705]: E0621 05:30:21.060936 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.061327 kubelet[2705]: W0621 05:30:21.061147 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.061327 kubelet[2705]: E0621 05:30:21.061177 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.061566 kubelet[2705]: E0621 05:30:21.061551 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.061699 kubelet[2705]: W0621 05:30:21.061615 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.061699 kubelet[2705]: E0621 05:30:21.061630 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.062086 kubelet[2705]: E0621 05:30:21.062008 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.062086 kubelet[2705]: W0621 05:30:21.062027 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.062086 kubelet[2705]: E0621 05:30:21.062043 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.062489 kubelet[2705]: E0621 05:30:21.062453 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.062489 kubelet[2705]: W0621 05:30:21.062467 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.062654 kubelet[2705]: E0621 05:30:21.062581 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.062869 kubelet[2705]: E0621 05:30:21.062821 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.062869 kubelet[2705]: W0621 05:30:21.062835 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.062869 kubelet[2705]: E0621 05:30:21.062846 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.063224 kubelet[2705]: E0621 05:30:21.063187 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.063224 kubelet[2705]: W0621 05:30:21.063199 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.063224 kubelet[2705]: E0621 05:30:21.063210 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.063598 kubelet[2705]: E0621 05:30:21.063556 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.063598 kubelet[2705]: W0621 05:30:21.063568 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.063598 kubelet[2705]: E0621 05:30:21.063579 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.063964 kubelet[2705]: E0621 05:30:21.063913 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.063964 kubelet[2705]: W0621 05:30:21.063924 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.063964 kubelet[2705]: E0621 05:30:21.063934 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.064272 kubelet[2705]: E0621 05:30:21.064222 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.064272 kubelet[2705]: W0621 05:30:21.064233 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.064272 kubelet[2705]: E0621 05:30:21.064243 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.064726 kubelet[2705]: E0621 05:30:21.064672 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.064726 kubelet[2705]: W0621 05:30:21.064685 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.064726 kubelet[2705]: E0621 05:30:21.064697 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.068572 kubelet[2705]: E0621 05:30:21.068510 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.068572 kubelet[2705]: W0621 05:30:21.068539 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.068572 kubelet[2705]: E0621 05:30:21.068562 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.073168 kubelet[2705]: E0621 05:30:21.069563 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.073168 kubelet[2705]: W0621 05:30:21.069591 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.073168 kubelet[2705]: E0621 05:30:21.069613 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.073168 kubelet[2705]: E0621 05:30:21.070465 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.073168 kubelet[2705]: W0621 05:30:21.070482 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.073168 kubelet[2705]: E0621 05:30:21.070501 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.073168 kubelet[2705]: E0621 05:30:21.072290 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.073168 kubelet[2705]: W0621 05:30:21.072317 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.073168 kubelet[2705]: E0621 05:30:21.072342 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.080994 kubelet[2705]: E0621 05:30:21.080943 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.080994 kubelet[2705]: W0621 05:30:21.080974 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.081196 kubelet[2705]: E0621 05:30:21.081029 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.083015 kubelet[2705]: E0621 05:30:21.082980 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.083226 kubelet[2705]: W0621 05:30:21.083154 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.083226 kubelet[2705]: E0621 05:30:21.083184 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.202539 containerd[1542]: time="2025-06-21T05:30:21.202476953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74d8ccc645-jnlqd,Uid:1dda97d4-db4d-4a32-b1ac-fcf295fb242b,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f7d463a9739c1a431aca9510435c984bd23598b3dfec6e1ff0be783377a412f\"" Jun 21 05:30:21.204099 kubelet[2705]: E0621 05:30:21.204058 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:21.207036 containerd[1542]: time="2025-06-21T05:30:21.206947366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 21 05:30:21.215146 containerd[1542]: time="2025-06-21T05:30:21.214712537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-87snn,Uid:cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f,Namespace:calico-system,Attempt:0,}" Jun 21 05:30:21.219951 kubelet[2705]: E0621 05:30:21.219784 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvskp" podUID="0d57dcbc-26a6-4a6a-877e-2663d2596744" Jun 21 05:30:21.231290 kubelet[2705]: E0621 05:30:21.231249 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.231753 kubelet[2705]: W0621 05:30:21.231598 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.231753 kubelet[2705]: E0621 05:30:21.231631 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.232598 kubelet[2705]: E0621 05:30:21.232572 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.233317 kubelet[2705]: W0621 05:30:21.232772 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.233317 kubelet[2705]: E0621 05:30:21.233256 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.235582 kubelet[2705]: E0621 05:30:21.234198 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.236492 kubelet[2705]: W0621 05:30:21.236192 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.236492 kubelet[2705]: E0621 05:30:21.236273 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.236759 kubelet[2705]: E0621 05:30:21.236741 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.236869 kubelet[2705]: W0621 05:30:21.236852 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.236992 kubelet[2705]: E0621 05:30:21.236953 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.238794 kubelet[2705]: E0621 05:30:21.238764 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.238937 kubelet[2705]: W0621 05:30:21.238921 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.239083 kubelet[2705]: E0621 05:30:21.239063 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.239665 kubelet[2705]: E0621 05:30:21.239640 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.239922 kubelet[2705]: W0621 05:30:21.239772 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.239922 kubelet[2705]: E0621 05:30:21.239859 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.243417 kubelet[2705]: E0621 05:30:21.242733 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.243417 kubelet[2705]: W0621 05:30:21.242763 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.243417 kubelet[2705]: E0621 05:30:21.242788 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.244180 kubelet[2705]: E0621 05:30:21.243809 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.244180 kubelet[2705]: W0621 05:30:21.243827 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.244180 kubelet[2705]: E0621 05:30:21.243846 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.244913 kubelet[2705]: E0621 05:30:21.244743 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.244913 kubelet[2705]: W0621 05:30:21.244761 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.244913 kubelet[2705]: E0621 05:30:21.244779 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.246062 kubelet[2705]: E0621 05:30:21.245944 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.246062 kubelet[2705]: W0621 05:30:21.245964 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.246062 kubelet[2705]: E0621 05:30:21.245987 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.248424 kubelet[2705]: E0621 05:30:21.248281 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.248424 kubelet[2705]: W0621 05:30:21.248309 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.248424 kubelet[2705]: E0621 05:30:21.248335 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.249153 kubelet[2705]: E0621 05:30:21.249008 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.249153 kubelet[2705]: W0621 05:30:21.249028 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.249153 kubelet[2705]: E0621 05:30:21.249048 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.249503 kubelet[2705]: E0621 05:30:21.249490 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.249584 kubelet[2705]: W0621 05:30:21.249573 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.249638 kubelet[2705]: E0621 05:30:21.249629 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.249932 kubelet[2705]: E0621 05:30:21.249831 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.249932 kubelet[2705]: W0621 05:30:21.249848 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.249932 kubelet[2705]: E0621 05:30:21.249860 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.250975 kubelet[2705]: E0621 05:30:21.250270 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.250975 kubelet[2705]: W0621 05:30:21.250282 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.250975 kubelet[2705]: E0621 05:30:21.250293 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.253385 kubelet[2705]: E0621 05:30:21.253354 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.253551 kubelet[2705]: W0621 05:30:21.253514 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.253652 kubelet[2705]: E0621 05:30:21.253638 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.254069 kubelet[2705]: E0621 05:30:21.254001 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.254069 kubelet[2705]: W0621 05:30:21.254015 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.254069 kubelet[2705]: E0621 05:30:21.254029 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.254468 kubelet[2705]: E0621 05:30:21.254384 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.254468 kubelet[2705]: W0621 05:30:21.254400 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.254468 kubelet[2705]: E0621 05:30:21.254417 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.254745 kubelet[2705]: E0621 05:30:21.254732 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.255243 kubelet[2705]: W0621 05:30:21.254791 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.255243 kubelet[2705]: E0621 05:30:21.254804 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.255812 kubelet[2705]: E0621 05:30:21.255700 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.255812 kubelet[2705]: W0621 05:30:21.255719 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.256332 kubelet[2705]: E0621 05:30:21.256221 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.258296 kubelet[2705]: E0621 05:30:21.258227 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.258296 kubelet[2705]: W0621 05:30:21.258249 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.258296 kubelet[2705]: E0621 05:30:21.258270 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.258639 kubelet[2705]: I0621 05:30:21.258513 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5r24\" (UniqueName: \"kubernetes.io/projected/0d57dcbc-26a6-4a6a-877e-2663d2596744-kube-api-access-l5r24\") pod \"csi-node-driver-qvskp\" (UID: \"0d57dcbc-26a6-4a6a-877e-2663d2596744\") " pod="calico-system/csi-node-driver-qvskp" Jun 21 05:30:21.261579 containerd[1542]: time="2025-06-21T05:30:21.260986427Z" level=info msg="connecting to shim 9c566d750e82019d61172dbf067450192e7eb6b74342998662b6c58bb1bfe400" address="unix:///run/containerd/s/2f2134832e23459199078d137491340fdc1161371a4e65c932c2b5aae4cf8482" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:21.261720 kubelet[2705]: E0621 05:30:21.261430 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.261720 kubelet[2705]: W0621 05:30:21.261447 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.261720 kubelet[2705]: E0621 05:30:21.261468 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.264419 kubelet[2705]: E0621 05:30:21.263955 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.264419 kubelet[2705]: W0621 05:30:21.264194 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.264419 kubelet[2705]: E0621 05:30:21.264226 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.266045 kubelet[2705]: E0621 05:30:21.265107 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.266045 kubelet[2705]: W0621 05:30:21.265138 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.266045 kubelet[2705]: E0621 05:30:21.265157 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.266045 kubelet[2705]: I0621 05:30:21.265203 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0d57dcbc-26a6-4a6a-877e-2663d2596744-registration-dir\") pod \"csi-node-driver-qvskp\" (UID: \"0d57dcbc-26a6-4a6a-877e-2663d2596744\") " pod="calico-system/csi-node-driver-qvskp" Jun 21 05:30:21.266045 kubelet[2705]: E0621 05:30:21.265655 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.266045 kubelet[2705]: W0621 05:30:21.265674 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.266045 kubelet[2705]: E0621 05:30:21.265688 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.266701 kubelet[2705]: I0621 05:30:21.266583 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0d57dcbc-26a6-4a6a-877e-2663d2596744-socket-dir\") pod \"csi-node-driver-qvskp\" (UID: \"0d57dcbc-26a6-4a6a-877e-2663d2596744\") " pod="calico-system/csi-node-driver-qvskp" Jun 21 05:30:21.267313 kubelet[2705]: E0621 05:30:21.267288 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.267488 kubelet[2705]: W0621 05:30:21.267472 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.267650 kubelet[2705]: E0621 05:30:21.267636 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.268328 kubelet[2705]: I0621 05:30:21.268288 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0d57dcbc-26a6-4a6a-877e-2663d2596744-varrun\") pod \"csi-node-driver-qvskp\" (UID: \"0d57dcbc-26a6-4a6a-877e-2663d2596744\") " pod="calico-system/csi-node-driver-qvskp" Jun 21 05:30:21.269195 kubelet[2705]: E0621 05:30:21.268984 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.269552 kubelet[2705]: W0621 05:30:21.269386 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.269552 kubelet[2705]: E0621 05:30:21.269419 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.272242 kubelet[2705]: E0621 05:30:21.271201 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.272242 kubelet[2705]: W0621 05:30:21.272169 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.272242 kubelet[2705]: E0621 05:30:21.272211 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.273696 kubelet[2705]: E0621 05:30:21.273654 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.279565 kubelet[2705]: W0621 05:30:21.273678 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.279565 kubelet[2705]: E0621 05:30:21.279203 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.279565 kubelet[2705]: I0621 05:30:21.279533 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d57dcbc-26a6-4a6a-877e-2663d2596744-kubelet-dir\") pod \"csi-node-driver-qvskp\" (UID: \"0d57dcbc-26a6-4a6a-877e-2663d2596744\") " pod="calico-system/csi-node-driver-qvskp" Jun 21 05:30:21.282454 kubelet[2705]: E0621 05:30:21.282229 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.282454 kubelet[2705]: W0621 05:30:21.282266 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.282454 kubelet[2705]: E0621 05:30:21.282298 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.283426 kubelet[2705]: E0621 05:30:21.282999 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.283426 kubelet[2705]: W0621 05:30:21.283015 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.283426 kubelet[2705]: E0621 05:30:21.283033 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.284602 kubelet[2705]: E0621 05:30:21.284413 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.284602 kubelet[2705]: W0621 05:30:21.284441 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.284602 kubelet[2705]: E0621 05:30:21.284462 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.285044 kubelet[2705]: E0621 05:30:21.284917 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.285044 kubelet[2705]: W0621 05:30:21.284937 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.285044 kubelet[2705]: E0621 05:30:21.284953 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.285792 kubelet[2705]: E0621 05:30:21.285677 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.285792 kubelet[2705]: W0621 05:30:21.285696 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.285792 kubelet[2705]: E0621 05:30:21.285713 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.286477 kubelet[2705]: E0621 05:30:21.286461 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.286576 kubelet[2705]: W0621 05:30:21.286565 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.286873 kubelet[2705]: E0621 05:30:21.286858 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.320464 systemd[1]: Started cri-containerd-9c566d750e82019d61172dbf067450192e7eb6b74342998662b6c58bb1bfe400.scope - libcontainer container 9c566d750e82019d61172dbf067450192e7eb6b74342998662b6c58bb1bfe400. Jun 21 05:30:21.385804 kubelet[2705]: E0621 05:30:21.385775 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.386033 kubelet[2705]: W0621 05:30:21.386009 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.386152 kubelet[2705]: E0621 05:30:21.386138 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.386737 kubelet[2705]: E0621 05:30:21.386718 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.386860 kubelet[2705]: W0621 05:30:21.386848 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.386931 kubelet[2705]: E0621 05:30:21.386921 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.388595 kubelet[2705]: E0621 05:30:21.388563 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.389026 kubelet[2705]: W0621 05:30:21.389005 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.389114 kubelet[2705]: E0621 05:30:21.389103 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.390303 kubelet[2705]: E0621 05:30:21.390286 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.390777 kubelet[2705]: W0621 05:30:21.390762 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.390995 kubelet[2705]: E0621 05:30:21.390978 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.392105 kubelet[2705]: E0621 05:30:21.391437 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.392280 kubelet[2705]: W0621 05:30:21.392259 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.392373 kubelet[2705]: E0621 05:30:21.392361 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.392951 kubelet[2705]: E0621 05:30:21.392903 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.393146 kubelet[2705]: W0621 05:30:21.393115 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.393276 kubelet[2705]: E0621 05:30:21.393199 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.395562 kubelet[2705]: E0621 05:30:21.395485 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.395562 kubelet[2705]: W0621 05:30:21.395504 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.395562 kubelet[2705]: E0621 05:30:21.395521 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.395949 kubelet[2705]: E0621 05:30:21.395929 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.396107 kubelet[2705]: W0621 05:30:21.396039 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.396107 kubelet[2705]: E0621 05:30:21.396064 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.396430 kubelet[2705]: E0621 05:30:21.396394 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.396430 kubelet[2705]: W0621 05:30:21.396405 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.396430 kubelet[2705]: E0621 05:30:21.396416 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.396821 kubelet[2705]: E0621 05:30:21.396797 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.396821 kubelet[2705]: W0621 05:30:21.396808 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.396975 kubelet[2705]: E0621 05:30:21.396912 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.397263 kubelet[2705]: E0621 05:30:21.397186 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.397263 kubelet[2705]: W0621 05:30:21.397197 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.397263 kubelet[2705]: E0621 05:30:21.397207 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.397587 kubelet[2705]: E0621 05:30:21.397518 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.397587 kubelet[2705]: W0621 05:30:21.397528 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.397587 kubelet[2705]: E0621 05:30:21.397538 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.397949 kubelet[2705]: E0621 05:30:21.397927 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.398088 kubelet[2705]: W0621 05:30:21.398022 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.398088 kubelet[2705]: E0621 05:30:21.398038 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.398585 kubelet[2705]: E0621 05:30:21.398441 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.398585 kubelet[2705]: W0621 05:30:21.398454 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.398585 kubelet[2705]: E0621 05:30:21.398464 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.398807 kubelet[2705]: E0621 05:30:21.398734 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.398807 kubelet[2705]: W0621 05:30:21.398748 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.398807 kubelet[2705]: E0621 05:30:21.398761 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.399046 kubelet[2705]: E0621 05:30:21.398962 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.399046 kubelet[2705]: W0621 05:30:21.398975 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.399046 kubelet[2705]: E0621 05:30:21.398985 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.401554 kubelet[2705]: E0621 05:30:21.401497 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.401554 kubelet[2705]: W0621 05:30:21.401531 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.401718 kubelet[2705]: E0621 05:30:21.401563 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.403231 kubelet[2705]: E0621 05:30:21.403207 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.403231 kubelet[2705]: W0621 05:30:21.403226 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.403722 kubelet[2705]: E0621 05:30:21.403244 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.403722 kubelet[2705]: E0621 05:30:21.403502 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.403722 kubelet[2705]: W0621 05:30:21.403516 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.403722 kubelet[2705]: E0621 05:30:21.403531 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.404051 kubelet[2705]: E0621 05:30:21.403997 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.404051 kubelet[2705]: W0621 05:30:21.404021 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.404051 kubelet[2705]: E0621 05:30:21.404040 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.404320 kubelet[2705]: E0621 05:30:21.404296 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.404320 kubelet[2705]: W0621 05:30:21.404306 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.404386 kubelet[2705]: E0621 05:30:21.404321 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.404652 kubelet[2705]: E0621 05:30:21.404632 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.404652 kubelet[2705]: W0621 05:30:21.404651 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.405258 kubelet[2705]: E0621 05:30:21.404661 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.405258 kubelet[2705]: E0621 05:30:21.405256 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.405338 kubelet[2705]: W0621 05:30:21.405266 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.405338 kubelet[2705]: E0621 05:30:21.405276 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.405512 kubelet[2705]: E0621 05:30:21.405498 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.405556 kubelet[2705]: W0621 05:30:21.405512 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.405556 kubelet[2705]: E0621 05:30:21.405526 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:21.405832 kubelet[2705]: E0621 05:30:21.405818 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.405832 kubelet[2705]: W0621 05:30:21.405830 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.407171 kubelet[2705]: E0621 05:30:21.405843 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.418960 kubelet[2705]: E0621 05:30:21.418914 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:21.419538 kubelet[2705]: W0621 05:30:21.419057 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:21.419538 kubelet[2705]: E0621 05:30:21.419094 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:21.477998 containerd[1542]: time="2025-06-21T05:30:21.477828757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-87snn,Uid:cfbbdde8-f41e-42a4-9e5e-1c7fb8721f7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c566d750e82019d61172dbf067450192e7eb6b74342998662b6c58bb1bfe400\"" Jun 21 05:30:22.562554 kubelet[2705]: E0621 05:30:22.562442 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvskp" podUID="0d57dcbc-26a6-4a6a-877e-2663d2596744" Jun 21 05:30:22.927618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274055991.mount: Deactivated successfully. 
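Annotation: the long run of driver-call.go / plugins.go entries above is the kubelet probing the FlexVolume plugin directory. The nodeagent~uds driver directory exists, but its uds executable is not on the host yet, so the init call captures no output and decoding an empty string as JSON fails. A minimal sketch of that failure mode using only Go's standard library (the type below is illustrative, not the kubelet's internal type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus approximates the shape of a FlexVolume driver reply; the exact
// field set here is illustrative only.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	var st driverStatus
	// The uds executable was never run, so the captured output is empty.
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // prints: unexpected end of JSON input
}
```

Running this prints exactly the error string that recurs in the entries above; the warning about "executable file not found in $PATH" is the underlying cause, and the JSON error is its downstream symptom.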
Jun 21 05:30:23.973013 containerd[1542]: time="2025-06-21T05:30:23.972951856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:23.974624 containerd[1542]: time="2025-06-21T05:30:23.974361610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888" Jun 21 05:30:23.975530 containerd[1542]: time="2025-06-21T05:30:23.975479392Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:23.978089 containerd[1542]: time="2025-06-21T05:30:23.978018668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:23.978909 containerd[1542]: time="2025-06-21T05:30:23.978865064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 2.771867103s" Jun 21 05:30:23.979087 containerd[1542]: time="2025-06-21T05:30:23.979063954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 21 05:30:23.981175 containerd[1542]: time="2025-06-21T05:30:23.980997063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 21 05:30:24.013416 containerd[1542]: time="2025-06-21T05:30:24.013331555Z" level=info msg="CreateContainer within sandbox \"3f7d463a9739c1a431aca9510435c984bd23598b3dfec6e1ff0be783377a412f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 21 05:30:24.022646 containerd[1542]: time="2025-06-21T05:30:24.022478311Z" level=info msg="Container 1f714fccc8360e2dc2cc95c53bb3f8fa5eae5b30b3c07a12d4a88ab6ca7c7574: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:24.060147 containerd[1542]: time="2025-06-21T05:30:24.060060396Z" level=info msg="CreateContainer within sandbox \"3f7d463a9739c1a431aca9510435c984bd23598b3dfec6e1ff0be783377a412f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1f714fccc8360e2dc2cc95c53bb3f8fa5eae5b30b3c07a12d4a88ab6ca7c7574\"" Jun 21 05:30:24.062388 containerd[1542]: time="2025-06-21T05:30:24.061362558Z" level=info msg="StartContainer for \"1f714fccc8360e2dc2cc95c53bb3f8fa5eae5b30b3c07a12d4a88ab6ca7c7574\"" Jun 21 05:30:24.063704 containerd[1542]: time="2025-06-21T05:30:24.063665884Z" level=info msg="connecting to shim 1f714fccc8360e2dc2cc95c53bb3f8fa5eae5b30b3c07a12d4a88ab6ca7c7574" address="unix:///run/containerd/s/589f3e1c88a41bb7e4edac299a059fe422d09ca244efd5dd1f928ff4a9cec9c1" protocol=ttrpc version=3 Jun 21 05:30:24.098553 systemd[1]: Started cri-containerd-1f714fccc8360e2dc2cc95c53bb3f8fa5eae5b30b3c07a12d4a88ab6ca7c7574.scope - libcontainer container 1f714fccc8360e2dc2cc95c53bb3f8fa5eae5b30b3c07a12d4a88ab6ca7c7574. 
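Annotation: the containerd entries in this stretch (an image pull of ghcr.io/flatcar/calico/typha:v3.30.1, a CreateContainer/StartContainer pair, a shim connection over a unix socket in the k8s.io namespace, and a cri-containerd-*.scope unit) are one pull-and-start cycle as seen from the runtime. Roughly the same pull step can be driven directly with the containerd Go client; a sketch under the assumption of the default socket path, with the image reference and namespace taken from the log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the containerd socket used by the CRI on this host.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images and containers live in the "k8s.io" namespace,
	// matching namespace=k8s.io in the shim log line above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", image.Name())
}
```

In the log the kubelet drives this through the CRI rather than the client library; the sketch only shows the equivalent standalone operation.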
Jun 21 05:30:24.181755 containerd[1542]: time="2025-06-21T05:30:24.181665876Z" level=info msg="StartContainer for \"1f714fccc8360e2dc2cc95c53bb3f8fa5eae5b30b3c07a12d4a88ab6ca7c7574\" returns successfully" Jun 21 05:30:24.562605 kubelet[2705]: E0621 05:30:24.562534 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvskp" podUID="0d57dcbc-26a6-4a6a-877e-2663d2596744" Jun 21 05:30:24.721215 kubelet[2705]: E0621 05:30:24.720595 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:24.742041 kubelet[2705]: I0621 05:30:24.741927 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74d8ccc645-jnlqd" podStartSLOduration=1.967565804 podStartE2EDuration="4.741793526s" podCreationTimestamp="2025-06-21 05:30:20 +0000 UTC" firstStartedPulling="2025-06-21 05:30:21.206357021 +0000 UTC m=+19.938154823" lastFinishedPulling="2025-06-21 05:30:23.980584731 +0000 UTC m=+22.712382545" observedRunningTime="2025-06-21 05:30:24.740218739 +0000 UTC m=+23.472016563" watchObservedRunningTime="2025-06-21 05:30:24.741793526 +0000 UTC m=+23.473591352" Jun 21 05:30:24.787971 kubelet[2705]: E0621 05:30:24.787918 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.788342 kubelet[2705]: W0621 05:30:24.787951 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.788342 kubelet[2705]: E0621 05:30:24.788262 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.788896 kubelet[2705]: E0621 05:30:24.788831 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.788896 kubelet[2705]: W0621 05:30:24.788848 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.788896 kubelet[2705]: E0621 05:30:24.788864 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.789478 kubelet[2705]: E0621 05:30:24.789364 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.789478 kubelet[2705]: W0621 05:30:24.789406 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.789478 kubelet[2705]: E0621 05:30:24.789423 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:24.790326 kubelet[2705]: E0621 05:30:24.790173 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.790326 kubelet[2705]: W0621 05:30:24.790189 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.790326 kubelet[2705]: E0621 05:30:24.790205 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.790816 kubelet[2705]: E0621 05:30:24.790802 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.790905 kubelet[2705]: W0621 05:30:24.790891 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.791058 kubelet[2705]: E0621 05:30:24.790953 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.791356 kubelet[2705]: E0621 05:30:24.791344 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.791591 kubelet[2705]: W0621 05:30:24.791447 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.791591 kubelet[2705]: E0621 05:30:24.791470 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.791823 kubelet[2705]: E0621 05:30:24.791749 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.791823 kubelet[2705]: W0621 05:30:24.791760 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.791823 kubelet[2705]: E0621 05:30:24.791771 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.792252 kubelet[2705]: E0621 05:30:24.792169 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.792252 kubelet[2705]: W0621 05:30:24.792182 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.792252 kubelet[2705]: E0621 05:30:24.792193 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:24.792659 kubelet[2705]: E0621 05:30:24.792599 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.792659 kubelet[2705]: W0621 05:30:24.792611 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.792659 kubelet[2705]: E0621 05:30:24.792622 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.792999 kubelet[2705]: E0621 05:30:24.792984 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.793191 kubelet[2705]: W0621 05:30:24.793083 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.793191 kubelet[2705]: E0621 05:30:24.793104 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.793536 kubelet[2705]: E0621 05:30:24.793482 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.793806 kubelet[2705]: W0621 05:30:24.793651 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.793806 kubelet[2705]: E0621 05:30:24.793678 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.793984 kubelet[2705]: E0621 05:30:24.793973 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.794043 kubelet[2705]: W0621 05:30:24.794030 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.794147 kubelet[2705]: E0621 05:30:24.794106 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.794649 kubelet[2705]: E0621 05:30:24.794536 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.794649 kubelet[2705]: W0621 05:30:24.794570 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.794649 kubelet[2705]: E0621 05:30:24.794586 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:24.795160 kubelet[2705]: E0621 05:30:24.795079 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.795160 kubelet[2705]: W0621 05:30:24.795099 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.795318 kubelet[2705]: E0621 05:30:24.795259 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.795635 kubelet[2705]: E0621 05:30:24.795621 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.795796 kubelet[2705]: W0621 05:30:24.795711 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.795796 kubelet[2705]: E0621 05:30:24.795727 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.835544 kubelet[2705]: E0621 05:30:24.834859 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.835544 kubelet[2705]: W0621 05:30:24.834886 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.835544 kubelet[2705]: E0621 05:30:24.834935 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.837803 kubelet[2705]: E0621 05:30:24.835727 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.837803 kubelet[2705]: W0621 05:30:24.835751 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.837803 kubelet[2705]: E0621 05:30:24.835774 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.837803 kubelet[2705]: E0621 05:30:24.836803 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.837803 kubelet[2705]: W0621 05:30:24.836822 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.837803 kubelet[2705]: E0621 05:30:24.836841 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:24.838998 kubelet[2705]: E0621 05:30:24.838391 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.838998 kubelet[2705]: W0621 05:30:24.838416 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.838998 kubelet[2705]: E0621 05:30:24.838437 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.841375 kubelet[2705]: E0621 05:30:24.841243 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.841375 kubelet[2705]: W0621 05:30:24.841270 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.841375 kubelet[2705]: E0621 05:30:24.841296 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.841852 kubelet[2705]: E0621 05:30:24.841570 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.841852 kubelet[2705]: W0621 05:30:24.841588 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.841852 kubelet[2705]: E0621 05:30:24.841605 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.842582 kubelet[2705]: E0621 05:30:24.842402 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.842582 kubelet[2705]: W0621 05:30:24.842422 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.842582 kubelet[2705]: E0621 05:30:24.842441 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.843160 kubelet[2705]: E0621 05:30:24.843136 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.843241 kubelet[2705]: W0621 05:30:24.843177 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.843241 kubelet[2705]: E0621 05:30:24.843197 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:24.843420 kubelet[2705]: E0621 05:30:24.843406 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.843420 kubelet[2705]: W0621 05:30:24.843419 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.843478 kubelet[2705]: E0621 05:30:24.843436 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.844468 kubelet[2705]: E0621 05:30:24.844421 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.844468 kubelet[2705]: W0621 05:30:24.844437 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.844468 kubelet[2705]: E0621 05:30:24.844452 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.845210 kubelet[2705]: E0621 05:30:24.845049 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.845210 kubelet[2705]: W0621 05:30:24.845065 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.845210 kubelet[2705]: E0621 05:30:24.845078 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.845409 kubelet[2705]: E0621 05:30:24.845398 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.845470 kubelet[2705]: W0621 05:30:24.845460 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.845517 kubelet[2705]: E0621 05:30:24.845509 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.845943 kubelet[2705]: E0621 05:30:24.845909 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.845943 kubelet[2705]: W0621 05:30:24.845921 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.845943 kubelet[2705]: E0621 05:30:24.845931 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:24.846681 kubelet[2705]: E0621 05:30:24.846562 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.846681 kubelet[2705]: W0621 05:30:24.846581 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.846846 kubelet[2705]: E0621 05:30:24.846596 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.847329 kubelet[2705]: E0621 05:30:24.847312 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.847530 kubelet[2705]: W0621 05:30:24.847422 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.847530 kubelet[2705]: E0621 05:30:24.847443 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.847985 kubelet[2705]: E0621 05:30:24.847949 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.847985 kubelet[2705]: W0621 05:30:24.847962 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.847985 kubelet[2705]: E0621 05:30:24.847972 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.848365 kubelet[2705]: E0621 05:30:24.848352 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.848961 kubelet[2705]: W0621 05:30:24.848433 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.848961 kubelet[2705]: E0621 05:30:24.848447 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:30:24.849327 kubelet[2705]: E0621 05:30:24.849314 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:30:24.849397 kubelet[2705]: W0621 05:30:24.849388 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:30:24.849448 kubelet[2705]: E0621 05:30:24.849439 2705 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:30:25.497492 containerd[1542]: time="2025-06-21T05:30:25.497431238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:25.498537 containerd[1542]: time="2025-06-21T05:30:25.498203317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627" Jun 21 05:30:25.499397 containerd[1542]: time="2025-06-21T05:30:25.499357300Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:25.502345 containerd[1542]: time="2025-06-21T05:30:25.502258391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:25.503321 containerd[1542]: time="2025-06-21T05:30:25.503200707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 1.522153453s" Jun 21 05:30:25.503321 containerd[1542]: time="2025-06-21T05:30:25.503237370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 21 05:30:25.509144 containerd[1542]: time="2025-06-21T05:30:25.508971083Z" level=info msg="CreateContainer within sandbox \"9c566d750e82019d61172dbf067450192e7eb6b74342998662b6c58bb1bfe400\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 21 05:30:25.520163 containerd[1542]: time="2025-06-21T05:30:25.518378266Z" level=info msg="Container f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:25.528638 containerd[1542]: time="2025-06-21T05:30:25.528546884Z" level=info msg="CreateContainer within sandbox \"9c566d750e82019d61172dbf067450192e7eb6b74342998662b6c58bb1bfe400\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4\"" Jun 21 05:30:25.532304 containerd[1542]: time="2025-06-21T05:30:25.529637420Z" level=info msg="StartContainer for \"f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4\"" Jun 21 05:30:25.532772 containerd[1542]: time="2025-06-21T05:30:25.532729177Z" level=info msg="connecting to shim f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4" address="unix:///run/containerd/s/2f2134832e23459199078d137491340fdc1161371a4e65c932c2b5aae4cf8482" protocol=ttrpc version=3 Jun 21 05:30:25.568951 systemd[1]: Started cri-containerd-f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4.scope - libcontainer container f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4. 
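Annotation: the flexvol-driver container created in this sandbox is, in Calico's layout, the piece that installs the uds binary into the nodeagent~uds plugin directory the kubelet has been probing, which is what eventually quiets the errors above. For orientation only, a hypothetical stand-in for such a driver: a FlexVolume executable is expected to answer each call with a JSON status object on stdout, e.g. for init:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// initResult approximates the JSON a FlexVolume driver prints for "init";
// "attach": false tells the kubelet this driver needs no attach/detach phase.
type initResult struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(initResult{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Calls this sketch does not implement are reported as unsupported.
	fmt.Println(`{"status": "Not supported"}`)
	os.Exit(1)
}
```

The driver actually shipped by the pod2daemon-flexvol image is a different program; this stub only illustrates the reply format whose absence produced the "unexpected end of JSON input" entries.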
Jun 21 05:30:25.622649 containerd[1542]: time="2025-06-21T05:30:25.622604693Z" level=info msg="StartContainer for \"f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4\" returns successfully" Jun 21 05:30:25.639116 systemd[1]: cri-containerd-f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4.scope: Deactivated successfully. Jun 21 05:30:25.700005 containerd[1542]: time="2025-06-21T05:30:25.699770153Z" level=info msg="received exit event container_id:\"f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4\" id:\"f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4\" pid:3410 exited_at:{seconds:1750483825 nanos:644262030}" Jun 21 05:30:25.703597 containerd[1542]: time="2025-06-21T05:30:25.703550997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4\" id:\"f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4\" pid:3410 exited_at:{seconds:1750483825 nanos:644262030}" Jun 21 05:30:25.727431 kubelet[2705]: I0621 05:30:25.727397 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:30:25.727884 kubelet[2705]: E0621 05:30:25.727761 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:25.746921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f51eb54663424028dce0731110def062e6721e35c1aa41f5d309eee19b45dfc4-rootfs.mount: Deactivated successfully. Jun 21 05:30:26.562696 kubelet[2705]: E0621 05:30:26.562624 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvskp" podUID="0d57dcbc-26a6-4a6a-877e-2663d2596744" Jun 21 05:30:26.735548 containerd[1542]: time="2025-06-21T05:30:26.734360875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 21 05:30:28.563298 kubelet[2705]: E0621 05:30:28.563220 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvskp" podUID="0d57dcbc-26a6-4a6a-877e-2663d2596744" Jun 21 05:30:30.563012 kubelet[2705]: E0621 05:30:30.562951 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qvskp" podUID="0d57dcbc-26a6-4a6a-877e-2663d2596744" Jun 21 05:30:30.812390 containerd[1542]: time="2025-06-21T05:30:30.812332224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:30.814945 containerd[1542]: time="2025-06-21T05:30:30.814506912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 21 05:30:30.816459 containerd[1542]: time="2025-06-21T05:30:30.816320033Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
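Annotation: the "received exit event" / "TaskExit event in podsandbox handler" pair records containerd publishing a task-exit event (pid 3410, with an exited_at timestamp) that the CRI plugin then handles, after which systemd marks the scope deactivated and unmounts the rootfs. The same event stream can be watched out of band with the containerd client; a small sketch, assuming the default socket and filtering client-side rather than with a server-side filter expression:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Subscribe to all events and pick out task exits, the topic behind the
	// TaskExit entries in the log.
	envelopes, errs := client.Subscribe(ctx)
	for {
		select {
		case env, ok := <-envelopes:
			if !ok {
				return
			}
			if env.Topic == "/tasks/exit" {
				fmt.Printf("%s %s %s\n", env.Timestamp.Format("15:04:05.000"), env.Namespace, env.Topic)
			}
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```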
Jun 21 05:30:30.819433 containerd[1542]: time="2025-06-21T05:30:30.819365794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:30.820478 containerd[1542]: time="2025-06-21T05:30:30.820437731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 4.084956783s" Jun 21 05:30:30.820681 containerd[1542]: time="2025-06-21T05:30:30.820655232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 21 05:30:30.828853 containerd[1542]: time="2025-06-21T05:30:30.828787736Z" level=info msg="CreateContainer within sandbox \"9c566d750e82019d61172dbf067450192e7eb6b74342998662b6c58bb1bfe400\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 21 05:30:30.866097 containerd[1542]: time="2025-06-21T05:30:30.864279511Z" level=info msg="Container db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:30.885641 containerd[1542]: time="2025-06-21T05:30:30.885492594Z" level=info msg="CreateContainer within sandbox \"9c566d750e82019d61172dbf067450192e7eb6b74342998662b6c58bb1bfe400\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78\"" Jun 21 05:30:30.887305 containerd[1542]: time="2025-06-21T05:30:30.887059027Z" level=info msg="StartContainer for \"db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78\"" Jun 21 05:30:30.888754 containerd[1542]: time="2025-06-21T05:30:30.888719590Z" level=info msg="connecting to shim db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78" address="unix:///run/containerd/s/2f2134832e23459199078d137491340fdc1161371a4e65c932c2b5aae4cf8482" protocol=ttrpc version=3 Jun 21 05:30:30.920395 systemd[1]: Started cri-containerd-db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78.scope - libcontainer container db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78. Jun 21 05:30:30.998002 containerd[1542]: time="2025-06-21T05:30:30.997330089Z" level=info msg="StartContainer for \"db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78\" returns successfully" Jun 21 05:30:31.625310 systemd[1]: cri-containerd-db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78.scope: Deactivated successfully. Jun 21 05:30:31.625687 systemd[1]: cri-containerd-db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78.scope: Consumed 637ms CPU time, 165.8M memory peak, 12.2M read from disk, 171.2M written to disk. 
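Taking the numbers from the pull messages above (bytes read=70405879, completed in 4.084956783s), a quick back-of-the-envelope throughput check:

    # Rough throughput for the calico/cni pull, using the figures logged above.
    bytes_read = 70405879         # "bytes read" reported by containerd
    elapsed_s  = 4.084956783      # duration from the "Pulled image ... in" message
    print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")  # ~17.2 MB/s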
Jun 21 05:30:31.660250 containerd[1542]: time="2025-06-21T05:30:31.659629070Z" level=info msg="received exit event container_id:\"db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78\" id:\"db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78\" pid:3469 exited_at:{seconds:1750483831 nanos:636667751}" Jun 21 05:30:31.660250 containerd[1542]: time="2025-06-21T05:30:31.659878561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78\" id:\"db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78\" pid:3469 exited_at:{seconds:1750483831 nanos:636667751}" Jun 21 05:30:31.706898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db1a01c26b9f1202891508e11ee22453afa998121811264e72ca1bd6323d0f78-rootfs.mount: Deactivated successfully. Jun 21 05:30:31.713271 kubelet[2705]: I0621 05:30:31.713177 2705 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 21 05:30:31.832569 containerd[1542]: time="2025-06-21T05:30:31.831694522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 21 05:30:31.833155 systemd[1]: Created slice kubepods-burstable-podbc707386_a573_4931_82ab_55786356b796.slice - libcontainer container kubepods-burstable-podbc707386_a573_4931_82ab_55786356b796.slice. Jun 21 05:30:31.841144 kubelet[2705]: I0621 05:30:31.841077 2705 status_manager.go:895] "Failed to get status for pod" podUID="bc707386-a573-4931-82ab-55786356b796" pod="kube-system/coredns-674b8bbfcf-gqv9d" err="pods \"coredns-674b8bbfcf-gqv9d\" is forbidden: User \"system:node:ci-4372.0.0-0-a0fa6d352b\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372.0.0-0-a0fa6d352b' and this object" Jun 21 05:30:31.860240 kubelet[2705]: E0621 05:30:31.859873 2705 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4372.0.0-0-a0fa6d352b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372.0.0-0-a0fa6d352b' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Jun 21 05:30:31.886316 systemd[1]: Created slice kubepods-besteffort-pod0ae29d9a_bf5d_4742_896a_2a2ead377607.slice - libcontainer container kubepods-besteffort-pod0ae29d9a_bf5d_4742_896a_2a2ead377607.slice. 
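The slice names being created above follow the kubelet's systemd cgroup naming: the QoS class plus the pod UID with its dashes turned into underscores. A small sketch that reproduces the two names seen in the log (the pattern is inferred from these lines, not taken from kubelet source):

    # Reproduce the kubepods slice names created above from QoS class + pod UID.
    def kubepods_slice(qos_class: str, pod_uid: str) -> str:
        # Dashes in the pod UID become underscores in the systemd unit name.
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    print(kubepods_slice("burstable", "bc707386-a573-4931-82ab-55786356b796"))
    # kubepods-burstable-podbc707386_a573_4931_82ab_55786356b796.slice
    print(kubepods_slice("besteffort", "0ae29d9a-bf5d-4742-896a-2a2ead377607"))
    # kubepods-besteffort-pod0ae29d9a_bf5d_4742_896a_2a2ead377607.slice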
Jun 21 05:30:31.905240 kubelet[2705]: I0621 05:30:31.904105 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48b4f5a0-06d2-4b85-854f-4c514806e6d7-config-volume\") pod \"coredns-674b8bbfcf-xpcbg\" (UID: \"48b4f5a0-06d2-4b85-854f-4c514806e6d7\") " pod="kube-system/coredns-674b8bbfcf-xpcbg" Jun 21 05:30:31.905911 kubelet[2705]: I0621 05:30:31.905851 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ae29d9a-bf5d-4742-896a-2a2ead377607-tigera-ca-bundle\") pod \"calico-kube-controllers-84cc887c7f-r77kt\" (UID: \"0ae29d9a-bf5d-4742-896a-2a2ead377607\") " pod="calico-system/calico-kube-controllers-84cc887c7f-r77kt" Jun 21 05:30:31.906895 kubelet[2705]: I0621 05:30:31.906861 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcxts\" (UniqueName: \"kubernetes.io/projected/0ae29d9a-bf5d-4742-896a-2a2ead377607-kube-api-access-vcxts\") pod \"calico-kube-controllers-84cc887c7f-r77kt\" (UID: \"0ae29d9a-bf5d-4742-896a-2a2ead377607\") " pod="calico-system/calico-kube-controllers-84cc887c7f-r77kt" Jun 21 05:30:31.908144 kubelet[2705]: I0621 05:30:31.908076 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnxqt\" (UniqueName: \"kubernetes.io/projected/bc707386-a573-4931-82ab-55786356b796-kube-api-access-hnxqt\") pod \"coredns-674b8bbfcf-gqv9d\" (UID: \"bc707386-a573-4931-82ab-55786356b796\") " pod="kube-system/coredns-674b8bbfcf-gqv9d" Jun 21 05:30:31.909188 kubelet[2705]: I0621 05:30:31.908433 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjzzt\" (UniqueName: \"kubernetes.io/projected/48b4f5a0-06d2-4b85-854f-4c514806e6d7-kube-api-access-kjzzt\") pod \"coredns-674b8bbfcf-xpcbg\" (UID: \"48b4f5a0-06d2-4b85-854f-4c514806e6d7\") " pod="kube-system/coredns-674b8bbfcf-xpcbg" Jun 21 05:30:31.909188 kubelet[2705]: I0621 05:30:31.908473 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc707386-a573-4931-82ab-55786356b796-config-volume\") pod \"coredns-674b8bbfcf-gqv9d\" (UID: \"bc707386-a573-4931-82ab-55786356b796\") " pod="kube-system/coredns-674b8bbfcf-gqv9d" Jun 21 05:30:31.926919 systemd[1]: Created slice kubepods-burstable-pod48b4f5a0_06d2_4b85_854f_4c514806e6d7.slice - libcontainer container kubepods-burstable-pod48b4f5a0_06d2_4b85_854f_4c514806e6d7.slice. Jun 21 05:30:31.976869 systemd[1]: Created slice kubepods-besteffort-pod181e331f_ec96_4597_85c5_2475203d63be.slice - libcontainer container kubepods-besteffort-pod181e331f_ec96_4597_85c5_2475203d63be.slice. Jun 21 05:30:31.989535 systemd[1]: Created slice kubepods-besteffort-pod684e44d5_7d16_4ab2_86ae_3cb7892ca253.slice - libcontainer container kubepods-besteffort-pod684e44d5_7d16_4ab2_86ae_3cb7892ca253.slice. Jun 21 05:30:31.998087 systemd[1]: Created slice kubepods-besteffort-pod7b16b6e7_bce4_413f_ad33_f27b0fa03961.slice - libcontainer container kubepods-besteffort-pod7b16b6e7_bce4_413f_ad33_f27b0fa03961.slice. 
Jun 21 05:30:32.009176 kubelet[2705]: I0621 05:30:32.009019 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b16b6e7-bce4-413f-ad33-f27b0fa03961-calico-apiserver-certs\") pod \"calico-apiserver-7d864bcf8d-25t7g\" (UID: \"7b16b6e7-bce4-413f-ad33-f27b0fa03961\") " pod="calico-apiserver/calico-apiserver-7d864bcf8d-25t7g" Jun 21 05:30:32.009176 kubelet[2705]: I0621 05:30:32.009071 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/684e44d5-7d16-4ab2-86ae-3cb7892ca253-goldmane-key-pair\") pod \"goldmane-5bd85449d4-5jcrs\" (UID: \"684e44d5-7d16-4ab2-86ae-3cb7892ca253\") " pod="calico-system/goldmane-5bd85449d4-5jcrs" Jun 21 05:30:32.009176 kubelet[2705]: I0621 05:30:32.009094 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181e331f-ec96-4597-85c5-2475203d63be-whisker-ca-bundle\") pod \"whisker-7978c464b-gmm6h\" (UID: \"181e331f-ec96-4597-85c5-2475203d63be\") " pod="calico-system/whisker-7978c464b-gmm6h" Jun 21 05:30:32.014075 systemd[1]: Created slice kubepods-besteffort-pod5c75a7c9_289a_4fec_aaea_f545ac34e00f.slice - libcontainer container kubepods-besteffort-pod5c75a7c9_289a_4fec_aaea_f545ac34e00f.slice. Jun 21 05:30:32.014321 kubelet[2705]: I0621 05:30:32.014187 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbblv\" (UniqueName: \"kubernetes.io/projected/181e331f-ec96-4597-85c5-2475203d63be-kube-api-access-zbblv\") pod \"whisker-7978c464b-gmm6h\" (UID: \"181e331f-ec96-4597-85c5-2475203d63be\") " pod="calico-system/whisker-7978c464b-gmm6h" Jun 21 05:30:32.014437 kubelet[2705]: I0621 05:30:32.014324 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684e44d5-7d16-4ab2-86ae-3cb7892ca253-config\") pod \"goldmane-5bd85449d4-5jcrs\" (UID: \"684e44d5-7d16-4ab2-86ae-3cb7892ca253\") " pod="calico-system/goldmane-5bd85449d4-5jcrs" Jun 21 05:30:32.014437 kubelet[2705]: I0621 05:30:32.014406 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpfkm\" (UniqueName: \"kubernetes.io/projected/5c75a7c9-289a-4fec-aaea-f545ac34e00f-kube-api-access-zpfkm\") pod \"calico-apiserver-7d864bcf8d-rsvrn\" (UID: \"5c75a7c9-289a-4fec-aaea-f545ac34e00f\") " pod="calico-apiserver/calico-apiserver-7d864bcf8d-rsvrn" Jun 21 05:30:32.014575 kubelet[2705]: I0621 05:30:32.014514 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mbjj\" (UniqueName: \"kubernetes.io/projected/7b16b6e7-bce4-413f-ad33-f27b0fa03961-kube-api-access-5mbjj\") pod \"calico-apiserver-7d864bcf8d-25t7g\" (UID: \"7b16b6e7-bce4-413f-ad33-f27b0fa03961\") " pod="calico-apiserver/calico-apiserver-7d864bcf8d-25t7g" Jun 21 05:30:32.014575 kubelet[2705]: I0621 05:30:32.014542 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/684e44d5-7d16-4ab2-86ae-3cb7892ca253-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-5jcrs\" (UID: \"684e44d5-7d16-4ab2-86ae-3cb7892ca253\") " pod="calico-system/goldmane-5bd85449d4-5jcrs" Jun 21 
05:30:32.014575 kubelet[2705]: I0621 05:30:32.014567 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppzpn\" (UniqueName: \"kubernetes.io/projected/684e44d5-7d16-4ab2-86ae-3cb7892ca253-kube-api-access-ppzpn\") pod \"goldmane-5bd85449d4-5jcrs\" (UID: \"684e44d5-7d16-4ab2-86ae-3cb7892ca253\") " pod="calico-system/goldmane-5bd85449d4-5jcrs" Jun 21 05:30:32.014700 kubelet[2705]: I0621 05:30:32.014593 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c75a7c9-289a-4fec-aaea-f545ac34e00f-calico-apiserver-certs\") pod \"calico-apiserver-7d864bcf8d-rsvrn\" (UID: \"5c75a7c9-289a-4fec-aaea-f545ac34e00f\") " pod="calico-apiserver/calico-apiserver-7d864bcf8d-rsvrn" Jun 21 05:30:32.014700 kubelet[2705]: I0621 05:30:32.014624 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/181e331f-ec96-4597-85c5-2475203d63be-whisker-backend-key-pair\") pod \"whisker-7978c464b-gmm6h\" (UID: \"181e331f-ec96-4597-85c5-2475203d63be\") " pod="calico-system/whisker-7978c464b-gmm6h" Jun 21 05:30:32.223734 containerd[1542]: time="2025-06-21T05:30:32.223681734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84cc887c7f-r77kt,Uid:0ae29d9a-bf5d-4742-896a-2a2ead377607,Namespace:calico-system,Attempt:0,}" Jun 21 05:30:32.285153 containerd[1542]: time="2025-06-21T05:30:32.283979869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7978c464b-gmm6h,Uid:181e331f-ec96-4597-85c5-2475203d63be,Namespace:calico-system,Attempt:0,}" Jun 21 05:30:32.297636 containerd[1542]: time="2025-06-21T05:30:32.297579172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-5jcrs,Uid:684e44d5-7d16-4ab2-86ae-3cb7892ca253,Namespace:calico-system,Attempt:0,}" Jun 21 05:30:32.316694 containerd[1542]: time="2025-06-21T05:30:32.316630405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d864bcf8d-25t7g,Uid:7b16b6e7-bce4-413f-ad33-f27b0fa03961,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:30:32.333633 containerd[1542]: time="2025-06-21T05:30:32.333582410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d864bcf8d-rsvrn,Uid:5c75a7c9-289a-4fec-aaea-f545ac34e00f,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:30:32.517746 containerd[1542]: time="2025-06-21T05:30:32.517420683Z" level=error msg="Failed to destroy network for sandbox \"8d51f16ba4ab040968c1215966f07214f237acd8c4819d5fe0b10c2600952bba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.539699 containerd[1542]: time="2025-06-21T05:30:32.519547120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-5jcrs,Uid:684e44d5-7d16-4ab2-86ae-3cb7892ca253,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d51f16ba4ab040968c1215966f07214f237acd8c4819d5fe0b10c2600952bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.540036 kubelet[2705]: E0621 05:30:32.539978 2705 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d51f16ba4ab040968c1215966f07214f237acd8c4819d5fe0b10c2600952bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.540171 kubelet[2705]: E0621 05:30:32.540079 2705 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d51f16ba4ab040968c1215966f07214f237acd8c4819d5fe0b10c2600952bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-5jcrs" Jun 21 05:30:32.540363 kubelet[2705]: E0621 05:30:32.540111 2705 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d51f16ba4ab040968c1215966f07214f237acd8c4819d5fe0b10c2600952bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-5jcrs" Jun 21 05:30:32.540561 kubelet[2705]: E0621 05:30:32.540456 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-5jcrs_calico-system(684e44d5-7d16-4ab2-86ae-3cb7892ca253)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-5jcrs_calico-system(684e44d5-7d16-4ab2-86ae-3cb7892ca253)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d51f16ba4ab040968c1215966f07214f237acd8c4819d5fe0b10c2600952bba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-5jcrs" podUID="684e44d5-7d16-4ab2-86ae-3cb7892ca253" Jun 21 05:30:32.542859 containerd[1542]: time="2025-06-21T05:30:32.542771892Z" level=error msg="Failed to destroy network for sandbox \"f44254c9d47dbecf11606872f7e8040de10fff719f77fc4719681cd089e611bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.546395 containerd[1542]: time="2025-06-21T05:30:32.545667987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d864bcf8d-rsvrn,Uid:5c75a7c9-289a-4fec-aaea-f545ac34e00f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f44254c9d47dbecf11606872f7e8040de10fff719f77fc4719681cd089e611bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.546650 kubelet[2705]: E0621 05:30:32.545959 2705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f44254c9d47dbecf11606872f7e8040de10fff719f77fc4719681cd089e611bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jun 21 05:30:32.546650 kubelet[2705]: E0621 05:30:32.546025 2705 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f44254c9d47dbecf11606872f7e8040de10fff719f77fc4719681cd089e611bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d864bcf8d-rsvrn" Jun 21 05:30:32.546650 kubelet[2705]: E0621 05:30:32.546047 2705 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f44254c9d47dbecf11606872f7e8040de10fff719f77fc4719681cd089e611bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d864bcf8d-rsvrn" Jun 21 05:30:32.546989 kubelet[2705]: E0621 05:30:32.546106 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d864bcf8d-rsvrn_calico-apiserver(5c75a7c9-289a-4fec-aaea-f545ac34e00f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d864bcf8d-rsvrn_calico-apiserver(5c75a7c9-289a-4fec-aaea-f545ac34e00f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f44254c9d47dbecf11606872f7e8040de10fff719f77fc4719681cd089e611bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d864bcf8d-rsvrn" podUID="5c75a7c9-289a-4fec-aaea-f545ac34e00f" Jun 21 05:30:32.560662 containerd[1542]: time="2025-06-21T05:30:32.560602190Z" level=error msg="Failed to destroy network for sandbox \"de9ad8a0de9f287c32096abedffbf83dc6f92579437c57002c3c47f1f2e5542b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.565945 containerd[1542]: time="2025-06-21T05:30:32.565888646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84cc887c7f-r77kt,Uid:0ae29d9a-bf5d-4742-896a-2a2ead377607,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de9ad8a0de9f287c32096abedffbf83dc6f92579437c57002c3c47f1f2e5542b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.567144 kubelet[2705]: E0621 05:30:32.566416 2705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de9ad8a0de9f287c32096abedffbf83dc6f92579437c57002c3c47f1f2e5542b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.567144 kubelet[2705]: E0621 05:30:32.566482 2705 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"de9ad8a0de9f287c32096abedffbf83dc6f92579437c57002c3c47f1f2e5542b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84cc887c7f-r77kt" Jun 21 05:30:32.567144 kubelet[2705]: E0621 05:30:32.566509 2705 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de9ad8a0de9f287c32096abedffbf83dc6f92579437c57002c3c47f1f2e5542b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84cc887c7f-r77kt" Jun 21 05:30:32.567373 kubelet[2705]: E0621 05:30:32.566575 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84cc887c7f-r77kt_calico-system(0ae29d9a-bf5d-4742-896a-2a2ead377607)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84cc887c7f-r77kt_calico-system(0ae29d9a-bf5d-4742-896a-2a2ead377607)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de9ad8a0de9f287c32096abedffbf83dc6f92579437c57002c3c47f1f2e5542b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84cc887c7f-r77kt" podUID="0ae29d9a-bf5d-4742-896a-2a2ead377607" Jun 21 05:30:32.577198 containerd[1542]: time="2025-06-21T05:30:32.576502123Z" level=error msg="Failed to destroy network for sandbox \"37c313216a14bd5b76b2c6fc0c7f7a4e40d7b191074d19a26f68865ba5917b49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.581551 systemd[1]: Created slice kubepods-besteffort-pod0d57dcbc_26a6_4a6a_877e_2663d2596744.slice - libcontainer container kubepods-besteffort-pod0d57dcbc_26a6_4a6a_877e_2663d2596744.slice. 
Jun 21 05:30:32.585003 containerd[1542]: time="2025-06-21T05:30:32.583662001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7978c464b-gmm6h,Uid:181e331f-ec96-4597-85c5-2475203d63be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37c313216a14bd5b76b2c6fc0c7f7a4e40d7b191074d19a26f68865ba5917b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.585248 kubelet[2705]: E0621 05:30:32.583968 2705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37c313216a14bd5b76b2c6fc0c7f7a4e40d7b191074d19a26f68865ba5917b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.585248 kubelet[2705]: E0621 05:30:32.584051 2705 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37c313216a14bd5b76b2c6fc0c7f7a4e40d7b191074d19a26f68865ba5917b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7978c464b-gmm6h" Jun 21 05:30:32.585248 kubelet[2705]: E0621 05:30:32.584072 2705 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37c313216a14bd5b76b2c6fc0c7f7a4e40d7b191074d19a26f68865ba5917b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7978c464b-gmm6h" Jun 21 05:30:32.585372 kubelet[2705]: E0621 05:30:32.584173 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7978c464b-gmm6h_calico-system(181e331f-ec96-4597-85c5-2475203d63be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7978c464b-gmm6h_calico-system(181e331f-ec96-4597-85c5-2475203d63be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37c313216a14bd5b76b2c6fc0c7f7a4e40d7b191074d19a26f68865ba5917b49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7978c464b-gmm6h" podUID="181e331f-ec96-4597-85c5-2475203d63be" Jun 21 05:30:32.587492 containerd[1542]: time="2025-06-21T05:30:32.587427943Z" level=error msg="Failed to destroy network for sandbox \"5e35886ed18d1ca3988fadad43ce7ffea997116117b92703ecd67fe309d78344\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.589433 containerd[1542]: time="2025-06-21T05:30:32.589388281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d864bcf8d-25t7g,Uid:7b16b6e7-bce4-413f-ad33-f27b0fa03961,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5e35886ed18d1ca3988fadad43ce7ffea997116117b92703ecd67fe309d78344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.589703 kubelet[2705]: E0621 05:30:32.589667 2705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e35886ed18d1ca3988fadad43ce7ffea997116117b92703ecd67fe309d78344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.589762 kubelet[2705]: E0621 05:30:32.589731 2705 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e35886ed18d1ca3988fadad43ce7ffea997116117b92703ecd67fe309d78344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d864bcf8d-25t7g" Jun 21 05:30:32.589793 kubelet[2705]: E0621 05:30:32.589757 2705 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e35886ed18d1ca3988fadad43ce7ffea997116117b92703ecd67fe309d78344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d864bcf8d-25t7g" Jun 21 05:30:32.589880 kubelet[2705]: E0621 05:30:32.589807 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d864bcf8d-25t7g_calico-apiserver(7b16b6e7-bce4-413f-ad33-f27b0fa03961)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d864bcf8d-25t7g_calico-apiserver(7b16b6e7-bce4-413f-ad33-f27b0fa03961)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e35886ed18d1ca3988fadad43ce7ffea997116117b92703ecd67fe309d78344\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d864bcf8d-25t7g" podUID="7b16b6e7-bce4-413f-ad33-f27b0fa03961" Jun 21 05:30:32.592090 containerd[1542]: time="2025-06-21T05:30:32.592055159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvskp,Uid:0d57dcbc-26a6-4a6a-877e-2663d2596744,Namespace:calico-system,Attempt:0,}" Jun 21 05:30:32.661829 containerd[1542]: time="2025-06-21T05:30:32.661780430Z" level=error msg="Failed to destroy network for sandbox \"396db37ef17ce7a36edf0e5996e5ebf4fadaffb45a7606a721a86b03912f3b0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.666696 containerd[1542]: time="2025-06-21T05:30:32.666609956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvskp,Uid:0d57dcbc-26a6-4a6a-877e-2663d2596744,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"396db37ef17ce7a36edf0e5996e5ebf4fadaffb45a7606a721a86b03912f3b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.667000 kubelet[2705]: E0621 05:30:32.666956 2705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396db37ef17ce7a36edf0e5996e5ebf4fadaffb45a7606a721a86b03912f3b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:32.667070 kubelet[2705]: E0621 05:30:32.667032 2705 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396db37ef17ce7a36edf0e5996e5ebf4fadaffb45a7606a721a86b03912f3b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qvskp" Jun 21 05:30:32.667070 kubelet[2705]: E0621 05:30:32.667058 2705 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396db37ef17ce7a36edf0e5996e5ebf4fadaffb45a7606a721a86b03912f3b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qvskp" Jun 21 05:30:32.667864 kubelet[2705]: E0621 05:30:32.667116 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qvskp_calico-system(0d57dcbc-26a6-4a6a-877e-2663d2596744)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qvskp_calico-system(0d57dcbc-26a6-4a6a-877e-2663d2596744)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"396db37ef17ce7a36edf0e5996e5ebf4fadaffb45a7606a721a86b03912f3b0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qvskp" podUID="0d57dcbc-26a6-4a6a-877e-2663d2596744" Jun 21 05:30:33.011152 kubelet[2705]: E0621 05:30:33.010765 2705 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 21 05:30:33.011152 kubelet[2705]: E0621 05:30:33.010989 2705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48b4f5a0-06d2-4b85-854f-4c514806e6d7-config-volume podName:48b4f5a0-06d2-4b85-854f-4c514806e6d7 nodeName:}" failed. No retries permitted until 2025-06-21 05:30:33.510964111 +0000 UTC m=+32.242761925 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/48b4f5a0-06d2-4b85-854f-4c514806e6d7-config-volume") pod "coredns-674b8bbfcf-xpcbg" (UID: "48b4f5a0-06d2-4b85-854f-4c514806e6d7") : failed to sync configmap cache: timed out waiting for the condition Jun 21 05:30:33.017004 kubelet[2705]: E0621 05:30:33.016892 2705 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 21 05:30:33.017004 kubelet[2705]: E0621 05:30:33.016983 2705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc707386-a573-4931-82ab-55786356b796-config-volume podName:bc707386-a573-4931-82ab-55786356b796 nodeName:}" failed. No retries permitted until 2025-06-21 05:30:33.516964954 +0000 UTC m=+32.248762758 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bc707386-a573-4931-82ab-55786356b796-config-volume") pod "coredns-674b8bbfcf-gqv9d" (UID: "bc707386-a573-4931-82ab-55786356b796") : failed to sync configmap cache: timed out waiting for the condition Jun 21 05:30:33.645095 kubelet[2705]: E0621 05:30:33.644742 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:33.646482 containerd[1542]: time="2025-06-21T05:30:33.646431013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gqv9d,Uid:bc707386-a573-4931-82ab-55786356b796,Namespace:kube-system,Attempt:0,}" Jun 21 05:30:33.745607 containerd[1542]: time="2025-06-21T05:30:33.745488849Z" level=error msg="Failed to destroy network for sandbox \"6604a5b61ec69e8d95320e6b24196326ce6cf653a94d1792b69029f437c601ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:33.750298 containerd[1542]: time="2025-06-21T05:30:33.750195689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gqv9d,Uid:bc707386-a573-4931-82ab-55786356b796,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6604a5b61ec69e8d95320e6b24196326ce6cf653a94d1792b69029f437c601ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:33.752344 kubelet[2705]: E0621 05:30:33.750586 2705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6604a5b61ec69e8d95320e6b24196326ce6cf653a94d1792b69029f437c601ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:33.752344 kubelet[2705]: E0621 05:30:33.750651 2705 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6604a5b61ec69e8d95320e6b24196326ce6cf653a94d1792b69029f437c601ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gqv9d" 
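The failed config-volume mounts above are gated on a retry backoff; only the initial 500ms durationBeforeRetry appears in the log, so the doubling factor and cap in this sketch are illustrative assumptions rather than logged values:

    # Sketch of a 500ms-seeded exponential backoff like the volume retry gating above.
    # Only the 500ms initial delay comes from the log; factor and cap are assumed.
    def backoff_delays(initial_s=0.5, factor=2.0, cap_s=120.0, attempts=8):
        delay = initial_s
        for _ in range(attempts):
            yield delay
            delay = min(delay * factor, cap_s)

    print([f"{d:g}s" for d in backoff_delays()])
    # ['0.5s', '1s', '2s', '4s', '8s', '16s', '32s', '64s']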
Jun 21 05:30:33.752344 kubelet[2705]: E0621 05:30:33.750675 2705 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6604a5b61ec69e8d95320e6b24196326ce6cf653a94d1792b69029f437c601ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gqv9d" Jun 21 05:30:33.750681 systemd[1]: run-netns-cni\x2ddd006897\x2d14ec\x2d6336\x2d5428\x2d46c54229d8ec.mount: Deactivated successfully. Jun 21 05:30:33.752892 kubelet[2705]: E0621 05:30:33.752643 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gqv9d_kube-system(bc707386-a573-4931-82ab-55786356b796)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gqv9d_kube-system(bc707386-a573-4931-82ab-55786356b796)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6604a5b61ec69e8d95320e6b24196326ce6cf653a94d1792b69029f437c601ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gqv9d" podUID="bc707386-a573-4931-82ab-55786356b796" Jun 21 05:30:33.769300 kubelet[2705]: E0621 05:30:33.769090 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:33.773631 containerd[1542]: time="2025-06-21T05:30:33.773574559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xpcbg,Uid:48b4f5a0-06d2-4b85-854f-4c514806e6d7,Namespace:kube-system,Attempt:0,}" Jun 21 05:30:33.894590 containerd[1542]: time="2025-06-21T05:30:33.894509871Z" level=error msg="Failed to destroy network for sandbox \"26dc063033aadfb0946e6a036a7baca8ab9df60c611a2aa2389aee008765cf98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:33.899436 containerd[1542]: time="2025-06-21T05:30:33.899291469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xpcbg,Uid:48b4f5a0-06d2-4b85-854f-4c514806e6d7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26dc063033aadfb0946e6a036a7baca8ab9df60c611a2aa2389aee008765cf98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:33.900306 systemd[1]: run-netns-cni\x2dadb8dc26\x2d52cb\x2dbdb1\x2d6145\x2db0af1d970698.mount: Deactivated successfully. 
Jun 21 05:30:33.901803 kubelet[2705]: E0621 05:30:33.901502 2705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26dc063033aadfb0946e6a036a7baca8ab9df60c611a2aa2389aee008765cf98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:30:33.901803 kubelet[2705]: E0621 05:30:33.901575 2705 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26dc063033aadfb0946e6a036a7baca8ab9df60c611a2aa2389aee008765cf98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xpcbg" Jun 21 05:30:33.901803 kubelet[2705]: E0621 05:30:33.901612 2705 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26dc063033aadfb0946e6a036a7baca8ab9df60c611a2aa2389aee008765cf98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xpcbg" Jun 21 05:30:33.901930 kubelet[2705]: E0621 05:30:33.901684 2705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xpcbg_kube-system(48b4f5a0-06d2-4b85-854f-4c514806e6d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xpcbg_kube-system(48b4f5a0-06d2-4b85-854f-4c514806e6d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26dc063033aadfb0946e6a036a7baca8ab9df60c611a2aa2389aee008765cf98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xpcbg" podUID="48b4f5a0-06d2-4b85-854f-4c514806e6d7" Jun 21 05:30:40.093036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593704135.mount: Deactivated successfully. 
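The mount unit names in these lines (run-netns-cni\x2d..., var-lib-containerd-tmpmounts-...) are systemd's path escaping: the leading slash is dropped, remaining slashes become dashes, and literal dashes in the path are escaped as \x2d. A rough sketch covering just the characters seen here, not the full systemd-escape rules:

    # Approximate systemd path -> mount-unit-name escaping for the units above.
    # Handles only the cases visible in the log (slashes and dashes).
    def mount_unit_name(path: str) -> str:
        escaped = path.lstrip("/").replace("-", "\\x2d").replace("/", "-")
        return escaped + ".mount"

    print(mount_unit_name("/run/netns/cni-dd006897-14ec-6336-5428-46c54229d8ec"))
    # run-netns-cni\x2ddd006897\x2d14ec\x2d6336\x2d5428\x2d46c54229d8ec.mount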
Jun 21 05:30:40.144781 containerd[1542]: time="2025-06-21T05:30:40.144642080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:40.147564 containerd[1542]: time="2025-06-21T05:30:40.147436186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 21 05:30:40.150171 containerd[1542]: time="2025-06-21T05:30:40.150055171Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:40.151069 containerd[1542]: time="2025-06-21T05:30:40.151017972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 8.319264041s" Jun 21 05:30:40.151433 containerd[1542]: time="2025-06-21T05:30:40.151074740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 21 05:30:40.152859 containerd[1542]: time="2025-06-21T05:30:40.152822529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:40.179499 containerd[1542]: time="2025-06-21T05:30:40.179432672Z" level=info msg="CreateContainer within sandbox \"9c566d750e82019d61172dbf067450192e7eb6b74342998662b6c58bb1bfe400\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 21 05:30:40.199475 containerd[1542]: time="2025-06-21T05:30:40.199411578Z" level=info msg="Container 1bdcacf2826d5024d86afc87abc588d11bd16aa7485ecf29f9f29ed5f2facef1: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:40.205776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833777779.mount: Deactivated successfully. Jun 21 05:30:40.219563 containerd[1542]: time="2025-06-21T05:30:40.219421893Z" level=info msg="CreateContainer within sandbox \"9c566d750e82019d61172dbf067450192e7eb6b74342998662b6c58bb1bfe400\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1bdcacf2826d5024d86afc87abc588d11bd16aa7485ecf29f9f29ed5f2facef1\"" Jun 21 05:30:40.220506 containerd[1542]: time="2025-06-21T05:30:40.220474964Z" level=info msg="StartContainer for \"1bdcacf2826d5024d86afc87abc588d11bd16aa7485ecf29f9f29ed5f2facef1\"" Jun 21 05:30:40.229844 containerd[1542]: time="2025-06-21T05:30:40.224559058Z" level=info msg="connecting to shim 1bdcacf2826d5024d86afc87abc588d11bd16aa7485ecf29f9f29ed5f2facef1" address="unix:///run/containerd/s/2f2134832e23459199078d137491340fdc1161371a4e65c932c2b5aae4cf8482" protocol=ttrpc version=3 Jun 21 05:30:40.370842 systemd[1]: Started cri-containerd-1bdcacf2826d5024d86afc87abc588d11bd16aa7485ecf29f9f29ed5f2facef1.scope - libcontainer container 1bdcacf2826d5024d86afc87abc588d11bd16aa7485ecf29f9f29ed5f2facef1. Jun 21 05:30:40.478858 containerd[1542]: time="2025-06-21T05:30:40.478750008Z" level=info msg="StartContainer for \"1bdcacf2826d5024d86afc87abc588d11bd16aa7485ecf29f9f29ed5f2facef1\" returns successfully" Jun 21 05:30:40.879841 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jun 21 05:30:40.880586 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 21 05:30:41.177742 kubelet[2705]: I0621 05:30:41.177650 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-87snn" podStartSLOduration=2.5084568640000002 podStartE2EDuration="21.177614997s" podCreationTimestamp="2025-06-21 05:30:20 +0000 UTC" firstStartedPulling="2025-06-21 05:30:21.483935862 +0000 UTC m=+20.215733663" lastFinishedPulling="2025-06-21 05:30:40.153093995 +0000 UTC m=+38.884891796" observedRunningTime="2025-06-21 05:30:40.908387011 +0000 UTC m=+39.640184841" watchObservedRunningTime="2025-06-21 05:30:41.177614997 +0000 UTC m=+39.909412822" Jun 21 05:30:41.297514 kubelet[2705]: I0621 05:30:41.297447 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/181e331f-ec96-4597-85c5-2475203d63be-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "181e331f-ec96-4597-85c5-2475203d63be" (UID: "181e331f-ec96-4597-85c5-2475203d63be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 21 05:30:41.300631 kubelet[2705]: I0621 05:30:41.300561 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181e331f-ec96-4597-85c5-2475203d63be-whisker-ca-bundle\") pod \"181e331f-ec96-4597-85c5-2475203d63be\" (UID: \"181e331f-ec96-4597-85c5-2475203d63be\") " Jun 21 05:30:41.300808 kubelet[2705]: I0621 05:30:41.300706 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/181e331f-ec96-4597-85c5-2475203d63be-whisker-backend-key-pair\") pod \"181e331f-ec96-4597-85c5-2475203d63be\" (UID: \"181e331f-ec96-4597-85c5-2475203d63be\") " Jun 21 05:30:41.300808 kubelet[2705]: I0621 05:30:41.300762 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbblv\" (UniqueName: \"kubernetes.io/projected/181e331f-ec96-4597-85c5-2475203d63be-kube-api-access-zbblv\") pod \"181e331f-ec96-4597-85c5-2475203d63be\" (UID: \"181e331f-ec96-4597-85c5-2475203d63be\") " Jun 21 05:30:41.300914 kubelet[2705]: I0621 05:30:41.300883 2705 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181e331f-ec96-4597-85c5-2475203d63be-whisker-ca-bundle\") on node \"ci-4372.0.0-0-a0fa6d352b\" DevicePath \"\"" Jun 21 05:30:41.311613 kubelet[2705]: I0621 05:30:41.311543 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181e331f-ec96-4597-85c5-2475203d63be-kube-api-access-zbblv" (OuterVolumeSpecName: "kube-api-access-zbblv") pod "181e331f-ec96-4597-85c5-2475203d63be" (UID: "181e331f-ec96-4597-85c5-2475203d63be"). InnerVolumeSpecName "kube-api-access-zbblv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 21 05:30:41.312980 systemd[1]: var-lib-kubelet-pods-181e331f\x2dec96\x2d4597\x2d85c5\x2d2475203d63be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzbblv.mount: Deactivated successfully. 
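The startup-latency line above can be cross-checked from its own timestamps: the E2E figure is watchObservedRunningTime minus podCreationTimestamp, and the SLO figure excludes the image-pull window between firstStartedPulling and lastFinishedPulling. A worked check (nanoseconds truncated to microseconds for datetime):

    # Recompute the calico-node-87snn startup figures from the timestamps logged above.
    from datetime import datetime, timezone

    def ts(s):  # parse "2025-06-21 05:30:41.177614997", truncating ns to us
        date, clock = s.split()
        if "." in clock:
            clock = clock[:clock.index(".") + 7]
        return datetime.fromisoformat(f"{date} {clock}").replace(tzinfo=timezone.utc)

    created     = ts("2025-06-21 05:30:20")
    first_pull  = ts("2025-06-21 05:30:21.483935862")
    last_pull   = ts("2025-06-21 05:30:40.153093995")
    watch_ready = ts("2025-06-21 05:30:41.177614997")

    e2e = (watch_ready - created).total_seconds()
    slo = e2e - (last_pull - first_pull).total_seconds()
    print(f"podStartE2EDuration ~ {e2e:.4f}s, podStartSLOduration ~ {slo:.4f}s")
    # ~21.1776s and ~2.5085s, matching the logged podStartE2EDuration and podStartSLOduration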
Jun 21 05:30:41.314952 kubelet[2705]: I0621 05:30:41.314839 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181e331f-ec96-4597-85c5-2475203d63be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "181e331f-ec96-4597-85c5-2475203d63be" (UID: "181e331f-ec96-4597-85c5-2475203d63be"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 21 05:30:41.315351 systemd[1]: var-lib-kubelet-pods-181e331f\x2dec96\x2d4597\x2d85c5\x2d2475203d63be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 21 05:30:41.401738 kubelet[2705]: I0621 05:30:41.401372 2705 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zbblv\" (UniqueName: \"kubernetes.io/projected/181e331f-ec96-4597-85c5-2475203d63be-kube-api-access-zbblv\") on node \"ci-4372.0.0-0-a0fa6d352b\" DevicePath \"\"" Jun 21 05:30:41.401738 kubelet[2705]: I0621 05:30:41.401688 2705 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/181e331f-ec96-4597-85c5-2475203d63be-whisker-backend-key-pair\") on node \"ci-4372.0.0-0-a0fa6d352b\" DevicePath \"\"" Jun 21 05:30:41.583450 systemd[1]: Removed slice kubepods-besteffort-pod181e331f_ec96_4597_85c5_2475203d63be.slice - libcontainer container kubepods-besteffort-pod181e331f_ec96_4597_85c5_2475203d63be.slice. Jun 21 05:30:41.882765 kubelet[2705]: I0621 05:30:41.882022 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:30:42.021150 systemd[1]: Created slice kubepods-besteffort-pod4bb41f88_3f34_40e7_9b55_50666e01edc7.slice - libcontainer container kubepods-besteffort-pod4bb41f88_3f34_40e7_9b55_50666e01edc7.slice. 
Jun 21 05:30:42.106866 kubelet[2705]: I0621 05:30:42.106682 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bb41f88-3f34-40e7-9b55-50666e01edc7-whisker-ca-bundle\") pod \"whisker-7cb94d654c-8vt8f\" (UID: \"4bb41f88-3f34-40e7-9b55-50666e01edc7\") " pod="calico-system/whisker-7cb94d654c-8vt8f" Jun 21 05:30:42.106866 kubelet[2705]: I0621 05:30:42.106765 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4bb41f88-3f34-40e7-9b55-50666e01edc7-whisker-backend-key-pair\") pod \"whisker-7cb94d654c-8vt8f\" (UID: \"4bb41f88-3f34-40e7-9b55-50666e01edc7\") " pod="calico-system/whisker-7cb94d654c-8vt8f" Jun 21 05:30:42.106866 kubelet[2705]: I0621 05:30:42.106785 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hndzx\" (UniqueName: \"kubernetes.io/projected/4bb41f88-3f34-40e7-9b55-50666e01edc7-kube-api-access-hndzx\") pod \"whisker-7cb94d654c-8vt8f\" (UID: \"4bb41f88-3f34-40e7-9b55-50666e01edc7\") " pod="calico-system/whisker-7cb94d654c-8vt8f" Jun 21 05:30:42.332489 containerd[1542]: time="2025-06-21T05:30:42.332395226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cb94d654c-8vt8f,Uid:4bb41f88-3f34-40e7-9b55-50666e01edc7,Namespace:calico-system,Attempt:0,}" Jun 21 05:30:42.701694 systemd-networkd[1456]: calie30a546e90c: Link UP Jun 21 05:30:42.704099 systemd-networkd[1456]: calie30a546e90c: Gained carrier Jun 21 05:30:42.741944 containerd[1542]: 2025-06-21 05:30:42.382 [INFO][3797] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:30:42.741944 containerd[1542]: 2025-06-21 05:30:42.417 [INFO][3797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0 whisker-7cb94d654c- calico-system 4bb41f88-3f34-40e7-9b55-50666e01edc7 953 0 2025-06-21 05:30:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7cb94d654c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.0-0-a0fa6d352b whisker-7cb94d654c-8vt8f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie30a546e90c [] [] }} ContainerID="392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" Namespace="calico-system" Pod="whisker-7cb94d654c-8vt8f" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-" Jun 21 05:30:42.741944 containerd[1542]: 2025-06-21 05:30:42.417 [INFO][3797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" Namespace="calico-system" Pod="whisker-7cb94d654c-8vt8f" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0" Jun 21 05:30:42.741944 containerd[1542]: 2025-06-21 05:30:42.606 [INFO][3805] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" HandleID="k8s-pod-network.392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0" Jun 21 05:30:42.742803 containerd[1542]: 2025-06-21 05:30:42.609 [INFO][3805] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" HandleID="k8s-pod-network.392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004062d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-0-a0fa6d352b", "pod":"whisker-7cb94d654c-8vt8f", "timestamp":"2025-06-21 05:30:42.606171377 +0000 UTC"}, Hostname:"ci-4372.0.0-0-a0fa6d352b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:30:42.742803 containerd[1542]: 2025-06-21 05:30:42.609 [INFO][3805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:30:42.742803 containerd[1542]: 2025-06-21 05:30:42.609 [INFO][3805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 05:30:42.742803 containerd[1542]: 2025-06-21 05:30:42.609 [INFO][3805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-0-a0fa6d352b' Jun 21 05:30:42.742803 containerd[1542]: 2025-06-21 05:30:42.629 [INFO][3805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:42.742803 containerd[1542]: 2025-06-21 05:30:42.643 [INFO][3805] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:42.742803 containerd[1542]: 2025-06-21 05:30:42.652 [INFO][3805] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:42.742803 containerd[1542]: 2025-06-21 05:30:42.656 [INFO][3805] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:42.742803 containerd[1542]: 2025-06-21 05:30:42.660 [INFO][3805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:42.745816 containerd[1542]: 2025-06-21 05:30:42.660 [INFO][3805] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:42.745816 containerd[1542]: 2025-06-21 05:30:42.663 [INFO][3805] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0 Jun 21 05:30:42.745816 containerd[1542]: 2025-06-21 05:30:42.669 [INFO][3805] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:42.745816 containerd[1542]: 2025-06-21 05:30:42.678 [INFO][3805] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.1/26] block=192.168.15.0/26 handle="k8s-pod-network.392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:42.745816 containerd[1542]: 2025-06-21 05:30:42.678 [INFO][3805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.1/26] handle="k8s-pod-network.392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:42.745816 containerd[1542]: 2025-06-21 05:30:42.678 [INFO][3805] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:30:42.745816 containerd[1542]: 2025-06-21 05:30:42.678 [INFO][3805] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.1/26] IPv6=[] ContainerID="392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" HandleID="k8s-pod-network.392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0" Jun 21 05:30:42.747029 containerd[1542]: 2025-06-21 05:30:42.682 [INFO][3797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" Namespace="calico-system" Pod="whisker-7cb94d654c-8vt8f" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0", GenerateName:"whisker-7cb94d654c-", Namespace:"calico-system", SelfLink:"", UID:"4bb41f88-3f34-40e7-9b55-50666e01edc7", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cb94d654c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"", Pod:"whisker-7cb94d654c-8vt8f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie30a546e90c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:42.747029 containerd[1542]: 2025-06-21 05:30:42.682 [INFO][3797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.1/32] ContainerID="392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" Namespace="calico-system" Pod="whisker-7cb94d654c-8vt8f" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0" Jun 21 05:30:42.749695 containerd[1542]: 2025-06-21 05:30:42.682 [INFO][3797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie30a546e90c ContainerID="392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" Namespace="calico-system" Pod="whisker-7cb94d654c-8vt8f" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0" Jun 21 05:30:42.749695 containerd[1542]: 2025-06-21 05:30:42.698 [INFO][3797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" Namespace="calico-system" Pod="whisker-7cb94d654c-8vt8f" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0" Jun 21 05:30:42.749797 containerd[1542]: 2025-06-21 05:30:42.698 [INFO][3797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" Namespace="calico-system" Pod="whisker-7cb94d654c-8vt8f" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0", GenerateName:"whisker-7cb94d654c-", Namespace:"calico-system", SelfLink:"", UID:"4bb41f88-3f34-40e7-9b55-50666e01edc7", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cb94d654c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0", Pod:"whisker-7cb94d654c-8vt8f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie30a546e90c", MAC:"5a:77:2e:b3:11:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:42.749876 containerd[1542]: 2025-06-21 05:30:42.720 [INFO][3797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" Namespace="calico-system" Pod="whisker-7cb94d654c-8vt8f" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-whisker--7cb94d654c--8vt8f-eth0" Jun 21 05:30:42.852647 containerd[1542]: time="2025-06-21T05:30:42.852509753Z" level=info msg="connecting to shim 392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0" address="unix:///run/containerd/s/107fe9eed804086b83d44e279af10ba8e4e87410da648d8ec2d9eabd08b03235" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:42.931644 systemd[1]: Started cri-containerd-392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0.scope - libcontainer container 392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0. 
Jun 21 05:30:43.076402 containerd[1542]: time="2025-06-21T05:30:43.076356791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cb94d654c-8vt8f,Uid:4bb41f88-3f34-40e7-9b55-50666e01edc7,Namespace:calico-system,Attempt:0,} returns sandbox id \"392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0\"" Jun 21 05:30:43.112522 containerd[1542]: time="2025-06-21T05:30:43.112469471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 21 05:30:43.585887 containerd[1542]: time="2025-06-21T05:30:43.585779378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d864bcf8d-rsvrn,Uid:5c75a7c9-289a-4fec-aaea-f545ac34e00f,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:30:43.603104 kubelet[2705]: I0621 05:30:43.602929 2705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="181e331f-ec96-4597-85c5-2475203d63be" path="/var/lib/kubelet/pods/181e331f-ec96-4597-85c5-2475203d63be/volumes" Jun 21 05:30:43.752388 systemd-networkd[1456]: calie30a546e90c: Gained IPv6LL Jun 21 05:30:43.762594 systemd-networkd[1456]: cali7c18a01d081: Link UP Jun 21 05:30:43.764427 systemd-networkd[1456]: cali7c18a01d081: Gained carrier Jun 21 05:30:43.796115 containerd[1542]: 2025-06-21 05:30:43.634 [INFO][3957] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:30:43.796115 containerd[1542]: 2025-06-21 05:30:43.649 [INFO][3957] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0 calico-apiserver-7d864bcf8d- calico-apiserver 5c75a7c9-289a-4fec-aaea-f545ac34e00f 881 0 2025-06-21 05:30:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d864bcf8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.0-0-a0fa6d352b calico-apiserver-7d864bcf8d-rsvrn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7c18a01d081 [] [] }} ContainerID="b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-rsvrn" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-" Jun 21 05:30:43.796115 containerd[1542]: 2025-06-21 05:30:43.649 [INFO][3957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-rsvrn" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0" Jun 21 05:30:43.796115 containerd[1542]: 2025-06-21 05:30:43.691 [INFO][3969] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" HandleID="k8s-pod-network.b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0" Jun 21 05:30:43.797700 containerd[1542]: 2025-06-21 05:30:43.691 [INFO][3969] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" HandleID="k8s-pod-network.b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" 
Workload="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f310), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.0-0-a0fa6d352b", "pod":"calico-apiserver-7d864bcf8d-rsvrn", "timestamp":"2025-06-21 05:30:43.691266987 +0000 UTC"}, Hostname:"ci-4372.0.0-0-a0fa6d352b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:30:43.797700 containerd[1542]: 2025-06-21 05:30:43.691 [INFO][3969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:30:43.797700 containerd[1542]: 2025-06-21 05:30:43.691 [INFO][3969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 05:30:43.797700 containerd[1542]: 2025-06-21 05:30:43.692 [INFO][3969] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-0-a0fa6d352b' Jun 21 05:30:43.797700 containerd[1542]: 2025-06-21 05:30:43.703 [INFO][3969] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:43.797700 containerd[1542]: 2025-06-21 05:30:43.712 [INFO][3969] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:43.797700 containerd[1542]: 2025-06-21 05:30:43.721 [INFO][3969] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:43.797700 containerd[1542]: 2025-06-21 05:30:43.724 [INFO][3969] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:43.797700 containerd[1542]: 2025-06-21 05:30:43.729 [INFO][3969] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:43.799110 containerd[1542]: 2025-06-21 05:30:43.729 [INFO][3969] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:43.799110 containerd[1542]: 2025-06-21 05:30:43.734 [INFO][3969] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210 Jun 21 05:30:43.799110 containerd[1542]: 2025-06-21 05:30:43.741 [INFO][3969] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:43.799110 containerd[1542]: 2025-06-21 05:30:43.754 [INFO][3969] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.2/26] block=192.168.15.0/26 handle="k8s-pod-network.b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:43.799110 containerd[1542]: 2025-06-21 05:30:43.754 [INFO][3969] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.2/26] handle="k8s-pod-network.b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:43.799110 containerd[1542]: 2025-06-21 05:30:43.754 [INFO][3969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 05:30:43.799110 containerd[1542]: 2025-06-21 05:30:43.755 [INFO][3969] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.2/26] IPv6=[] ContainerID="b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" HandleID="k8s-pod-network.b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0" Jun 21 05:30:43.800976 containerd[1542]: 2025-06-21 05:30:43.758 [INFO][3957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-rsvrn" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0", GenerateName:"calico-apiserver-7d864bcf8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c75a7c9-289a-4fec-aaea-f545ac34e00f", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d864bcf8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"", Pod:"calico-apiserver-7d864bcf8d-rsvrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c18a01d081", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:43.801372 containerd[1542]: 2025-06-21 05:30:43.759 [INFO][3957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.2/32] ContainerID="b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-rsvrn" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0" Jun 21 05:30:43.801372 containerd[1542]: 2025-06-21 05:30:43.759 [INFO][3957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c18a01d081 ContainerID="b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-rsvrn" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0" Jun 21 05:30:43.801372 containerd[1542]: 2025-06-21 05:30:43.763 [INFO][3957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-rsvrn" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0" Jun 21 05:30:43.801838 containerd[1542]: 2025-06-21 05:30:43.765 [INFO][3957] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-rsvrn" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0", GenerateName:"calico-apiserver-7d864bcf8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c75a7c9-289a-4fec-aaea-f545ac34e00f", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d864bcf8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210", Pod:"calico-apiserver-7d864bcf8d-rsvrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c18a01d081", MAC:"b6:c4:fd:69:6d:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:43.802005 containerd[1542]: 2025-06-21 05:30:43.788 [INFO][3957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-rsvrn" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--rsvrn-eth0" Jun 21 05:30:43.828791 containerd[1542]: time="2025-06-21T05:30:43.828740516Z" level=info msg="connecting to shim b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210" address="unix:///run/containerd/s/a3463cef11f8411b97f7eaa1fba6a31daf9b92b870645b2304a3c58cb74a14d3" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:43.872768 systemd[1]: Started cri-containerd-b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210.scope - libcontainer container b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210. 
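Both endpoints written so far carry a MAC of the same shape (5a:77:2e:b3:11:dd for whisker, b6:c4:fd:69:6d:62 here): the first octet has the locally-administered bit set and the multicast bit clear, the usual pattern for randomly generated veth addresses. A hedged sketch of generating and checking such a MAC follows; it is not necessarily how the CNI plugin actually derives these values.

```go
// mac_sketch.go: generate a random locally-administered, unicast MAC of the
// kind seen on the cali* endpoints in the log. Illustrative only; the CNI
// plugin's real derivation may differ.
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomLocalMAC returns a 6-byte MAC with the locally-administered bit (0x02)
// set and the multicast bit (0x01) cleared in the first octet.
func randomLocalMAC() (net.HardwareAddr, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return nil, err
	}
	buf[0] = (buf[0] | 0x02) &^ 0x01
	return net.HardwareAddr(buf), nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println("generated:", mac)

	// The MACs logged for the whisker and apiserver endpoints follow the same
	// pattern: first octets 0x5a and 0xb6 both have bit 1 set and bit 0 clear.
	for _, s := range []string{"5a:77:2e:b3:11:dd", "b6:c4:fd:69:6d:62"} {
		hw, _ := net.ParseMAC(s)
		fmt.Printf("%s local=%t unicast=%t\n", s, hw[0]&0x02 != 0, hw[0]&0x01 == 0)
	}
}
```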
Jun 21 05:30:43.940499 containerd[1542]: time="2025-06-21T05:30:43.940446405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d864bcf8d-rsvrn,Uid:5c75a7c9-289a-4fec-aaea-f545ac34e00f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210\"" Jun 21 05:30:44.068540 kubelet[2705]: I0621 05:30:44.068476 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:30:44.069436 kubelet[2705]: E0621 05:30:44.069391 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:44.566527 containerd[1542]: time="2025-06-21T05:30:44.566425169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-5jcrs,Uid:684e44d5-7d16-4ab2-86ae-3cb7892ca253,Namespace:calico-system,Attempt:0,}" Jun 21 05:30:44.864667 systemd-networkd[1456]: cali627337c167e: Link UP Jun 21 05:30:44.864947 systemd-networkd[1456]: cali627337c167e: Gained carrier Jun 21 05:30:44.897237 containerd[1542]: 2025-06-21 05:30:44.639 [INFO][4079] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:30:44.897237 containerd[1542]: 2025-06-21 05:30:44.665 [INFO][4079] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0 goldmane-5bd85449d4- calico-system 684e44d5-7d16-4ab2-86ae-3cb7892ca253 884 0 2025-06-21 05:30:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.0-0-a0fa6d352b goldmane-5bd85449d4-5jcrs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali627337c167e [] [] }} ContainerID="64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" Namespace="calico-system" Pod="goldmane-5bd85449d4-5jcrs" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-" Jun 21 05:30:44.897237 containerd[1542]: 2025-06-21 05:30:44.665 [INFO][4079] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" Namespace="calico-system" Pod="goldmane-5bd85449d4-5jcrs" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0" Jun 21 05:30:44.897237 containerd[1542]: 2025-06-21 05:30:44.755 [INFO][4093] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" HandleID="k8s-pod-network.64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0" Jun 21 05:30:44.897739 containerd[1542]: 2025-06-21 05:30:44.755 [INFO][4093] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" HandleID="k8s-pod-network.64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-0-a0fa6d352b", "pod":"goldmane-5bd85449d4-5jcrs", "timestamp":"2025-06-21 
05:30:44.755672322 +0000 UTC"}, Hostname:"ci-4372.0.0-0-a0fa6d352b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:30:44.897739 containerd[1542]: 2025-06-21 05:30:44.755 [INFO][4093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:30:44.897739 containerd[1542]: 2025-06-21 05:30:44.755 [INFO][4093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 05:30:44.897739 containerd[1542]: 2025-06-21 05:30:44.755 [INFO][4093] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-0-a0fa6d352b' Jun 21 05:30:44.897739 containerd[1542]: 2025-06-21 05:30:44.771 [INFO][4093] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:44.897739 containerd[1542]: 2025-06-21 05:30:44.785 [INFO][4093] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:44.897739 containerd[1542]: 2025-06-21 05:30:44.799 [INFO][4093] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:44.897739 containerd[1542]: 2025-06-21 05:30:44.804 [INFO][4093] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:44.897739 containerd[1542]: 2025-06-21 05:30:44.811 [INFO][4093] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:44.898003 containerd[1542]: 2025-06-21 05:30:44.812 [INFO][4093] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:44.898003 containerd[1542]: 2025-06-21 05:30:44.816 [INFO][4093] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf Jun 21 05:30:44.898003 containerd[1542]: 2025-06-21 05:30:44.836 [INFO][4093] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:44.898003 containerd[1542]: 2025-06-21 05:30:44.852 [INFO][4093] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.3/26] block=192.168.15.0/26 handle="k8s-pod-network.64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:44.898003 containerd[1542]: 2025-06-21 05:30:44.853 [INFO][4093] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.3/26] handle="k8s-pod-network.64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:44.898003 containerd[1542]: 2025-06-21 05:30:44.853 [INFO][4093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 05:30:44.898003 containerd[1542]: 2025-06-21 05:30:44.853 [INFO][4093] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.3/26] IPv6=[] ContainerID="64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" HandleID="k8s-pod-network.64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0" Jun 21 05:30:44.898561 containerd[1542]: 2025-06-21 05:30:44.859 [INFO][4079] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" Namespace="calico-system" Pod="goldmane-5bd85449d4-5jcrs" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"684e44d5-7d16-4ab2-86ae-3cb7892ca253", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"", Pod:"goldmane-5bd85449d4-5jcrs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali627337c167e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:44.898561 containerd[1542]: 2025-06-21 05:30:44.859 [INFO][4079] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.3/32] ContainerID="64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" Namespace="calico-system" Pod="goldmane-5bd85449d4-5jcrs" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0" Jun 21 05:30:44.898699 containerd[1542]: 2025-06-21 05:30:44.859 [INFO][4079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali627337c167e ContainerID="64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" Namespace="calico-system" Pod="goldmane-5bd85449d4-5jcrs" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0" Jun 21 05:30:44.898699 containerd[1542]: 2025-06-21 05:30:44.864 [INFO][4079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" Namespace="calico-system" Pod="goldmane-5bd85449d4-5jcrs" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0" Jun 21 05:30:44.898757 containerd[1542]: 2025-06-21 05:30:44.864 [INFO][4079] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" 
Namespace="calico-system" Pod="goldmane-5bd85449d4-5jcrs" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"684e44d5-7d16-4ab2-86ae-3cb7892ca253", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf", Pod:"goldmane-5bd85449d4-5jcrs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali627337c167e", MAC:"f2:bb:2c:e0:3e:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:44.898821 containerd[1542]: 2025-06-21 05:30:44.887 [INFO][4079] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" Namespace="calico-system" Pod="goldmane-5bd85449d4-5jcrs" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-goldmane--5bd85449d4--5jcrs-eth0" Jun 21 05:30:44.922153 containerd[1542]: time="2025-06-21T05:30:44.920984832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:44.926148 containerd[1542]: time="2025-06-21T05:30:44.924508764Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:44.926881 containerd[1542]: time="2025-06-21T05:30:44.926834537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 21 05:30:44.938147 containerd[1542]: time="2025-06-21T05:30:44.937840204Z" level=info msg="connecting to shim 64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf" address="unix:///run/containerd/s/ccb5ad7301161b2dd64bf93a9a05c9c3571275baf256878d3481be4e11eb5f7f" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:44.939215 kubelet[2705]: E0621 05:30:44.939180 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:44.949719 containerd[1542]: time="2025-06-21T05:30:44.947259630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:44.950351 
containerd[1542]: time="2025-06-21T05:30:44.947941775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 1.83542184s" Jun 21 05:30:44.950705 containerd[1542]: time="2025-06-21T05:30:44.950511917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 21 05:30:44.955037 containerd[1542]: time="2025-06-21T05:30:44.954958309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 05:30:44.962543 containerd[1542]: time="2025-06-21T05:30:44.961345225Z" level=info msg="CreateContainer within sandbox \"392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 21 05:30:44.968290 systemd-networkd[1456]: cali7c18a01d081: Gained IPv6LL Jun 21 05:30:44.997745 containerd[1542]: time="2025-06-21T05:30:44.997690055Z" level=info msg="Container c4f4aa58d10aee93136bba61a3678368443a0cd515b285212c84bf8de634ebd0: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:45.014566 containerd[1542]: time="2025-06-21T05:30:45.014520464Z" level=info msg="CreateContainer within sandbox \"392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c4f4aa58d10aee93136bba61a3678368443a0cd515b285212c84bf8de634ebd0\"" Jun 21 05:30:45.024526 containerd[1542]: time="2025-06-21T05:30:45.024436650Z" level=info msg="StartContainer for \"c4f4aa58d10aee93136bba61a3678368443a0cd515b285212c84bf8de634ebd0\"" Jun 21 05:30:45.032411 systemd[1]: Started cri-containerd-64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf.scope - libcontainer container 64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf. Jun 21 05:30:45.047348 containerd[1542]: time="2025-06-21T05:30:45.042010436Z" level=info msg="connecting to shim c4f4aa58d10aee93136bba61a3678368443a0cd515b285212c84bf8de634ebd0" address="unix:///run/containerd/s/107fe9eed804086b83d44e279af10ba8e4e87410da648d8ec2d9eabd08b03235" protocol=ttrpc version=3 Jun 21 05:30:45.099495 systemd[1]: Started cri-containerd-c4f4aa58d10aee93136bba61a3678368443a0cd515b285212c84bf8de634ebd0.scope - libcontainer container c4f4aa58d10aee93136bba61a3678368443a0cd515b285212c84bf8de634ebd0. 
Jun 21 05:30:45.198784 containerd[1542]: time="2025-06-21T05:30:45.198731728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-5jcrs,Uid:684e44d5-7d16-4ab2-86ae-3cb7892ca253,Namespace:calico-system,Attempt:0,} returns sandbox id \"64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf\"" Jun 21 05:30:45.218336 containerd[1542]: time="2025-06-21T05:30:45.218261544Z" level=info msg="StartContainer for \"c4f4aa58d10aee93136bba61a3678368443a0cd515b285212c84bf8de634ebd0\" returns successfully" Jun 21 05:30:45.433917 systemd-networkd[1456]: vxlan.calico: Link UP Jun 21 05:30:45.433930 systemd-networkd[1456]: vxlan.calico: Gained carrier Jun 21 05:30:45.563064 kubelet[2705]: E0621 05:30:45.562945 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:45.565146 containerd[1542]: time="2025-06-21T05:30:45.564612707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xpcbg,Uid:48b4f5a0-06d2-4b85-854f-4c514806e6d7,Namespace:kube-system,Attempt:0,}" Jun 21 05:30:45.566901 containerd[1542]: time="2025-06-21T05:30:45.566609120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84cc887c7f-r77kt,Uid:0ae29d9a-bf5d-4742-896a-2a2ead377607,Namespace:calico-system,Attempt:0,}" Jun 21 05:30:45.570090 containerd[1542]: time="2025-06-21T05:30:45.569952345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d864bcf8d-25t7g,Uid:7b16b6e7-bce4-413f-ad33-f27b0fa03961,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:30:45.902640 systemd-networkd[1456]: cali5245c508207: Link UP Jun 21 05:30:45.903780 systemd-networkd[1456]: cali5245c508207: Gained carrier Jun 21 05:30:45.966506 containerd[1542]: 2025-06-21 05:30:45.708 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0 coredns-674b8bbfcf- kube-system 48b4f5a0-06d2-4b85-854f-4c514806e6d7 885 0 2025-06-21 05:30:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.0-0-a0fa6d352b coredns-674b8bbfcf-xpcbg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5245c508207 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpcbg" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-" Jun 21 05:30:45.966506 containerd[1542]: 2025-06-21 05:30:45.708 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpcbg" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0" Jun 21 05:30:45.966506 containerd[1542]: 2025-06-21 05:30:45.803 [INFO][4272] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" HandleID="k8s-pod-network.8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0" Jun 21 05:30:45.967868 
containerd[1542]: 2025-06-21 05:30:45.804 [INFO][4272] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" HandleID="k8s-pod-network.8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5030), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.0-0-a0fa6d352b", "pod":"coredns-674b8bbfcf-xpcbg", "timestamp":"2025-06-21 05:30:45.803786266 +0000 UTC"}, Hostname:"ci-4372.0.0-0-a0fa6d352b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:30:45.967868 containerd[1542]: 2025-06-21 05:30:45.804 [INFO][4272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:30:45.967868 containerd[1542]: 2025-06-21 05:30:45.804 [INFO][4272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 05:30:45.967868 containerd[1542]: 2025-06-21 05:30:45.804 [INFO][4272] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-0-a0fa6d352b' Jun 21 05:30:45.967868 containerd[1542]: 2025-06-21 05:30:45.825 [INFO][4272] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:45.967868 containerd[1542]: 2025-06-21 05:30:45.838 [INFO][4272] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:45.967868 containerd[1542]: 2025-06-21 05:30:45.852 [INFO][4272] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:45.967868 containerd[1542]: 2025-06-21 05:30:45.858 [INFO][4272] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:45.967868 containerd[1542]: 2025-06-21 05:30:45.862 [INFO][4272] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:45.968999 containerd[1542]: 2025-06-21 05:30:45.862 [INFO][4272] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:45.968999 containerd[1542]: 2025-06-21 05:30:45.865 [INFO][4272] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d Jun 21 05:30:45.968999 containerd[1542]: 2025-06-21 05:30:45.877 [INFO][4272] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:45.968999 containerd[1542]: 2025-06-21 05:30:45.885 [INFO][4272] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.4/26] block=192.168.15.0/26 handle="k8s-pod-network.8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:45.968999 containerd[1542]: 2025-06-21 05:30:45.886 [INFO][4272] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.4/26] handle="k8s-pod-network.8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" host="ci-4372.0.0-0-a0fa6d352b" 
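The recurring kubelet error "Nameserver limits exceeded ... the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" reflects the classic resolver cap of three nameserver entries: only the first three lines from the node's resolv.conf are applied, duplicates included, and the rest are omitted with a warning. A hedged sketch of that trimming follows; the parser and the sample resolv.conf are illustrative assumptions, not kubelet's code or this node's actual file.

```go
// resolvconf_sketch.go: show why the kubelet logs "Nameserver limits exceeded".
// The resolver honors at most three nameserver entries, so anything beyond the
// first three lines is dropped. Illustrative parser, not kubelet's.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; the kubelet applies the same cap

func applyNameserverLimit(resolvConf string) (applied, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(applied) < maxNameservers {
				applied = append(applied, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return applied, omitted
}

func main() {
	// Hypothetical node resolv.conf with a duplicated entry plus one extra,
	// which would reproduce the "some nameservers have been omitted" error.
	conf := `nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 8.8.8.8
`
	applied, omitted := applyNameserverLimit(conf)
	fmt.Println("applied nameserver line is:", strings.Join(applied, " "))
	fmt.Println("omitted:", omitted)
}
```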
Jun 21 05:30:45.968999 containerd[1542]: 2025-06-21 05:30:45.886 [INFO][4272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:30:45.968999 containerd[1542]: 2025-06-21 05:30:45.886 [INFO][4272] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.4/26] IPv6=[] ContainerID="8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" HandleID="k8s-pod-network.8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0" Jun 21 05:30:45.969340 containerd[1542]: 2025-06-21 05:30:45.892 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpcbg" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"48b4f5a0-06d2-4b85-854f-4c514806e6d7", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"", Pod:"coredns-674b8bbfcf-xpcbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5245c508207", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:45.969340 containerd[1542]: 2025-06-21 05:30:45.892 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.4/32] ContainerID="8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpcbg" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0" Jun 21 05:30:45.969340 containerd[1542]: 2025-06-21 05:30:45.892 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5245c508207 ContainerID="8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpcbg" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0" Jun 21 05:30:45.969340 containerd[1542]: 2025-06-21 05:30:45.906 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpcbg" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0" Jun 21 05:30:45.969340 containerd[1542]: 2025-06-21 05:30:45.909 [INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpcbg" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"48b4f5a0-06d2-4b85-854f-4c514806e6d7", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d", Pod:"coredns-674b8bbfcf-xpcbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5245c508207", MAC:"5a:0c:17:a6:ca:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:45.969340 containerd[1542]: 2025-06-21 05:30:45.939 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpcbg" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--xpcbg-eth0" Jun 21 05:30:46.082630 containerd[1542]: time="2025-06-21T05:30:46.082057150Z" level=info msg="connecting to shim 8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d" address="unix:///run/containerd/s/02609a62345c9502abfb3f4e60166656ff158a98a597f3cf696e8b713b3e4dff" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:46.099201 systemd-networkd[1456]: cali13574e5946d: Link UP Jun 21 05:30:46.109533 systemd-networkd[1456]: cali13574e5946d: Gained carrier Jun 21 05:30:46.161462 systemd[1]: Started cri-containerd-8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d.scope - libcontainer container 8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d. 
Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.677 [INFO][4235] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0 calico-apiserver-7d864bcf8d- calico-apiserver 7b16b6e7-bce4-413f-ad33-f27b0fa03961 879 0 2025-06-21 05:30:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d864bcf8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.0-0-a0fa6d352b calico-apiserver-7d864bcf8d-25t7g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali13574e5946d [] [] }} ContainerID="772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-25t7g" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.677 [INFO][4235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-25t7g" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.809 [INFO][4265] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" HandleID="k8s-pod-network.772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.810 [INFO][4265] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" HandleID="k8s-pod-network.772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002593b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.0-0-a0fa6d352b", "pod":"calico-apiserver-7d864bcf8d-25t7g", "timestamp":"2025-06-21 05:30:45.809023499 +0000 UTC"}, Hostname:"ci-4372.0.0-0-a0fa6d352b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.812 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.886 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.886 [INFO][4265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-0-a0fa6d352b' Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.926 [INFO][4265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.946 [INFO][4265] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.970 [INFO][4265] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.981 [INFO][4265] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.992 [INFO][4265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:45.992 [INFO][4265] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:46.002 [INFO][4265] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61 Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:46.026 [INFO][4265] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:46.043 [INFO][4265] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.5/26] block=192.168.15.0/26 handle="k8s-pod-network.772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:46.044 [INFO][4265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.5/26] handle="k8s-pod-network.772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:46.044 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
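The interleaving around 05:30:45.8 shows why the lock is host-wide: request [4265] (the second apiserver pod) logs "About to acquire host-wide IPAM lock" at .812 but only reports "Acquired" at .886, the same instant [4272] (coredns) releases it, so concurrent CNI ADDs on one node claim addresses strictly one at a time. A minimal sketch of that serialization with a mutex; the handles, delays, and printed IPs are labels chosen to mirror the log, not real state.

```go
// lock_order_sketch.go: two concurrent CNI ADD requests contending for the
// host-wide IPAM lock, mirroring the [4272]/[4265] interleaving in the log.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var hostWideLock sync.Mutex
	var wg sync.WaitGroup

	assign := func(req, ip string, hold time.Duration) {
		defer wg.Done()
		fmt.Println(req, "About to acquire host-wide IPAM lock.")
		hostWideLock.Lock()
		fmt.Println(req, "Acquired host-wide IPAM lock.")
		time.Sleep(hold) // stands in for block lookup + claim + write-back
		fmt.Println(req, "assigned", ip)
		hostWideLock.Unlock()
		fmt.Println(req, "Released host-wide IPAM lock.")
	}

	wg.Add(2)
	go assign("[4272]", "192.168.15.4/26", 80*time.Millisecond)
	time.Sleep(10 * time.Millisecond) // stagger the second request, as in the log
	go assign("[4265]", "192.168.15.5/26", 50*time.Millisecond)
	wg.Wait()
}
```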
Jun 21 05:30:46.174089 containerd[1542]: 2025-06-21 05:30:46.044 [INFO][4265] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.5/26] IPv6=[] ContainerID="772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" HandleID="k8s-pod-network.772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0" Jun 21 05:30:46.176708 containerd[1542]: 2025-06-21 05:30:46.056 [INFO][4235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-25t7g" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0", GenerateName:"calico-apiserver-7d864bcf8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b16b6e7-bce4-413f-ad33-f27b0fa03961", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d864bcf8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"", Pod:"calico-apiserver-7d864bcf8d-25t7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13574e5946d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:46.176708 containerd[1542]: 2025-06-21 05:30:46.058 [INFO][4235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.5/32] ContainerID="772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-25t7g" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0" Jun 21 05:30:46.176708 containerd[1542]: 2025-06-21 05:30:46.058 [INFO][4235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13574e5946d ContainerID="772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-25t7g" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0" Jun 21 05:30:46.176708 containerd[1542]: 2025-06-21 05:30:46.130 [INFO][4235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-25t7g" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0" Jun 21 05:30:46.176708 containerd[1542]: 2025-06-21 05:30:46.135 [INFO][4235] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-25t7g" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0", GenerateName:"calico-apiserver-7d864bcf8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b16b6e7-bce4-413f-ad33-f27b0fa03961", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d864bcf8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61", Pod:"calico-apiserver-7d864bcf8d-25t7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13574e5946d", MAC:"96:31:7b:ce:62:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:46.176708 containerd[1542]: 2025-06-21 05:30:46.167 [INFO][4235] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" Namespace="calico-apiserver" Pod="calico-apiserver-7d864bcf8d-25t7g" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--apiserver--7d864bcf8d--25t7g-eth0" Jun 21 05:30:46.236228 containerd[1542]: time="2025-06-21T05:30:46.236100590Z" level=info msg="connecting to shim 772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61" address="unix:///run/containerd/s/0857d3245c223e9dc70473dbc735c2e27c9f33a3f1e5b6b543245eb285def396" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:46.294773 systemd-networkd[1456]: cali4de5a968ed0: Link UP Jun 21 05:30:46.303110 systemd-networkd[1456]: cali4de5a968ed0: Gained carrier Jun 21 05:30:46.327607 systemd[1]: Started cri-containerd-772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61.scope - libcontainer container 772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61. 
Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:45.729 [INFO][4247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0 calico-kube-controllers-84cc887c7f- calico-system 0ae29d9a-bf5d-4742-896a-2a2ead377607 882 0 2025-06-21 05:30:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84cc887c7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.0-0-a0fa6d352b calico-kube-controllers-84cc887c7f-r77kt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4de5a968ed0 [] [] }} ContainerID="0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" Namespace="calico-system" Pod="calico-kube-controllers-84cc887c7f-r77kt" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:45.729 [INFO][4247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" Namespace="calico-system" Pod="calico-kube-controllers-84cc887c7f-r77kt" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:45.832 [INFO][4277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" HandleID="k8s-pod-network.0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:45.833 [INFO][4277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" HandleID="k8s-pod-network.0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af270), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-0-a0fa6d352b", "pod":"calico-kube-controllers-84cc887c7f-r77kt", "timestamp":"2025-06-21 05:30:45.832905363 +0000 UTC"}, Hostname:"ci-4372.0.0-0-a0fa6d352b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:45.833 [INFO][4277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.044 [INFO][4277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.044 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-0-a0fa6d352b' Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.088 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.150 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.186 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.193 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.202 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.202 [INFO][4277] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.213 [INFO][4277] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.234 [INFO][4277] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.271 [INFO][4277] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.6/26] block=192.168.15.0/26 handle="k8s-pod-network.0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.273 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.6/26] handle="k8s-pod-network.0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.273 [INFO][4277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 05:30:46.379717 containerd[1542]: 2025-06-21 05:30:46.273 [INFO][4277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.6/26] IPv6=[] ContainerID="0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" HandleID="k8s-pod-network.0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0" Jun 21 05:30:46.381803 containerd[1542]: 2025-06-21 05:30:46.280 [INFO][4247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" Namespace="calico-system" Pod="calico-kube-controllers-84cc887c7f-r77kt" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0", GenerateName:"calico-kube-controllers-84cc887c7f-", Namespace:"calico-system", SelfLink:"", UID:"0ae29d9a-bf5d-4742-896a-2a2ead377607", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84cc887c7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"", Pod:"calico-kube-controllers-84cc887c7f-r77kt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4de5a968ed0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:46.381803 containerd[1542]: 2025-06-21 05:30:46.280 [INFO][4247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.6/32] ContainerID="0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" Namespace="calico-system" Pod="calico-kube-controllers-84cc887c7f-r77kt" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0" Jun 21 05:30:46.381803 containerd[1542]: 2025-06-21 05:30:46.280 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4de5a968ed0 ContainerID="0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" Namespace="calico-system" Pod="calico-kube-controllers-84cc887c7f-r77kt" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0" Jun 21 05:30:46.381803 containerd[1542]: 2025-06-21 05:30:46.307 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" Namespace="calico-system" Pod="calico-kube-controllers-84cc887c7f-r77kt" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0" 
Jun 21 05:30:46.381803 containerd[1542]: 2025-06-21 05:30:46.324 [INFO][4247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" Namespace="calico-system" Pod="calico-kube-controllers-84cc887c7f-r77kt" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0", GenerateName:"calico-kube-controllers-84cc887c7f-", Namespace:"calico-system", SelfLink:"", UID:"0ae29d9a-bf5d-4742-896a-2a2ead377607", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84cc887c7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb", Pod:"calico-kube-controllers-84cc887c7f-r77kt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4de5a968ed0", MAC:"c6:9b:54:4b:60:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:46.381803 containerd[1542]: 2025-06-21 05:30:46.371 [INFO][4247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" Namespace="calico-system" Pod="calico-kube-controllers-84cc887c7f-r77kt" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-calico--kube--controllers--84cc887c7f--r77kt-eth0" Jun 21 05:30:46.404978 containerd[1542]: time="2025-06-21T05:30:46.404710296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xpcbg,Uid:48b4f5a0-06d2-4b85-854f-4c514806e6d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d\"" Jun 21 05:30:46.411167 kubelet[2705]: E0621 05:30:46.410323 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:46.433718 containerd[1542]: time="2025-06-21T05:30:46.433452366Z" level=info msg="CreateContainer within sandbox \"8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 05:30:46.466472 containerd[1542]: time="2025-06-21T05:30:46.466389847Z" level=info msg="connecting to shim 0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb" address="unix:///run/containerd/s/5449dbbc27ba9c20c40168a47621c274a8f9a1c35a09bf5dd52d54ad964a2d7d" 
namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:46.483358 containerd[1542]: time="2025-06-21T05:30:46.483294368Z" level=info msg="Container fe069b68ed02a4eef5ca984d6d1451fcf881c9c795413d71c42a39184da718ce: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:46.535583 containerd[1542]: time="2025-06-21T05:30:46.534271082Z" level=info msg="CreateContainer within sandbox \"8675bacdced5b81e3c91b0957d182e5bbf99248de9008025ba95f96cbc07f29d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe069b68ed02a4eef5ca984d6d1451fcf881c9c795413d71c42a39184da718ce\"" Jun 21 05:30:46.538741 containerd[1542]: time="2025-06-21T05:30:46.538397324Z" level=info msg="StartContainer for \"fe069b68ed02a4eef5ca984d6d1451fcf881c9c795413d71c42a39184da718ce\"" Jun 21 05:30:46.545024 containerd[1542]: time="2025-06-21T05:30:46.544965084Z" level=info msg="connecting to shim fe069b68ed02a4eef5ca984d6d1451fcf881c9c795413d71c42a39184da718ce" address="unix:///run/containerd/s/02609a62345c9502abfb3f4e60166656ff158a98a597f3cf696e8b713b3e4dff" protocol=ttrpc version=3 Jun 21 05:30:46.569672 systemd-networkd[1456]: cali627337c167e: Gained IPv6LL Jun 21 05:30:46.602453 containerd[1542]: time="2025-06-21T05:30:46.602384753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d864bcf8d-25t7g,Uid:7b16b6e7-bce4-413f-ad33-f27b0fa03961,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61\"" Jun 21 05:30:46.609731 systemd[1]: Started cri-containerd-0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb.scope - libcontainer container 0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb. Jun 21 05:30:46.643953 systemd[1]: Started cri-containerd-fe069b68ed02a4eef5ca984d6d1451fcf881c9c795413d71c42a39184da718ce.scope - libcontainer container fe069b68ed02a4eef5ca984d6d1451fcf881c9c795413d71c42a39184da718ce. 
Jun 21 05:30:46.760586 systemd-networkd[1456]: vxlan.calico: Gained IPv6LL Jun 21 05:30:46.790456 containerd[1542]: time="2025-06-21T05:30:46.790305889Z" level=info msg="StartContainer for \"fe069b68ed02a4eef5ca984d6d1451fcf881c9c795413d71c42a39184da718ce\" returns successfully" Jun 21 05:30:46.818450 containerd[1542]: time="2025-06-21T05:30:46.818396464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84cc887c7f-r77kt,Uid:0ae29d9a-bf5d-4742-896a-2a2ead377607,Namespace:calico-system,Attempt:0,} returns sandbox id \"0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb\"" Jun 21 05:30:46.992766 kubelet[2705]: E0621 05:30:46.992253 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:47.083157 kubelet[2705]: I0621 05:30:47.073535 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xpcbg" podStartSLOduration=41.073503326 podStartE2EDuration="41.073503326s" podCreationTimestamp="2025-06-21 05:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:30:47.069185086 +0000 UTC m=+45.800982908" watchObservedRunningTime="2025-06-21 05:30:47.073503326 +0000 UTC m=+45.805301151" Jun 21 05:30:47.528814 systemd-networkd[1456]: cali13574e5946d: Gained IPv6LL Jun 21 05:30:47.565210 containerd[1542]: time="2025-06-21T05:30:47.564434021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvskp,Uid:0d57dcbc-26a6-4a6a-877e-2663d2596744,Namespace:calico-system,Attempt:0,}" Jun 21 05:30:47.566717 kubelet[2705]: E0621 05:30:47.566677 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:47.577727 containerd[1542]: time="2025-06-21T05:30:47.577672463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gqv9d,Uid:bc707386-a573-4931-82ab-55786356b796,Namespace:kube-system,Attempt:0,}" Jun 21 05:30:47.784409 systemd-networkd[1456]: cali5245c508207: Gained IPv6LL Jun 21 05:30:47.912310 systemd-networkd[1456]: cali4de5a968ed0: Gained IPv6LL Jun 21 05:30:47.939388 systemd-networkd[1456]: cali9a7a2d5dfd4: Link UP Jun 21 05:30:47.943416 systemd-networkd[1456]: cali9a7a2d5dfd4: Gained carrier Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.705 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0 csi-node-driver- calico-system 0d57dcbc-26a6-4a6a-877e-2663d2596744 768 0 2025-06-21 05:30:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.0-0-a0fa6d352b csi-node-driver-qvskp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9a7a2d5dfd4 [] [] }} ContainerID="745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" Namespace="calico-system" Pod="csi-node-driver-qvskp" 
WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.705 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" Namespace="calico-system" Pod="csi-node-driver-qvskp" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.800 [INFO][4552] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" HandleID="k8s-pod-network.745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.802 [INFO][4552] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" HandleID="k8s-pod-network.745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e300), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.0-0-a0fa6d352b", "pod":"csi-node-driver-qvskp", "timestamp":"2025-06-21 05:30:47.800220711 +0000 UTC"}, Hostname:"ci-4372.0.0-0-a0fa6d352b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.802 [INFO][4552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.802 [INFO][4552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.802 [INFO][4552] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-0-a0fa6d352b' Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.821 [INFO][4552] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.836 [INFO][4552] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.852 [INFO][4552] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.860 [INFO][4552] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.871 [INFO][4552] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.872 [INFO][4552] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.878 [INFO][4552] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8 Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.906 [INFO][4552] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.918 [INFO][4552] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.7/26] block=192.168.15.0/26 handle="k8s-pod-network.745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.918 [INFO][4552] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.7/26] handle="k8s-pod-network.745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.919 [INFO][4552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 05:30:48.002520 containerd[1542]: 2025-06-21 05:30:47.920 [INFO][4552] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.7/26] IPv6=[] ContainerID="745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" HandleID="k8s-pod-network.745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0" Jun 21 05:30:48.006692 containerd[1542]: 2025-06-21 05:30:47.933 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" Namespace="calico-system" Pod="csi-node-driver-qvskp" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0d57dcbc-26a6-4a6a-877e-2663d2596744", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"", Pod:"csi-node-driver-qvskp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a7a2d5dfd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:48.006692 containerd[1542]: 2025-06-21 05:30:47.933 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.7/32] ContainerID="745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" Namespace="calico-system" Pod="csi-node-driver-qvskp" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0" Jun 21 05:30:48.006692 containerd[1542]: 2025-06-21 05:30:47.933 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a7a2d5dfd4 ContainerID="745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" Namespace="calico-system" Pod="csi-node-driver-qvskp" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0" Jun 21 05:30:48.006692 containerd[1542]: 2025-06-21 05:30:47.948 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" Namespace="calico-system" Pod="csi-node-driver-qvskp" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0" Jun 21 05:30:48.006692 containerd[1542]: 2025-06-21 05:30:47.951 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" Namespace="calico-system" Pod="csi-node-driver-qvskp" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0d57dcbc-26a6-4a6a-877e-2663d2596744", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8", Pod:"csi-node-driver-qvskp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a7a2d5dfd4", MAC:"7a:5f:6d:1a:40:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:48.006692 containerd[1542]: 2025-06-21 05:30:47.979 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" Namespace="calico-system" Pod="csi-node-driver-qvskp" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-csi--node--driver--qvskp-eth0" Jun 21 05:30:48.026981 kubelet[2705]: E0621 05:30:48.026934 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:48.131093 containerd[1542]: time="2025-06-21T05:30:48.130621399Z" level=info msg="connecting to shim 745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8" address="unix:///run/containerd/s/60a7b906c912c3d3b160b1e8c1774b446bd0f1a38d2ae259eb1d4a05a3964398" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:48.226960 systemd-networkd[1456]: calibcfbc4fa442: Link UP Jun 21 05:30:48.246199 systemd-networkd[1456]: calibcfbc4fa442: Gained carrier Jun 21 05:30:48.313418 systemd[1]: Started cri-containerd-745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8.scope - libcontainer container 745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8. 
Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:47.741 [INFO][4537] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0 coredns-674b8bbfcf- kube-system bc707386-a573-4931-82ab-55786356b796 878 0 2025-06-21 05:30:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.0-0-a0fa6d352b coredns-674b8bbfcf-gqv9d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibcfbc4fa442 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqv9d" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:47.742 [INFO][4537] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqv9d" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:47.857 [INFO][4557] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" HandleID="k8s-pod-network.aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:47.858 [INFO][4557] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" HandleID="k8s-pod-network.aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000636b80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.0-0-a0fa6d352b", "pod":"coredns-674b8bbfcf-gqv9d", "timestamp":"2025-06-21 05:30:47.857891733 +0000 UTC"}, Hostname:"ci-4372.0.0-0-a0fa6d352b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:47.858 [INFO][4557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:47.920 [INFO][4557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:47.920 [INFO][4557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.0-0-a0fa6d352b' Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:47.960 [INFO][4557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:47.998 [INFO][4557] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:48.024 [INFO][4557] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:48.046 [INFO][4557] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:48.057 [INFO][4557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:48.057 [INFO][4557] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:48.065 [INFO][4557] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:48.105 [INFO][4557] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:48.165 [INFO][4557] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.8/26] block=192.168.15.0/26 handle="k8s-pod-network.aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:48.168 [INFO][4557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.8/26] handle="k8s-pod-network.aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" host="ci-4372.0.0-0-a0fa6d352b" Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:48.170 [INFO][4557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 05:30:48.325693 containerd[1542]: 2025-06-21 05:30:48.171 [INFO][4557] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.8/26] IPv6=[] ContainerID="aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" HandleID="k8s-pod-network.aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" Workload="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0" Jun 21 05:30:48.330115 containerd[1542]: 2025-06-21 05:30:48.202 [INFO][4537] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqv9d" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc707386-a573-4931-82ab-55786356b796", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"", Pod:"coredns-674b8bbfcf-gqv9d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcfbc4fa442", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:48.330115 containerd[1542]: 2025-06-21 05:30:48.204 [INFO][4537] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.8/32] ContainerID="aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqv9d" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0" Jun 21 05:30:48.330115 containerd[1542]: 2025-06-21 05:30:48.205 [INFO][4537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibcfbc4fa442 ContainerID="aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqv9d" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0" Jun 21 05:30:48.330115 containerd[1542]: 2025-06-21 05:30:48.250 [INFO][4537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqv9d" 
WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0" Jun 21 05:30:48.330115 containerd[1542]: 2025-06-21 05:30:48.253 [INFO][4537] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqv9d" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc707386-a573-4931-82ab-55786356b796", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.0-0-a0fa6d352b", ContainerID:"aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c", Pod:"coredns-674b8bbfcf-gqv9d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcfbc4fa442", MAC:"7e:aa:0a:c9:6d:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:30:48.330115 containerd[1542]: 2025-06-21 05:30:48.292 [INFO][4537] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqv9d" WorkloadEndpoint="ci--4372.0.0--0--a0fa6d352b-k8s-coredns--674b8bbfcf--gqv9d-eth0" Jun 21 05:30:48.454664 containerd[1542]: time="2025-06-21T05:30:48.454601725Z" level=info msg="connecting to shim aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c" address="unix:///run/containerd/s/a0c21b35256b008b77236efdb3465c826586c2e6794f2a64fe344f4bd30a7070" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:30:48.484005 kubelet[2705]: I0621 05:30:48.483789 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:30:48.595219 systemd[1]: Started cri-containerd-aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c.scope - libcontainer container aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c. 
Jun 21 05:30:48.674933 containerd[1542]: time="2025-06-21T05:30:48.674876219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qvskp,Uid:0d57dcbc-26a6-4a6a-877e-2663d2596744,Namespace:calico-system,Attempt:0,} returns sandbox id \"745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8\"" Jun 21 05:30:48.830985 containerd[1542]: time="2025-06-21T05:30:48.830706145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gqv9d,Uid:bc707386-a573-4931-82ab-55786356b796,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c\"" Jun 21 05:30:48.834525 kubelet[2705]: E0621 05:30:48.834466 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:48.858174 containerd[1542]: time="2025-06-21T05:30:48.857807853Z" level=info msg="CreateContainer within sandbox \"aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 05:30:48.881265 containerd[1542]: time="2025-06-21T05:30:48.881077907Z" level=info msg="Container 9781dd8e0acf0c55f16eec3cef03c1b73276c76fd09610ab2c469b6ae60ffac4: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:48.896190 containerd[1542]: time="2025-06-21T05:30:48.894321239Z" level=info msg="CreateContainer within sandbox \"aa575ddf1acbf8db631b285bd9a15b5e238aabcf0b85baecd7e75957ee0daa7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9781dd8e0acf0c55f16eec3cef03c1b73276c76fd09610ab2c469b6ae60ffac4\"" Jun 21 05:30:48.899352 containerd[1542]: time="2025-06-21T05:30:48.898947762Z" level=info msg="StartContainer for \"9781dd8e0acf0c55f16eec3cef03c1b73276c76fd09610ab2c469b6ae60ffac4\"" Jun 21 05:30:48.904949 containerd[1542]: time="2025-06-21T05:30:48.904890953Z" level=info msg="connecting to shim 9781dd8e0acf0c55f16eec3cef03c1b73276c76fd09610ab2c469b6ae60ffac4" address="unix:///run/containerd/s/a0c21b35256b008b77236efdb3465c826586c2e6794f2a64fe344f4bd30a7070" protocol=ttrpc version=3 Jun 21 05:30:48.991535 systemd[1]: Started cri-containerd-9781dd8e0acf0c55f16eec3cef03c1b73276c76fd09610ab2c469b6ae60ffac4.scope - libcontainer container 9781dd8e0acf0c55f16eec3cef03c1b73276c76fd09610ab2c469b6ae60ffac4. 
Jun 21 05:30:49.049504 kubelet[2705]: E0621 05:30:49.049465 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:49.194362 containerd[1542]: time="2025-06-21T05:30:49.193808886Z" level=info msg="StartContainer for \"9781dd8e0acf0c55f16eec3cef03c1b73276c76fd09610ab2c469b6ae60ffac4\" returns successfully" Jun 21 05:30:49.322326 systemd-networkd[1456]: cali9a7a2d5dfd4: Gained IPv6LL Jun 21 05:30:49.461230 containerd[1542]: time="2025-06-21T05:30:49.460788115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bdcacf2826d5024d86afc87abc588d11bd16aa7485ecf29f9f29ed5f2facef1\" id:\"1a9d5e641663c6e8a6f8b8142a4f188648cb30f54bee9c9ba7a77b295be700ab\" pid:4706 exited_at:{seconds:1750483849 nanos:407613107}" Jun 21 05:30:49.852634 containerd[1542]: time="2025-06-21T05:30:49.852484796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bdcacf2826d5024d86afc87abc588d11bd16aa7485ecf29f9f29ed5f2facef1\" id:\"18b1e46a399b12dab8dd7b35092ab23fdfcf56c2638e491601a694efc6cb2f50\" pid:4751 exited_at:{seconds:1750483849 nanos:850786019}" Jun 21 05:30:49.905889 containerd[1542]: time="2025-06-21T05:30:49.904708075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:49.905889 containerd[1542]: time="2025-06-21T05:30:49.905810915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 21 05:30:49.907248 containerd[1542]: time="2025-06-21T05:30:49.907190870Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:49.912444 containerd[1542]: time="2025-06-21T05:30:49.912286415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:49.913320 containerd[1542]: time="2025-06-21T05:30:49.912839207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 4.956682645s" Jun 21 05:30:49.913320 containerd[1542]: time="2025-06-21T05:30:49.912878135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 21 05:30:49.927657 containerd[1542]: time="2025-06-21T05:30:49.926734580Z" level=info msg="CreateContainer within sandbox \"b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 05:30:49.936869 containerd[1542]: time="2025-06-21T05:30:49.935749319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 21 05:30:49.937363 containerd[1542]: time="2025-06-21T05:30:49.936938684Z" level=info msg="Container 4e5e102dfb6ffb344fff8edef41de547a86c172e8ed2658ba7af06380e4ec334: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:49.944110 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2611215320.mount: Deactivated successfully. Jun 21 05:30:49.959613 containerd[1542]: time="2025-06-21T05:30:49.959325478Z" level=info msg="CreateContainer within sandbox \"b8f96288fbd03edf3db4d6d5fa470d9b4576328bf237cc0a3b92e6813bbd9210\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4e5e102dfb6ffb344fff8edef41de547a86c172e8ed2658ba7af06380e4ec334\"" Jun 21 05:30:49.960739 containerd[1542]: time="2025-06-21T05:30:49.960693996Z" level=info msg="StartContainer for \"4e5e102dfb6ffb344fff8edef41de547a86c172e8ed2658ba7af06380e4ec334\"" Jun 21 05:30:49.963074 containerd[1542]: time="2025-06-21T05:30:49.963019592Z" level=info msg="connecting to shim 4e5e102dfb6ffb344fff8edef41de547a86c172e8ed2658ba7af06380e4ec334" address="unix:///run/containerd/s/a3463cef11f8411b97f7eaa1fba6a31daf9b92b870645b2304a3c58cb74a14d3" protocol=ttrpc version=3 Jun 21 05:30:50.021976 systemd[1]: Started cri-containerd-4e5e102dfb6ffb344fff8edef41de547a86c172e8ed2658ba7af06380e4ec334.scope - libcontainer container 4e5e102dfb6ffb344fff8edef41de547a86c172e8ed2658ba7af06380e4ec334. Jun 21 05:30:50.065046 kubelet[2705]: E0621 05:30:50.064604 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:50.090933 kubelet[2705]: I0621 05:30:50.090849 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gqv9d" podStartSLOduration=44.090822767 podStartE2EDuration="44.090822767s" podCreationTimestamp="2025-06-21 05:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:30:50.090452435 +0000 UTC m=+48.822250274" watchObservedRunningTime="2025-06-21 05:30:50.090822767 +0000 UTC m=+48.822620946" Jun 21 05:30:50.218697 systemd-networkd[1456]: calibcfbc4fa442: Gained IPv6LL Jun 21 05:30:50.234302 containerd[1542]: time="2025-06-21T05:30:50.234212407Z" level=info msg="StartContainer for \"4e5e102dfb6ffb344fff8edef41de547a86c172e8ed2658ba7af06380e4ec334\" returns successfully" Jun 21 05:30:51.074695 kubelet[2705]: E0621 05:30:51.074294 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:51.151327 kubelet[2705]: I0621 05:30:51.151218 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d864bcf8d-rsvrn" podStartSLOduration=28.177646155 podStartE2EDuration="34.151188676s" podCreationTimestamp="2025-06-21 05:30:17 +0000 UTC" firstStartedPulling="2025-06-21 05:30:43.942236215 +0000 UTC m=+42.674034032" lastFinishedPulling="2025-06-21 05:30:49.915778739 +0000 UTC m=+48.647576553" observedRunningTime="2025-06-21 05:30:51.107837293 +0000 UTC m=+49.839635119" watchObservedRunningTime="2025-06-21 05:30:51.151188676 +0000 UTC m=+49.882986510" Jun 21 05:30:52.085223 kubelet[2705]: E0621 05:30:52.083798 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:52.086825 kubelet[2705]: I0621 05:30:52.086793 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:30:53.100037 kubelet[2705]: E0621 
05:30:53.099573 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:30:54.101168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount952300695.mount: Deactivated successfully. Jun 21 05:30:55.421100 containerd[1542]: time="2025-06-21T05:30:55.420945341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:55.424894 containerd[1542]: time="2025-06-21T05:30:55.424731835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 21 05:30:55.426183 containerd[1542]: time="2025-06-21T05:30:55.424852084Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:55.428244 containerd[1542]: time="2025-06-21T05:30:55.428199378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:30:55.430885 containerd[1542]: time="2025-06-21T05:30:55.430815450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 5.493576366s" Jun 21 05:30:55.431507 containerd[1542]: time="2025-06-21T05:30:55.431478974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 21 05:30:55.436826 containerd[1542]: time="2025-06-21T05:30:55.436785747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 21 05:30:55.441221 containerd[1542]: time="2025-06-21T05:30:55.440659818Z" level=info msg="CreateContainer within sandbox \"64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 21 05:30:55.464665 containerd[1542]: time="2025-06-21T05:30:55.464485574Z" level=info msg="Container 29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:30:55.477952 containerd[1542]: time="2025-06-21T05:30:55.477852466Z" level=info msg="CreateContainer within sandbox \"64bc357ea9cc3755f137bcedc28f720c5b7d6bad5d58ac2c450168905cb98abf\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012\"" Jun 21 05:30:55.479857 containerd[1542]: time="2025-06-21T05:30:55.479796063Z" level=info msg="StartContainer for \"29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012\"" Jun 21 05:30:55.485156 containerd[1542]: time="2025-06-21T05:30:55.484678620Z" level=info msg="connecting to shim 29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012" address="unix:///run/containerd/s/ccb5ad7301161b2dd64bf93a9a05c9c3571275baf256878d3481be4e11eb5f7f" protocol=ttrpc version=3 Jun 21 05:30:55.596352 systemd[1]: Started 
cri-containerd-29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012.scope - libcontainer container 29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012. Jun 21 05:30:55.737170 containerd[1542]: time="2025-06-21T05:30:55.736400640Z" level=info msg="StartContainer for \"29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012\" returns successfully" Jun 21 05:30:56.612768 containerd[1542]: time="2025-06-21T05:30:56.612701053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012\" id:\"aa461919d6a16ed7ecf905d2b848fca85cabb74ec567df1359c32665dd4d4546\" pid:4877 exit_status:1 exited_at:{seconds:1750483856 nanos:607039737}" Jun 21 05:30:57.344721 systemd[1]: Started sshd@7-164.92.73.218:22-139.178.68.195:53864.service - OpenSSH per-connection server daemon (139.178.68.195:53864). Jun 21 05:30:57.599712 containerd[1542]: time="2025-06-21T05:30:57.599438835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012\" id:\"fc64f5cb4cb2274a138b489c566483f879d33b19781ba8bd05b09fb1bbecbf13\" pid:4911 exit_status:1 exited_at:{seconds:1750483857 nanos:598468681}" Jun 21 05:30:57.691851 sshd[4918]: Accepted publickey for core from 139.178.68.195 port 53864 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:30:57.710226 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:30:57.740255 systemd-logind[1514]: New session 8 of user core. Jun 21 05:30:57.745481 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 21 05:30:58.879085 sshd[4924]: Connection closed by 139.178.68.195 port 53864 Jun 21 05:30:58.879784 sshd-session[4918]: pam_unix(sshd:session): session closed for user core Jun 21 05:30:58.901001 systemd[1]: sshd@7-164.92.73.218:22-139.178.68.195:53864.service: Deactivated successfully. Jun 21 05:30:58.901307 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit. Jun 21 05:30:58.911715 systemd[1]: session-8.scope: Deactivated successfully. Jun 21 05:30:58.922326 systemd-logind[1514]: Removed session 8. Jun 21 05:30:59.193158 containerd[1542]: time="2025-06-21T05:30:59.193014147Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012\" id:\"5aa8127e32c6cee788288b84ba810f6e040628ae9a4e562861eb0c9113a9994d\" pid:4944 exit_status:1 exited_at:{seconds:1750483859 nanos:183051320}" Jun 21 05:30:59.958639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160303738.mount: Deactivated successfully. 
Jun 21 05:31:00.042945 containerd[1542]: time="2025-06-21T05:31:00.042857063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:00.045884 containerd[1542]: time="2025-06-21T05:31:00.045722953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 21 05:31:00.059508 containerd[1542]: time="2025-06-21T05:31:00.059429076Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:00.068363 containerd[1542]: time="2025-06-21T05:31:00.068285643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:00.072706 containerd[1542]: time="2025-06-21T05:31:00.072627749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 4.634733004s" Jun 21 05:31:00.072706 containerd[1542]: time="2025-06-21T05:31:00.072693238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 21 05:31:00.076086 containerd[1542]: time="2025-06-21T05:31:00.074587811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 05:31:00.085459 containerd[1542]: time="2025-06-21T05:31:00.085397552Z" level=info msg="CreateContainer within sandbox \"392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 21 05:31:00.182769 containerd[1542]: time="2025-06-21T05:31:00.182670941Z" level=info msg="Container b1c2186b61f75a59037fa73c75181bf969ce9ce6ae6b679b5d750aaf49ef2c69: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:31:00.239794 containerd[1542]: time="2025-06-21T05:31:00.237235959Z" level=info msg="CreateContainer within sandbox \"392b24be81a33bbea48d7f5c30cea2d1bf258fdf7101d49870f9032ffbb3b2e0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b1c2186b61f75a59037fa73c75181bf969ce9ce6ae6b679b5d750aaf49ef2c69\"" Jun 21 05:31:00.247473 containerd[1542]: time="2025-06-21T05:31:00.247398860Z" level=info msg="StartContainer for \"b1c2186b61f75a59037fa73c75181bf969ce9ce6ae6b679b5d750aaf49ef2c69\"" Jun 21 05:31:00.262141 containerd[1542]: time="2025-06-21T05:31:00.261372251Z" level=info msg="connecting to shim b1c2186b61f75a59037fa73c75181bf969ce9ce6ae6b679b5d750aaf49ef2c69" address="unix:///run/containerd/s/107fe9eed804086b83d44e279af10ba8e4e87410da648d8ec2d9eabd08b03235" protocol=ttrpc version=3 Jun 21 05:31:00.339661 systemd[1]: Started cri-containerd-b1c2186b61f75a59037fa73c75181bf969ce9ce6ae6b679b5d750aaf49ef2c69.scope - libcontainer container b1c2186b61f75a59037fa73c75181bf969ce9ce6ae6b679b5d750aaf49ef2c69. 
Jun 21 05:31:00.558104 containerd[1542]: time="2025-06-21T05:31:00.557397510Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:00.559452 containerd[1542]: time="2025-06-21T05:31:00.559380165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 21 05:31:00.567145 containerd[1542]: time="2025-06-21T05:31:00.567022427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 492.370758ms" Jun 21 05:31:00.567882 containerd[1542]: time="2025-06-21T05:31:00.567105066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 21 05:31:00.571233 containerd[1542]: time="2025-06-21T05:31:00.570361344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 21 05:31:00.576422 containerd[1542]: time="2025-06-21T05:31:00.576369968Z" level=info msg="CreateContainer within sandbox \"772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 05:31:00.598196 containerd[1542]: time="2025-06-21T05:31:00.596278670Z" level=info msg="Container c60dad8741d4fc2629f77bc6e82b432fc038cf562be34819fdc4c87e1f7feb71: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:31:00.612538 containerd[1542]: time="2025-06-21T05:31:00.612464838Z" level=info msg="CreateContainer within sandbox \"772fcd7bc1d321b7927fbeb95de6b3594c3bce20793f76026c8d9d3ef12c5c61\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c60dad8741d4fc2629f77bc6e82b432fc038cf562be34819fdc4c87e1f7feb71\"" Jun 21 05:31:00.614718 containerd[1542]: time="2025-06-21T05:31:00.614658374Z" level=info msg="StartContainer for \"c60dad8741d4fc2629f77bc6e82b432fc038cf562be34819fdc4c87e1f7feb71\"" Jun 21 05:31:00.618281 containerd[1542]: time="2025-06-21T05:31:00.618224297Z" level=info msg="connecting to shim c60dad8741d4fc2629f77bc6e82b432fc038cf562be34819fdc4c87e1f7feb71" address="unix:///run/containerd/s/0857d3245c223e9dc70473dbc735c2e27c9f33a3f1e5b6b543245eb285def396" protocol=ttrpc version=3 Jun 21 05:31:00.731510 systemd[1]: Started cri-containerd-c60dad8741d4fc2629f77bc6e82b432fc038cf562be34819fdc4c87e1f7feb71.scope - libcontainer container c60dad8741d4fc2629f77bc6e82b432fc038cf562be34819fdc4c87e1f7feb71. 
Jun 21 05:31:00.819722 containerd[1542]: time="2025-06-21T05:31:00.818843669Z" level=info msg="StartContainer for \"b1c2186b61f75a59037fa73c75181bf969ce9ce6ae6b679b5d750aaf49ef2c69\" returns successfully" Jun 21 05:31:01.062313 containerd[1542]: time="2025-06-21T05:31:01.062248293Z" level=info msg="StartContainer for \"c60dad8741d4fc2629f77bc6e82b432fc038cf562be34819fdc4c87e1f7feb71\" returns successfully" Jun 21 05:31:01.340351 kubelet[2705]: I0621 05:31:01.335082 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-5jcrs" podStartSLOduration=31.102353896 podStartE2EDuration="41.335024848s" podCreationTimestamp="2025-06-21 05:30:20 +0000 UTC" firstStartedPulling="2025-06-21 05:30:45.202251629 +0000 UTC m=+43.934049443" lastFinishedPulling="2025-06-21 05:30:55.434922582 +0000 UTC m=+54.166720395" observedRunningTime="2025-06-21 05:30:56.255666549 +0000 UTC m=+54.987464370" watchObservedRunningTime="2025-06-21 05:31:01.335024848 +0000 UTC m=+60.066822670" Jun 21 05:31:01.341842 kubelet[2705]: I0621 05:31:01.340445 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7cb94d654c-8vt8f" podStartSLOduration=3.360842051 podStartE2EDuration="20.340403957s" podCreationTimestamp="2025-06-21 05:30:41 +0000 UTC" firstStartedPulling="2025-06-21 05:30:43.094326722 +0000 UTC m=+41.826124523" lastFinishedPulling="2025-06-21 05:31:00.073888611 +0000 UTC m=+58.805686429" observedRunningTime="2025-06-21 05:31:01.332342916 +0000 UTC m=+60.064140753" watchObservedRunningTime="2025-06-21 05:31:01.340403957 +0000 UTC m=+60.072201782" Jun 21 05:31:02.393699 kubelet[2705]: I0621 05:31:02.393351 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:31:02.624438 kubelet[2705]: I0621 05:31:02.624351 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d864bcf8d-25t7g" podStartSLOduration=31.660171462 podStartE2EDuration="45.624319967s" podCreationTimestamp="2025-06-21 05:30:17 +0000 UTC" firstStartedPulling="2025-06-21 05:30:46.60521939 +0000 UTC m=+45.337017207" lastFinishedPulling="2025-06-21 05:31:00.569367892 +0000 UTC m=+59.301165712" observedRunningTime="2025-06-21 05:31:01.39298117 +0000 UTC m=+60.124778996" watchObservedRunningTime="2025-06-21 05:31:02.624319967 +0000 UTC m=+61.356117791" Jun 21 05:31:03.911763 systemd[1]: Started sshd@8-164.92.73.218:22-139.178.68.195:58494.service - OpenSSH per-connection server daemon (139.178.68.195:58494). Jun 21 05:31:04.299565 sshd[5053]: Accepted publickey for core from 139.178.68.195 port 58494 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:04.311414 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:04.330997 systemd-logind[1514]: New session 9 of user core. Jun 21 05:31:04.333788 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 21 05:31:05.321585 sshd[5056]: Connection closed by 139.178.68.195 port 58494 Jun 21 05:31:05.322587 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:05.332701 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit. Jun 21 05:31:05.337960 systemd[1]: sshd@8-164.92.73.218:22-139.178.68.195:58494.service: Deactivated successfully. Jun 21 05:31:05.346607 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 05:31:05.353101 systemd-logind[1514]: Removed session 9. 
Jun 21 05:31:06.032773 containerd[1542]: time="2025-06-21T05:31:06.032696908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:06.038166 containerd[1542]: time="2025-06-21T05:31:06.038071879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 21 05:31:06.040488 containerd[1542]: time="2025-06-21T05:31:06.040426142Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:06.048150 containerd[1542]: time="2025-06-21T05:31:06.047744592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:06.049978 containerd[1542]: time="2025-06-21T05:31:06.049932653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 5.479503496s" Jun 21 05:31:06.050186 containerd[1542]: time="2025-06-21T05:31:06.050168824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 21 05:31:06.107102 containerd[1542]: time="2025-06-21T05:31:06.106788911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 21 05:31:06.439503 containerd[1542]: time="2025-06-21T05:31:06.439447871Z" level=info msg="CreateContainer within sandbox \"0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 21 05:31:06.450568 containerd[1542]: time="2025-06-21T05:31:06.450512722Z" level=info msg="Container 05d6b06d1a901a40562a0512deaa8acbb91c6a667368071784261eeb4e7a9fca: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:31:06.465337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645821518.mount: Deactivated successfully. Jun 21 05:31:06.502819 containerd[1542]: time="2025-06-21T05:31:06.502372281Z" level=info msg="CreateContainer within sandbox \"0809e12b4ac4d576269d032852c5ece81e413cbad7ce9a90a285ef8863a82fdb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"05d6b06d1a901a40562a0512deaa8acbb91c6a667368071784261eeb4e7a9fca\"" Jun 21 05:31:06.509558 containerd[1542]: time="2025-06-21T05:31:06.509505328Z" level=info msg="StartContainer for \"05d6b06d1a901a40562a0512deaa8acbb91c6a667368071784261eeb4e7a9fca\"" Jun 21 05:31:06.536872 containerd[1542]: time="2025-06-21T05:31:06.536756611Z" level=info msg="connecting to shim 05d6b06d1a901a40562a0512deaa8acbb91c6a667368071784261eeb4e7a9fca" address="unix:///run/containerd/s/5449dbbc27ba9c20c40168a47621c274a8f9a1c35a09bf5dd52d54ad964a2d7d" protocol=ttrpc version=3 Jun 21 05:31:06.589378 systemd[1]: Started cri-containerd-05d6b06d1a901a40562a0512deaa8acbb91c6a667368071784261eeb4e7a9fca.scope - libcontainer container 05d6b06d1a901a40562a0512deaa8acbb91c6a667368071784261eeb4e7a9fca. 
Jun 21 05:31:06.864275 containerd[1542]: time="2025-06-21T05:31:06.863779064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012\" id:\"3f383b4ded398c0646c76a0203ac1b7f281e5b118533fbcc1f369c3a8293f3e6\" pid:5077 exited_at:{seconds:1750483866 nanos:778398431}" Jun 21 05:31:07.060294 containerd[1542]: time="2025-06-21T05:31:07.060243874Z" level=info msg="StartContainer for \"05d6b06d1a901a40562a0512deaa8acbb91c6a667368071784261eeb4e7a9fca\" returns successfully" Jun 21 05:31:07.647478 kubelet[2705]: I0621 05:31:07.647272 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84cc887c7f-r77kt" podStartSLOduration=27.352506652 podStartE2EDuration="46.635356234s" podCreationTimestamp="2025-06-21 05:30:21 +0000 UTC" firstStartedPulling="2025-06-21 05:30:46.823576848 +0000 UTC m=+45.555374664" lastFinishedPulling="2025-06-21 05:31:06.106426426 +0000 UTC m=+64.838224246" observedRunningTime="2025-06-21 05:31:07.613115291 +0000 UTC m=+66.344913113" watchObservedRunningTime="2025-06-21 05:31:07.635356234 +0000 UTC m=+66.367154057" Jun 21 05:31:07.744683 containerd[1542]: time="2025-06-21T05:31:07.744539571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05d6b06d1a901a40562a0512deaa8acbb91c6a667368071784261eeb4e7a9fca\" id:\"8930315c8b4f9123cf915851de4a13c78f43871974f63a5426533cf945aeaa62\" pid:5159 exited_at:{seconds:1750483867 nanos:743316888}" Jun 21 05:31:08.007242 containerd[1542]: time="2025-06-21T05:31:08.007176487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:08.008759 containerd[1542]: time="2025-06-21T05:31:08.008718040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 21 05:31:08.011412 containerd[1542]: time="2025-06-21T05:31:08.011368392Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:08.014158 containerd[1542]: time="2025-06-21T05:31:08.014046344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:08.015886 containerd[1542]: time="2025-06-21T05:31:08.015770668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 1.90838606s" Jun 21 05:31:08.015886 containerd[1542]: time="2025-06-21T05:31:08.015825767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" Jun 21 05:31:08.045730 containerd[1542]: time="2025-06-21T05:31:08.045649973Z" level=info msg="CreateContainer within sandbox \"745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 21 05:31:08.084178 containerd[1542]: time="2025-06-21T05:31:08.081344176Z" level=info msg="Container 
78d7700a2c3c75759d163a163b5d6ca40f5e3ea9d4b1d50f6635a0b2f22aea7f: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:31:08.128376 containerd[1542]: time="2025-06-21T05:31:08.128320170Z" level=info msg="CreateContainer within sandbox \"745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"78d7700a2c3c75759d163a163b5d6ca40f5e3ea9d4b1d50f6635a0b2f22aea7f\"" Jun 21 05:31:08.129341 containerd[1542]: time="2025-06-21T05:31:08.129297091Z" level=info msg="StartContainer for \"78d7700a2c3c75759d163a163b5d6ca40f5e3ea9d4b1d50f6635a0b2f22aea7f\"" Jun 21 05:31:08.132981 containerd[1542]: time="2025-06-21T05:31:08.132935096Z" level=info msg="connecting to shim 78d7700a2c3c75759d163a163b5d6ca40f5e3ea9d4b1d50f6635a0b2f22aea7f" address="unix:///run/containerd/s/60a7b906c912c3d3b160b1e8c1774b446bd0f1a38d2ae259eb1d4a05a3964398" protocol=ttrpc version=3 Jun 21 05:31:08.179626 systemd[1]: Started cri-containerd-78d7700a2c3c75759d163a163b5d6ca40f5e3ea9d4b1d50f6635a0b2f22aea7f.scope - libcontainer container 78d7700a2c3c75759d163a163b5d6ca40f5e3ea9d4b1d50f6635a0b2f22aea7f. Jun 21 05:31:08.352293 containerd[1542]: time="2025-06-21T05:31:08.351334387Z" level=info msg="StartContainer for \"78d7700a2c3c75759d163a163b5d6ca40f5e3ea9d4b1d50f6635a0b2f22aea7f\" returns successfully" Jun 21 05:31:08.383384 containerd[1542]: time="2025-06-21T05:31:08.383305295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 21 05:31:10.350945 systemd[1]: Started sshd@9-164.92.73.218:22-139.178.68.195:58510.service - OpenSSH per-connection server daemon (139.178.68.195:58510). Jun 21 05:31:10.600511 sshd[5211]: Accepted publickey for core from 139.178.68.195 port 58510 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:10.607431 sshd-session[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:10.623252 systemd-logind[1514]: New session 10 of user core. Jun 21 05:31:10.629412 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 21 05:31:10.859260 containerd[1542]: time="2025-06-21T05:31:10.858937534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:10.864292 containerd[1542]: time="2025-06-21T05:31:10.862658618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633" Jun 21 05:31:10.865903 containerd[1542]: time="2025-06-21T05:31:10.865844639Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:10.876825 containerd[1542]: time="2025-06-21T05:31:10.875310698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:31:10.876825 containerd[1542]: time="2025-06-21T05:31:10.875966318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 2.492610969s" Jun 21 05:31:10.876825 containerd[1542]: time="2025-06-21T05:31:10.876001987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\"" Jun 21 05:31:10.888174 containerd[1542]: time="2025-06-21T05:31:10.887599441Z" level=info msg="CreateContainer within sandbox \"745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 21 05:31:10.906487 containerd[1542]: time="2025-06-21T05:31:10.906442066Z" level=info msg="Container 2fd108315a6ec369a8b55d4a5c163ab88ef96c09c8d20feb309f5322aeec897c: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:31:10.956641 containerd[1542]: time="2025-06-21T05:31:10.952103120Z" level=info msg="CreateContainer within sandbox \"745118e61c03086fc4456ebbee326ea7a1a09b4cfd26e4732ae783c0ab8f66f8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2fd108315a6ec369a8b55d4a5c163ab88ef96c09c8d20feb309f5322aeec897c\"" Jun 21 05:31:10.960921 containerd[1542]: time="2025-06-21T05:31:10.960408379Z" level=info msg="StartContainer for \"2fd108315a6ec369a8b55d4a5c163ab88ef96c09c8d20feb309f5322aeec897c\"" Jun 21 05:31:10.968658 containerd[1542]: time="2025-06-21T05:31:10.968531007Z" level=info msg="connecting to shim 2fd108315a6ec369a8b55d4a5c163ab88ef96c09c8d20feb309f5322aeec897c" address="unix:///run/containerd/s/60a7b906c912c3d3b160b1e8c1774b446bd0f1a38d2ae259eb1d4a05a3964398" protocol=ttrpc version=3 Jun 21 05:31:11.143551 systemd[1]: Started cri-containerd-2fd108315a6ec369a8b55d4a5c163ab88ef96c09c8d20feb309f5322aeec897c.scope - libcontainer container 2fd108315a6ec369a8b55d4a5c163ab88ef96c09c8d20feb309f5322aeec897c. 
Jun 21 05:31:11.763017 sshd[5214]: Connection closed by 139.178.68.195 port 58510 Jun 21 05:31:11.762871 sshd-session[5211]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:11.787842 systemd[1]: sshd@9-164.92.73.218:22-139.178.68.195:58510.service: Deactivated successfully. Jun 21 05:31:11.796409 systemd[1]: session-10.scope: Deactivated successfully. Jun 21 05:31:11.818254 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit. Jun 21 05:31:11.826359 systemd[1]: Started sshd@10-164.92.73.218:22-139.178.68.195:58526.service - OpenSSH per-connection server daemon (139.178.68.195:58526). Jun 21 05:31:11.834945 systemd-logind[1514]: Removed session 10. Jun 21 05:31:11.903312 containerd[1542]: time="2025-06-21T05:31:11.903219007Z" level=info msg="StartContainer for \"2fd108315a6ec369a8b55d4a5c163ab88ef96c09c8d20feb309f5322aeec897c\" returns successfully" Jun 21 05:31:11.969787 sshd[5255]: Accepted publickey for core from 139.178.68.195 port 58526 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:11.976518 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:11.987819 systemd-logind[1514]: New session 11 of user core. Jun 21 05:31:11.993699 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 21 05:31:12.394187 sshd[5263]: Connection closed by 139.178.68.195 port 58526 Jun 21 05:31:12.393909 sshd-session[5255]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:12.408522 systemd[1]: sshd@10-164.92.73.218:22-139.178.68.195:58526.service: Deactivated successfully. Jun 21 05:31:12.413882 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 05:31:12.416540 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit. Jun 21 05:31:12.428037 systemd[1]: Started sshd@11-164.92.73.218:22-139.178.68.195:58534.service - OpenSSH per-connection server daemon (139.178.68.195:58534). Jun 21 05:31:12.432226 systemd-logind[1514]: Removed session 11. Jun 21 05:31:12.548002 sshd[5273]: Accepted publickey for core from 139.178.68.195 port 58534 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:12.553301 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:12.566318 systemd-logind[1514]: New session 12 of user core. Jun 21 05:31:12.570395 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 21 05:31:12.845823 sshd[5275]: Connection closed by 139.178.68.195 port 58534 Jun 21 05:31:12.848947 sshd-session[5273]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:12.858763 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit. Jun 21 05:31:12.858870 systemd[1]: sshd@11-164.92.73.218:22-139.178.68.195:58534.service: Deactivated successfully. Jun 21 05:31:12.865026 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 05:31:12.873341 systemd-logind[1514]: Removed session 12. 
Jun 21 05:31:13.060862 kubelet[2705]: I0621 05:31:13.060808 2705 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 21 05:31:13.061827 kubelet[2705]: I0621 05:31:13.061485 2705 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 21 05:31:16.579153 kubelet[2705]: E0621 05:31:16.578022 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:31:17.866634 systemd[1]: Started sshd@12-164.92.73.218:22-139.178.68.195:43742.service - OpenSSH per-connection server daemon (139.178.68.195:43742). Jun 21 05:31:18.040024 sshd[5295]: Accepted publickey for core from 139.178.68.195 port 43742 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:18.043355 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:18.050197 systemd-logind[1514]: New session 13 of user core. Jun 21 05:31:18.057846 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 21 05:31:18.364247 sshd[5297]: Connection closed by 139.178.68.195 port 43742 Jun 21 05:31:18.364596 sshd-session[5295]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:18.373837 systemd[1]: sshd@12-164.92.73.218:22-139.178.68.195:43742.service: Deactivated successfully. Jun 21 05:31:18.379006 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 05:31:18.381277 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit. Jun 21 05:31:18.383694 systemd-logind[1514]: Removed session 13. Jun 21 05:31:19.569171 kubelet[2705]: E0621 05:31:19.569056 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:31:20.506387 containerd[1542]: time="2025-06-21T05:31:20.506314482Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bdcacf2826d5024d86afc87abc588d11bd16aa7485ecf29f9f29ed5f2facef1\" id:\"3b0b4aa42ee272af6f459b72f86ed6e0b8bce7bc41934b1ce83e1c0ea902fc08\" pid:5320 exited_at:{seconds:1750483880 nanos:470744604}" Jun 21 05:31:20.565349 kubelet[2705]: E0621 05:31:20.564189 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:31:23.388383 systemd[1]: Started sshd@13-164.92.73.218:22-139.178.68.195:43746.service - OpenSSH per-connection server daemon (139.178.68.195:43746). Jun 21 05:31:23.584286 sshd[5333]: Accepted publickey for core from 139.178.68.195 port 43746 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:23.591540 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:23.607716 systemd-logind[1514]: New session 14 of user core. Jun 21 05:31:23.613418 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 21 05:31:24.165562 sshd[5335]: Connection closed by 139.178.68.195 port 43746 Jun 21 05:31:24.166443 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:24.181604 systemd[1]: sshd@13-164.92.73.218:22-139.178.68.195:43746.service: Deactivated successfully. Jun 21 05:31:24.190727 systemd[1]: session-14.scope: Deactivated successfully. Jun 21 05:31:24.196526 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit. Jun 21 05:31:24.199454 systemd-logind[1514]: Removed session 14. Jun 21 05:31:28.713804 containerd[1542]: time="2025-06-21T05:31:28.713752570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29df60d927987954d379b5b9e63e8982428657efcc9bb600def0bb03493b6012\" id:\"2f9f1d8f4dfc1da5c9b4ecc39b2e55bba34e16024fbb43d30ffea5354e34d5cb\" pid:5365 exited_at:{seconds:1750483888 nanos:713176020}" Jun 21 05:31:28.854769 kubelet[2705]: I0621 05:31:28.811613 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qvskp" podStartSLOduration=45.60813352 podStartE2EDuration="1m7.789042929s" podCreationTimestamp="2025-06-21 05:30:21 +0000 UTC" firstStartedPulling="2025-06-21 05:30:48.700587394 +0000 UTC m=+47.432385200" lastFinishedPulling="2025-06-21 05:31:10.881496808 +0000 UTC m=+69.613294609" observedRunningTime="2025-06-21 05:31:13.132446821 +0000 UTC m=+71.864244647" watchObservedRunningTime="2025-06-21 05:31:28.789042929 +0000 UTC m=+87.520840755" Jun 21 05:31:29.190562 systemd[1]: Started sshd@14-164.92.73.218:22-139.178.68.195:48654.service - OpenSSH per-connection server daemon (139.178.68.195:48654). Jun 21 05:31:29.329201 sshd[5380]: Accepted publickey for core from 139.178.68.195 port 48654 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:29.331989 sshd-session[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:29.346107 systemd-logind[1514]: New session 15 of user core. Jun 21 05:31:29.353225 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 21 05:31:29.796555 sshd[5382]: Connection closed by 139.178.68.195 port 48654 Jun 21 05:31:29.800628 sshd-session[5380]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:29.813717 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit. Jun 21 05:31:29.814673 systemd[1]: sshd@14-164.92.73.218:22-139.178.68.195:48654.service: Deactivated successfully. Jun 21 05:31:29.820303 systemd[1]: session-15.scope: Deactivated successfully. Jun 21 05:31:29.825797 systemd-logind[1514]: Removed session 15. Jun 21 05:31:31.409755 containerd[1542]: time="2025-06-21T05:31:31.409657451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05d6b06d1a901a40562a0512deaa8acbb91c6a667368071784261eeb4e7a9fca\" id:\"eca2cf6d23f1a6fe34d86db83b6dd15b71af192fccebb57ffa5c4f0d7a275b02\" pid:5405 exited_at:{seconds:1750483891 nanos:409098481}" Jun 21 05:31:34.563582 kubelet[2705]: E0621 05:31:34.563514 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 21 05:31:34.816712 systemd[1]: Started sshd@15-164.92.73.218:22-139.178.68.195:36064.service - OpenSSH per-connection server daemon (139.178.68.195:36064). 
Jun 21 05:31:34.952647 sshd[5416]: Accepted publickey for core from 139.178.68.195 port 36064 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:34.956487 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:34.966793 systemd-logind[1514]: New session 16 of user core. Jun 21 05:31:34.972457 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 21 05:31:35.348219 sshd[5418]: Connection closed by 139.178.68.195 port 36064 Jun 21 05:31:35.350670 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:35.362459 systemd[1]: sshd@15-164.92.73.218:22-139.178.68.195:36064.service: Deactivated successfully. Jun 21 05:31:35.367692 systemd[1]: session-16.scope: Deactivated successfully. Jun 21 05:31:35.370650 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit. Jun 21 05:31:35.381016 systemd[1]: Started sshd@16-164.92.73.218:22-139.178.68.195:36072.service - OpenSSH per-connection server daemon (139.178.68.195:36072). Jun 21 05:31:35.385047 systemd-logind[1514]: Removed session 16. Jun 21 05:31:35.470383 sshd[5429]: Accepted publickey for core from 139.178.68.195 port 36072 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:35.472695 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:35.483632 systemd-logind[1514]: New session 17 of user core. Jun 21 05:31:35.489392 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 21 05:31:35.869419 sshd[5431]: Connection closed by 139.178.68.195 port 36072 Jun 21 05:31:35.871847 sshd-session[5429]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:35.886551 systemd[1]: Started sshd@17-164.92.73.218:22-139.178.68.195:36078.service - OpenSSH per-connection server daemon (139.178.68.195:36078). Jun 21 05:31:35.888375 systemd[1]: sshd@16-164.92.73.218:22-139.178.68.195:36072.service: Deactivated successfully. Jun 21 05:31:35.892477 systemd[1]: session-17.scope: Deactivated successfully. Jun 21 05:31:35.901076 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit. Jun 21 05:31:35.906102 systemd-logind[1514]: Removed session 17. Jun 21 05:31:35.983082 sshd[5438]: Accepted publickey for core from 139.178.68.195 port 36078 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:35.986201 sshd-session[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:35.995318 systemd-logind[1514]: New session 18 of user core. Jun 21 05:31:36.000632 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 21 05:31:37.554071 sshd[5443]: Connection closed by 139.178.68.195 port 36078 Jun 21 05:31:37.558269 sshd-session[5438]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:37.579090 systemd[1]: sshd@17-164.92.73.218:22-139.178.68.195:36078.service: Deactivated successfully. Jun 21 05:31:37.588458 systemd[1]: session-18.scope: Deactivated successfully. Jun 21 05:31:37.596468 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit. Jun 21 05:31:37.610843 systemd[1]: Started sshd@18-164.92.73.218:22-139.178.68.195:36094.service - OpenSSH per-connection server daemon (139.178.68.195:36094). Jun 21 05:31:37.619584 systemd-logind[1514]: Removed session 18. 
Jun 21 05:31:37.742851 sshd[5479]: Accepted publickey for core from 139.178.68.195 port 36094 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:37.747239 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:37.759482 systemd-logind[1514]: New session 19 of user core. Jun 21 05:31:37.765480 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 21 05:31:37.805780 containerd[1542]: time="2025-06-21T05:31:37.805614539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05d6b06d1a901a40562a0512deaa8acbb91c6a667368071784261eeb4e7a9fca\" id:\"78865b0cb08b591e5f6f60399f2dfa25fcf07ea5da03e6413a358606fbfcbd13\" pid:5468 exited_at:{seconds:1750483897 nanos:805163133}" Jun 21 05:31:38.613172 sshd[5485]: Connection closed by 139.178.68.195 port 36094 Jun 21 05:31:38.614713 sshd-session[5479]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:38.634357 systemd[1]: sshd@18-164.92.73.218:22-139.178.68.195:36094.service: Deactivated successfully. Jun 21 05:31:38.643633 systemd[1]: session-19.scope: Deactivated successfully. Jun 21 05:31:38.645639 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit. Jun 21 05:31:38.655052 systemd[1]: Started sshd@19-164.92.73.218:22-139.178.68.195:36108.service - OpenSSH per-connection server daemon (139.178.68.195:36108). Jun 21 05:31:38.660975 systemd-logind[1514]: Removed session 19. Jun 21 05:31:38.794182 sshd[5499]: Accepted publickey for core from 139.178.68.195 port 36108 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:38.799057 sshd-session[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:38.812965 systemd-logind[1514]: New session 20 of user core. Jun 21 05:31:38.821629 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 21 05:31:39.104966 sshd[5501]: Connection closed by 139.178.68.195 port 36108 Jun 21 05:31:39.105643 sshd-session[5499]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:39.120472 systemd[1]: sshd@19-164.92.73.218:22-139.178.68.195:36108.service: Deactivated successfully. Jun 21 05:31:39.128343 systemd[1]: session-20.scope: Deactivated successfully. Jun 21 05:31:39.131086 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit. Jun 21 05:31:39.134096 systemd-logind[1514]: Removed session 20. Jun 21 05:31:44.127887 systemd[1]: Started sshd@20-164.92.73.218:22-139.178.68.195:39172.service - OpenSSH per-connection server daemon (139.178.68.195:39172). Jun 21 05:31:44.262406 sshd[5516]: Accepted publickey for core from 139.178.68.195 port 39172 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:44.273825 sshd-session[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:44.296196 systemd-logind[1514]: New session 21 of user core. Jun 21 05:31:44.300505 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 21 05:31:44.859435 sshd[5518]: Connection closed by 139.178.68.195 port 39172 Jun 21 05:31:44.862787 sshd-session[5516]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:44.879339 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit. Jun 21 05:31:44.879350 systemd[1]: sshd@20-164.92.73.218:22-139.178.68.195:39172.service: Deactivated successfully. Jun 21 05:31:44.884552 systemd[1]: session-21.scope: Deactivated successfully. 
Jun 21 05:31:44.891824 systemd-logind[1514]: Removed session 21. Jun 21 05:31:49.884202 systemd[1]: Started sshd@21-164.92.73.218:22-139.178.68.195:39178.service - OpenSSH per-connection server daemon (139.178.68.195:39178). Jun 21 05:31:50.081496 sshd[5553]: Accepted publickey for core from 139.178.68.195 port 39178 ssh2: RSA SHA256:esrwHbjCvD8R4I7sQRiHa5Rpu9l1igA0BMtQzkIUH4o Jun 21 05:31:50.085569 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:31:50.110233 systemd-logind[1514]: New session 22 of user core. Jun 21 05:31:50.118441 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 21 05:31:50.411296 containerd[1542]: time="2025-06-21T05:31:50.410299141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bdcacf2826d5024d86afc87abc588d11bd16aa7485ecf29f9f29ed5f2facef1\" id:\"c2bb33426049a872986bc35cf061703aa3639017f3340d16cc23af50730cba34\" pid:5542 exited_at:{seconds:1750483910 nanos:408362784}" Jun 21 05:31:51.026699 sshd[5556]: Connection closed by 139.178.68.195 port 39178 Jun 21 05:31:51.028556 sshd-session[5553]: pam_unix(sshd:session): session closed for user core Jun 21 05:31:51.035234 systemd[1]: sshd@21-164.92.73.218:22-139.178.68.195:39178.service: Deactivated successfully. Jun 21 05:31:51.038486 systemd[1]: session-22.scope: Deactivated successfully. Jun 21 05:31:51.042304 systemd-logind[1514]: Session 22 logged out. Waiting for processes to exit. Jun 21 05:31:51.045201 systemd-logind[1514]: Removed session 22.