Oct 13 05:49:08.881892 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Oct 12 22:37:12 -00 2025
Oct 13 05:49:08.881925 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:49:08.881936 kernel: BIOS-provided physical RAM map:
Oct 13 05:49:08.881943 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 13 05:49:08.881949 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 13 05:49:08.881956 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 13 05:49:08.881964 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct 13 05:49:08.882409 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct 13 05:49:08.882436 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 13 05:49:08.882443 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 13 05:49:08.882450 kernel: NX (Execute Disable) protection: active
Oct 13 05:49:08.882457 kernel: APIC: Static calls initialized
Oct 13 05:49:08.882464 kernel: SMBIOS 2.8 present.
Oct 13 05:49:08.882472 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Oct 13 05:49:08.882484 kernel: DMI: Memory slots populated: 1/1
Oct 13 05:49:08.882492 kernel: Hypervisor detected: KVM
Oct 13 05:49:08.882503 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 13 05:49:08.882511 kernel: kvm-clock: using sched offset of 4473434620 cycles
Oct 13 05:49:08.882520 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 13 05:49:08.882528 kernel: tsc: Detected 2494.134 MHz processor
Oct 13 05:49:08.882536 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 13 05:49:08.882544 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 13 05:49:08.882552 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct 13 05:49:08.882564 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 13 05:49:08.882572 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 13 05:49:08.882580 kernel: ACPI: Early table checksum verification disabled
Oct 13 05:49:08.882588 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Oct 13 05:49:08.882596 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:49:08.882604 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:49:08.882612 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:49:08.882620 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 13 05:49:08.882628 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:49:08.882654 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:49:08.882666 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:49:08.882677 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:49:08.882688 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Oct 13 05:49:08.882700 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Oct 13 05:49:08.882710 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 13 05:49:08.882718 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Oct 13 05:49:08.882727 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Oct 13 05:49:08.882744 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Oct 13 05:49:08.882752 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Oct 13 05:49:08.882761 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 13 05:49:08.882769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 13 05:49:08.882777 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Oct 13 05:49:08.882790 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Oct 13 05:49:08.882799 kernel: Zone ranges:
Oct 13 05:49:08.882807 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 05:49:08.882815 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdafff]
Oct 13 05:49:08.882823 kernel:   Normal   empty
Oct 13 05:49:08.882832 kernel:   Device   empty
Oct 13 05:49:08.882840 kernel: Movable zone start for each node
Oct 13 05:49:08.882848 kernel: Early memory node ranges
Oct 13 05:49:08.882857 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 13 05:49:08.882865 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct 13 05:49:08.882877 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Oct 13 05:49:08.882885 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:49:08.882893 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 13 05:49:08.882902 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Oct 13 05:49:08.882910 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 13 05:49:08.882918 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 13 05:49:08.882929 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 13 05:49:08.882938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 13 05:49:08.882949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 13 05:49:08.882961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 13 05:49:08.885007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 13 05:49:08.885052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 13 05:49:08.885062 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 05:49:08.885071 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 13 05:49:08.885080 kernel: TSC deadline timer available
Oct 13 05:49:08.885089 kernel: CPU topo: Max. logical packages:   1
Oct 13 05:49:08.885097 kernel: CPU topo: Max. logical dies:       1
Oct 13 05:49:08.885106 kernel: CPU topo: Max. dies per package:   1
Oct 13 05:49:08.885122 kernel: CPU topo: Max. threads per core:   1
Oct 13 05:49:08.885130 kernel: CPU topo: Num. cores per package:  2
Oct 13 05:49:08.885155 kernel: CPU topo: Num. threads per package: 2
Oct 13 05:49:08.885168 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Oct 13 05:49:08.885180 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 13 05:49:08.885193 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 13 05:49:08.885204 kernel: Booting paravirtualized kernel on KVM
Oct 13 05:49:08.885216 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 05:49:08.885225 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 13 05:49:08.885233 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Oct 13 05:49:08.885246 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Oct 13 05:49:08.885254 kernel: pcpu-alloc: [0] 0 1
Oct 13 05:49:08.885263 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 13 05:49:08.885274 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:49:08.885283 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 05:49:08.885291 kernel: random: crng init done
Oct 13 05:49:08.885300 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 05:49:08.885308 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 13 05:49:08.885320 kernel: Fallback order for Node 0: 0
Oct 13 05:49:08.885329 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 524153
Oct 13 05:49:08.885337 kernel: Policy zone: DMA32
Oct 13 05:49:08.885346 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 05:49:08.885354 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 13 05:49:08.885363 kernel: Kernel/User page tables isolation: enabled
Oct 13 05:49:08.885371 kernel: ftrace: allocating 40139 entries in 157 pages
Oct 13 05:49:08.885380 kernel: ftrace: allocated 157 pages with 5 groups
Oct 13 05:49:08.885388 kernel: Dynamic Preempt: voluntary
Oct 13 05:49:08.885400 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 05:49:08.885411 kernel: rcu: RCU event tracing is enabled.
Oct 13 05:49:08.885419 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 13 05:49:08.885428 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 05:49:08.885436 kernel: Rude variant of Tasks RCU enabled.
Oct 13 05:49:08.885445 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 05:49:08.885453 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 05:49:08.885461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 13 05:49:08.885471 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 05:49:08.885487 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 05:49:08.885507 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 05:49:08.885517 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 13 05:49:08.885525 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 05:49:08.885533 kernel: Console: colour VGA+ 80x25
Oct 13 05:49:08.885542 kernel: printk: legacy console [tty0] enabled
Oct 13 05:49:08.885550 kernel: printk: legacy console [ttyS0] enabled
Oct 13 05:49:08.885559 kernel: ACPI: Core revision 20240827
Oct 13 05:49:08.885567 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 13 05:49:08.885596 kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 05:49:08.885605 kernel: x2apic enabled
Oct 13 05:49:08.885614 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 13 05:49:08.885626 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 13 05:49:08.885638 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Oct 13 05:49:08.885647 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
Oct 13 05:49:08.885656 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 13 05:49:08.885665 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 13 05:49:08.885674 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 05:49:08.885688 kernel: Spectre V2 : Mitigation: Retpolines
Oct 13 05:49:08.885697 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 13 05:49:08.885706 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 13 05:49:08.885714 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 13 05:49:08.885724 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 13 05:49:08.885732 kernel: MDS: Mitigation: Clear CPU buffers
Oct 13 05:49:08.885741 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 13 05:49:08.885753 kernel: active return thunk: its_return_thunk
Oct 13 05:49:08.885762 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 13 05:49:08.885771 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 05:49:08.885780 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 05:49:08.885789 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 05:49:08.885798 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 13 05:49:08.885807 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 13 05:49:08.885815 kernel: Freeing SMP alternatives memory: 32K
Oct 13 05:49:08.885824 kernel: pid_max: default: 32768 minimum: 301
Oct 13 05:49:08.885837 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 05:49:08.885846 kernel: landlock: Up and running.
Oct 13 05:49:08.885855 kernel: SELinux:  Initializing.
Oct 13 05:49:08.885864 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 13 05:49:08.885873 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 13 05:49:08.885882 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Oct 13 05:49:08.885891 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Oct 13 05:49:08.885899 kernel: signal: max sigframe size: 1776
Oct 13 05:49:08.885908 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 05:49:08.885922 kernel: rcu: 	Max phase no-delay instances is 400.
Oct 13 05:49:08.885931 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 05:49:08.885940 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 13 05:49:08.885949 kernel: smp: Bringing up secondary CPUs ...
Oct 13 05:49:08.885959 kernel: smpboot: x86: Booting SMP configuration:
Oct 13 05:49:08.885997 kernel: .... node  #0, CPUs:      #1
Oct 13 05:49:08.886006 kernel: smp: Brought up 1 node, 2 CPUs
Oct 13 05:49:08.886015 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
Oct 13 05:49:08.886024 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2443K rwdata, 10000K rodata, 54096K init, 2852K bss, 125140K reserved, 0K cma-reserved)
Oct 13 05:49:08.886038 kernel: devtmpfs: initialized
Oct 13 05:49:08.886047 kernel: x86/mm: Memory block size: 128MB
Oct 13 05:49:08.886056 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 05:49:08.886064 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 13 05:49:08.886074 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 05:49:08.886082 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 05:49:08.886091 kernel: audit: initializing netlink subsys (disabled)
Oct 13 05:49:08.886100 kernel: audit: type=2000 audit(1760334544.872:1): state=initialized audit_enabled=0 res=1
Oct 13 05:49:08.886109 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 05:49:08.886121 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 05:49:08.886130 kernel: cpuidle: using governor menu
Oct 13 05:49:08.886139 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 05:49:08.886148 kernel: dca service started, version 1.12.1
Oct 13 05:49:08.886156 kernel: PCI: Using configuration type 1 for base access
Oct 13 05:49:08.886165 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 05:49:08.886175 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 05:49:08.886183 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 05:49:08.886192 kernel: ACPI: Added _OSI(Module Device)
Oct 13 05:49:08.886205 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 05:49:08.886214 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 05:49:08.886223 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 05:49:08.886232 kernel: ACPI: Interpreter enabled
Oct 13 05:49:08.886240 kernel: ACPI: PM: (supports S0 S5)
Oct 13 05:49:08.886249 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 05:49:08.886258 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 05:49:08.886267 kernel: PCI: Using E820 reservations for host bridge windows
Oct 13 05:49:08.886276 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 13 05:49:08.886288 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 05:49:08.886590 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 05:49:08.886715 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 13 05:49:08.886833 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 13 05:49:08.886846 kernel: acpiphp: Slot [3] registered
Oct 13 05:49:08.886855 kernel: acpiphp: Slot [4] registered
Oct 13 05:49:08.886864 kernel: acpiphp: Slot [5] registered
Oct 13 05:49:08.886880 kernel: acpiphp: Slot [6] registered
Oct 13 05:49:08.886889 kernel: acpiphp: Slot [7] registered
Oct 13 05:49:08.886898 kernel: acpiphp: Slot [8] registered
Oct 13 05:49:08.886907 kernel: acpiphp: Slot [9] registered
Oct 13 05:49:08.886916 kernel: acpiphp: Slot [10] registered
Oct 13 05:49:08.886924 kernel: acpiphp: Slot [11] registered
Oct 13 05:49:08.886933 kernel: acpiphp: Slot [12] registered
Oct 13 05:49:08.886942 kernel: acpiphp: Slot [13] registered
Oct 13 05:49:08.886951 kernel: acpiphp: Slot [14] registered
Oct 13 05:49:08.886960 kernel: acpiphp: Slot [15] registered
Oct 13 05:49:08.887227 kernel: acpiphp: Slot [16] registered
Oct 13 05:49:08.887239 kernel: acpiphp: Slot [17] registered
Oct 13 05:49:08.887248 kernel: acpiphp: Slot [18] registered
Oct 13 05:49:08.887257 kernel: acpiphp: Slot [19] registered
Oct 13 05:49:08.887266 kernel: acpiphp: Slot [20] registered
Oct 13 05:49:08.887275 kernel: acpiphp: Slot [21] registered
Oct 13 05:49:08.887284 kernel: acpiphp: Slot [22] registered
Oct 13 05:49:08.887292 kernel: acpiphp: Slot [23] registered
Oct 13 05:49:08.887348 kernel: acpiphp: Slot [24] registered
Oct 13 05:49:08.887364 kernel: acpiphp: Slot [25] registered
Oct 13 05:49:08.887373 kernel: acpiphp: Slot [26] registered
Oct 13 05:49:08.887382 kernel: acpiphp: Slot [27] registered
Oct 13 05:49:08.887393 kernel: acpiphp: Slot [28] registered
Oct 13 05:49:08.887406 kernel: acpiphp: Slot [29] registered
Oct 13 05:49:08.887421 kernel: acpiphp: Slot [30] registered
Oct 13 05:49:08.887437 kernel: acpiphp: Slot [31] registered
Oct 13 05:49:08.887453 kernel: PCI host bridge to bus 0000:00
Oct 13 05:49:08.887661 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 13 05:49:08.887787 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 13 05:49:08.887875 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 13 05:49:08.887957 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 13 05:49:08.888053 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 13 05:49:08.888134 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 05:49:08.888263 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 13 05:49:08.888380 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 13 05:49:08.888491 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 13 05:49:08.888585 kernel: pci 0000:00:01.1: BAR 4 [io  0xc1e0-0xc1ef]
Oct 13 05:49:08.888677 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 13 05:49:08.888769 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 13 05:49:08.888860 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 13 05:49:08.888950 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 13 05:49:08.889092 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 13 05:49:08.889211 kernel: pci 0000:00:01.2: BAR 4 [io  0xc180-0xc19f]
Oct 13 05:49:08.889346 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 13 05:49:08.889463 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 13 05:49:08.889582 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 13 05:49:08.889696 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 13 05:49:08.889800 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 13 05:49:08.889923 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 13 05:49:08.890075 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Oct 13 05:49:08.890199 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Oct 13 05:49:08.890332 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 13 05:49:08.890490 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 13 05:49:08.890602 kernel: pci 0000:00:03.0: BAR 0 [io  0xc1a0-0xc1bf]
Oct 13 05:49:08.890707 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Oct 13 05:49:08.890802 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 13 05:49:08.890924 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 13 05:49:08.891049 kernel: pci 0000:00:04.0: BAR 0 [io  0xc1c0-0xc1df]
Oct 13 05:49:08.891199 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Oct 13 05:49:08.891306 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 13 05:49:08.891444 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Oct 13 05:49:08.891556 kernel: pci 0000:00:05.0: BAR 0 [io  0xc100-0xc13f]
Oct 13 05:49:08.891648 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Oct 13 05:49:08.891761 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 13 05:49:08.891865 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 13 05:49:08.891958 kernel: pci 0000:00:06.0: BAR 0 [io  0xc000-0xc07f]
Oct 13 05:49:08.892085 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Oct 13 05:49:08.892178 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 13 05:49:08.892293 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 13 05:49:08.892388 kernel: pci 0000:00:07.0: BAR 0 [io  0xc080-0xc0ff]
Oct 13 05:49:08.892482 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Oct 13 05:49:08.892574 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 13 05:49:08.892692 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 13 05:49:08.892786 kernel: pci 0000:00:08.0: BAR 0 [io  0xc140-0xc17f]
Oct 13 05:49:08.892886 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Oct 13 05:49:08.892898 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 13 05:49:08.892907 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 13 05:49:08.892916 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 13 05:49:08.892926 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 13 05:49:08.892935 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 13 05:49:08.892944 kernel: iommu: Default domain type: Translated
Oct 13 05:49:08.892953 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 13 05:49:08.892966 kernel: PCI: Using ACPI for IRQ routing
Oct 13 05:49:08.892990 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 13 05:49:08.892999 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 13 05:49:08.893008 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Oct 13 05:49:08.893107 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 13 05:49:08.893271 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 13 05:49:08.893366 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 13 05:49:08.893379 kernel: vgaarb: loaded
Oct 13 05:49:08.893388 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 13 05:49:08.893404 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 13 05:49:08.893413 kernel: clocksource: Switched to clocksource kvm-clock
Oct 13 05:49:08.893422 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 05:49:08.893432 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 05:49:08.893442 kernel: pnp: PnP ACPI init
Oct 13 05:49:08.893451 kernel: pnp: PnP ACPI: found 4 devices
Oct 13 05:49:08.893460 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 13 05:49:08.893469 kernel: NET: Registered PF_INET protocol family
Oct 13 05:49:08.893478 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 05:49:08.893491 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 13 05:49:08.893500 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 05:49:08.893509 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 13 05:49:08.893518 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 13 05:49:08.893527 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 13 05:49:08.893546 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 13 05:49:08.893555 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 13 05:49:08.893564 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 05:49:08.893573 kernel: NET: Registered PF_XDP protocol family
Oct 13 05:49:08.893676 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 13 05:49:08.893762 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 13 05:49:08.893853 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 13 05:49:08.893936 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 13 05:49:08.894042 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 13 05:49:08.894143 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 13 05:49:08.894244 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 13 05:49:08.894258 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 13 05:49:08.894358 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 31328 usecs
Oct 13 05:49:08.894372 kernel: PCI: CLS 0 bytes, default 64
Oct 13 05:49:08.894381 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 13 05:49:08.894391 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Oct 13 05:49:08.894400 kernel: Initialise system trusted keyrings
Oct 13 05:49:08.894409 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 13 05:49:08.894418 kernel: Key type asymmetric registered
Oct 13 05:49:08.894427 kernel: Asymmetric key parser 'x509' registered
Oct 13 05:49:08.894440 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 13 05:49:08.894449 kernel: io scheduler mq-deadline registered
Oct 13 05:49:08.894458 kernel: io scheduler kyber registered
Oct 13 05:49:08.894468 kernel: io scheduler bfq registered
Oct 13 05:49:08.894477 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 13 05:49:08.894486 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 13 05:49:08.894495 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 13 05:49:08.894504 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 13 05:49:08.894512 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 05:49:08.894521 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:49:08.894533 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 13 05:49:08.894542 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 13 05:49:08.894551 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 13 05:49:08.894667 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 13 05:49:08.894681 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 13 05:49:08.894764 kernel: rtc_cmos 00:03: registered as rtc0
Oct 13 05:49:08.894849 kernel: rtc_cmos 00:03: setting system clock to 2025-10-13T05:49:08 UTC (1760334548)
Oct 13 05:49:08.894938 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 13 05:49:08.894950 kernel: intel_pstate: CPU model not supported
Oct 13 05:49:08.894959 kernel: NET: Registered PF_INET6 protocol family
Oct 13 05:49:08.894980 kernel: Segment Routing with IPv6
Oct 13 05:49:08.894989 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 05:49:08.894998 kernel: NET: Registered PF_PACKET protocol family
Oct 13 05:49:08.895007 kernel: Key type dns_resolver registered
Oct 13 05:49:08.895016 kernel: IPI shorthand broadcast: enabled
Oct 13 05:49:08.895025 kernel: sched_clock: Marking stable (3195004352, 148379456)->(3370793233, -27409425)
Oct 13 05:49:08.895039 kernel: registered taskstats version 1
Oct 13 05:49:08.895047 kernel: Loading compiled-in X.509 certificates
Oct 13 05:49:08.895056 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: d8dbf4abead15098249886d373d42a3af4f50ccd'
Oct 13 05:49:08.895065 kernel: Demotion targets for Node 0: null
Oct 13 05:49:08.895074 kernel: Key type .fscrypt registered
Oct 13 05:49:08.895083 kernel: Key type fscrypt-provisioning registered
Oct 13 05:49:08.895118 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 05:49:08.895131 kernel: ima: Allocated hash algorithm: sha1
Oct 13 05:49:08.895140 kernel: ima: No architecture policies found
Oct 13 05:49:08.895153 kernel: clk: Disabling unused clocks
Oct 13 05:49:08.895162 kernel: Warning: unable to open an initial console.
Oct 13 05:49:08.895172 kernel: Freeing unused kernel image (initmem) memory: 54096K
Oct 13 05:49:08.895181 kernel: Write protecting the kernel read-only data: 24576k
Oct 13 05:49:08.895190 kernel: Freeing unused kernel image (rodata/data gap) memory: 240K
Oct 13 05:49:08.895199 kernel: Run /init as init process
Oct 13 05:49:08.895209 kernel:   with arguments:
Oct 13 05:49:08.895219 kernel:     /init
Oct 13 05:49:08.895228 kernel:   with environment:
Oct 13 05:49:08.895240 kernel:     HOME=/
Oct 13 05:49:08.895249 kernel:     TERM=linux
Oct 13 05:49:08.895258 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 05:49:08.895270 systemd[1]: Successfully made /usr/ read-only.
Oct 13 05:49:08.895283 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:49:08.895293 systemd[1]: Detected virtualization kvm.
Oct 13 05:49:08.895302 systemd[1]: Detected architecture x86-64.
Oct 13 05:49:08.895315 systemd[1]: Running in initrd.
Oct 13 05:49:08.895325 systemd[1]: No hostname configured, using default hostname.
Oct 13 05:49:08.895335 systemd[1]: Hostname set to .
Oct 13 05:49:08.895345 systemd[1]: Initializing machine ID from VM UUID.
Oct 13 05:49:08.895355 systemd[1]: Queued start job for default target initrd.target.
Oct 13 05:49:08.895364 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:49:08.895374 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:49:08.895384 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 05:49:08.895397 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:49:08.895407 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 13 05:49:08.895421 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 05:49:08.895432 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 13 05:49:08.895445 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 13 05:49:08.895455 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:49:08.895464 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:49:08.895474 systemd[1]: Reached target paths.target - Path Units.
Oct 13 05:49:08.895484 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:49:08.895493 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:49:08.895503 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 05:49:08.895513 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:49:08.895523 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:49:08.895536 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 13 05:49:08.895545 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 13 05:49:08.895555 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:49:08.895565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:49:08.895575 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:49:08.895585 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 05:49:08.895594 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 13 05:49:08.895604 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:49:08.895617 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 13 05:49:08.895628 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 13 05:49:08.895637 systemd[1]: Starting systemd-fsck-usr.service...
Oct 13 05:49:08.895648 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:49:08.895657 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:49:08.895667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:49:08.895677 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 13 05:49:08.895727 systemd-journald[212]: Collecting audit messages is disabled.
Oct 13 05:49:08.895756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:49:08.895769 systemd-journald[212]: Journal started
Oct 13 05:49:08.895791 systemd-journald[212]: Runtime Journal (/run/log/journal/110beac56e5743aeb24df909091ca577) is 4.9M, max 39.5M, 34.6M free.
Oct 13 05:49:08.884302 systemd-modules-load[213]: Inserted module 'overlay'
Oct 13 05:49:08.899193 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:49:08.900884 systemd[1]: Finished systemd-fsck-usr.service.
Oct 13 05:49:08.905998 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 05:49:08.913932 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 05:49:08.929996 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 05:49:08.934871 kernel: Bridge firewalling registered
Oct 13 05:49:08.933530 systemd-modules-load[213]: Inserted module 'br_netfilter'
Oct 13 05:49:08.934864 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:49:08.939885 systemd-tmpfiles[225]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 13 05:49:08.986704 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:49:08.987452 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:49:08.988251 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:49:08.992207 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 13 05:49:08.995143 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 05:49:08.998277 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 05:49:09.019959 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:49:09.024206 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 05:49:09.028741 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:49:09.040488 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:49:09.044139 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 13 05:49:09.067999 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:49:09.077290 systemd-resolved[241]: Positive Trust Anchors:
Oct 13 05:49:09.077306 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 05:49:09.077345 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 05:49:09.084446 systemd-resolved[241]: Defaulting to hostname 'linux'.
Oct 13 05:49:09.086047 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 05:49:09.086597 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:49:09.172039 kernel: SCSI subsystem initialized
Oct 13 05:49:09.183015 kernel: Loading iSCSI transport class v2.0-870.
Oct 13 05:49:09.195004 kernel: iscsi: registered transport (tcp)
Oct 13 05:49:09.217193 kernel: iscsi: registered transport (qla4xxx)
Oct 13 05:49:09.217298 kernel: QLogic iSCSI HBA Driver
Oct 13 05:49:09.243310 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:49:09.273356 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:49:09.276075 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 05:49:09.336516 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:49:09.338952 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 13 05:49:09.397010 kernel: raid6: avx2x4 gen() 17134 MB/s
Oct 13 05:49:09.414028 kernel: raid6: avx2x2 gen() 17107 MB/s
Oct 13 05:49:09.431093 kernel: raid6: avx2x1 gen() 13097 MB/s
Oct 13 05:49:09.431187 kernel: raid6: using algorithm avx2x4 gen() 17134 MB/s
Oct 13 05:49:09.450049 kernel: raid6: .... xor() 9233 MB/s, rmw enabled
Oct 13 05:49:09.450175 kernel: raid6: using avx2x2 recovery algorithm
Oct 13 05:49:09.473040 kernel: xor: automatically using best checksumming function avx
Oct 13 05:49:09.653053 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 13 05:49:09.663274 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 05:49:09.667207 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:49:09.697279 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Oct 13 05:49:09.704655 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:49:09.709292 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 13 05:49:09.738465 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Oct 13 05:49:09.771440 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 05:49:09.773625 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:49:09.859061 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:49:09.863553 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 13 05:49:09.935012 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Oct 13 05:49:09.946487 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Oct 13 05:49:09.949657 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Oct 13 05:49:09.953003 kernel: scsi host0: Virtio SCSI HBA
Oct 13 05:49:09.969995 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 13 05:49:09.970096 kernel: GPT:9289727 != 125829119
Oct 13 05:49:09.970111 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 13 05:49:09.970122 kernel: GPT:9289727 != 125829119
Oct 13 05:49:09.970133 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 13 05:49:09.970150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 05:49:09.980999 kernel: cryptd: max_cpu_qlen set to 1000
Oct 13 05:49:10.009009 kernel: AES CTR mode by8 optimization enabled
Oct 13 05:49:10.015123 kernel: ACPI: bus type USB registered
Oct 13 05:49:10.019999 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Oct 13 05:49:10.023001 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 13 05:49:10.026471 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Oct 13 05:49:10.047229 kernel: usbcore: registered new interface driver usbfs
Oct 13 05:49:10.047306 kernel: usbcore: registered new interface driver hub
Oct 13 05:49:10.050032 kernel: usbcore: registered new device driver usb
Oct 13 05:49:10.054997 kernel: libata version 3.00 loaded.
Oct 13 05:49:10.060497 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:49:10.062109 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:49:10.063999 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:49:10.067487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:49:10.070032 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 13 05:49:10.074002 kernel: ata_piix 0000:00:01.1: version 2.13
Oct 13 05:49:10.092996 kernel: scsi host1: ata_piix
Oct 13 05:49:10.099010 kernel: scsi host2: ata_piix
Oct 13 05:49:10.106047 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Oct 13 05:49:10.106124 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Oct 13 05:49:10.134464 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 13 05:49:10.181639 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:49:10.196607 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 13 05:49:10.207931 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 13 05:49:10.215169 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 13 05:49:10.215722 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 13 05:49:10.217870 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 13 05:49:10.246200 disk-uuid[605]: Primary Header is updated.
Oct 13 05:49:10.246200 disk-uuid[605]: Secondary Entries is updated.
Oct 13 05:49:10.246200 disk-uuid[605]: Secondary Header is updated.
Oct 13 05:49:10.266018 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 05:49:10.279558 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 13 05:49:10.279879 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 13 05:49:10.280040 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 13 05:49:10.281817 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Oct 13 05:49:10.282157 kernel: hub 1-0:1.0: USB hub found
Oct 13 05:49:10.283090 kernel: hub 1-0:1.0: 2 ports detected
Oct 13 05:49:10.284385 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 05:49:10.420326 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 13 05:49:10.447311 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 05:49:10.447920 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:49:10.448926 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 05:49:10.451084 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 13 05:49:10.486215 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 05:49:11.275349 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 05:49:11.276702 disk-uuid[606]: The operation has completed successfully.
Oct 13 05:49:11.326612 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 13 05:49:11.327703 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 13 05:49:11.377252 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 13 05:49:11.404085 sh[630]: Success
Oct 13 05:49:11.424627 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 05:49:11.424727 kernel: device-mapper: uevent: version 1.0.3
Oct 13 05:49:11.425612 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 13 05:49:11.436993 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Oct 13 05:49:11.482832 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 05:49:11.488113 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 13 05:49:11.503931 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 13 05:49:11.517358 kernel: BTRFS: device fsid c8746500-26f5-4ec1-9da8-aef51ec7db92 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (642)
Oct 13 05:49:11.517424 kernel: BTRFS info (device dm-0): first mount of filesystem c8746500-26f5-4ec1-9da8-aef51ec7db92
Oct 13 05:49:11.521202 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:49:11.534402 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 13 05:49:11.534496 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 13 05:49:11.536620 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 13 05:49:11.537663 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 05:49:11.538546 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 13 05:49:11.539408 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 13 05:49:11.542808 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 13 05:49:11.572149 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (675)
Oct 13 05:49:11.574057 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd
Oct 13 05:49:11.575987 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:49:11.582057 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 05:49:11.582130 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 05:49:11.590018 kernel: BTRFS info (device vda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd
Oct 13 05:49:11.591119 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 13 05:49:11.595227 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 13 05:49:11.690491 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 05:49:11.693451 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 05:49:11.745187 systemd-networkd[811]: lo: Link UP
Oct 13 05:49:11.745199 systemd-networkd[811]: lo: Gained carrier
Oct 13 05:49:11.747722 systemd-networkd[811]: Enumeration completed
Oct 13 05:49:11.747868 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 05:49:11.748485 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 13 05:49:11.748490 systemd-networkd[811]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Oct 13 05:49:11.749381 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 05:49:11.749385 systemd-networkd[811]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 05:49:11.749893 systemd-networkd[811]: eth0: Link UP
Oct 13 05:49:11.750281 systemd-networkd[811]: eth1: Link UP
Oct 13 05:49:11.750503 systemd-networkd[811]: eth0: Gained carrier
Oct 13 05:49:11.750513 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 13 05:49:11.753084 systemd[1]: Reached target network.target - Network.
Oct 13 05:49:11.761849 systemd-networkd[811]: eth1: Gained carrier
Oct 13 05:49:11.761869 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 05:49:11.773360 systemd-networkd[811]: eth0: DHCPv4 address 137.184.180.203/20, gateway 137.184.176.1 acquired from 169.254.169.253
Oct 13 05:49:11.794101 systemd-networkd[811]: eth1: DHCPv4 address 10.124.0.27/20 acquired from 169.254.169.253
Oct 13 05:49:11.800875 ignition[724]: Ignition 2.22.0
Oct 13 05:49:11.800891 ignition[724]: Stage: fetch-offline
Oct 13 05:49:11.800933 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:49:11.800948 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 13 05:49:11.801151 ignition[724]: parsed url from cmdline: ""
Oct 13 05:49:11.801160 ignition[724]: no config URL provided
Oct 13 05:49:11.801170 ignition[724]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 05:49:11.804750 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 05:49:11.801186 ignition[724]: no config at "/usr/lib/ignition/user.ign"
Oct 13 05:49:11.801197 ignition[724]: failed to fetch config: resource requires networking
Oct 13 05:49:11.801696 ignition[724]: Ignition finished successfully
Oct 13 05:49:11.808154 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 13 05:49:11.855048 ignition[823]: Ignition 2.22.0
Oct 13 05:49:11.855753 ignition[823]: Stage: fetch
Oct 13 05:49:11.855937 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:49:11.856000 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 13 05:49:11.856108 ignition[823]: parsed url from cmdline: ""
Oct 13 05:49:11.856111 ignition[823]: no config URL provided
Oct 13 05:49:11.856118 ignition[823]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 05:49:11.856126 ignition[823]: no config at "/usr/lib/ignition/user.ign"
Oct 13 05:49:11.856156 ignition[823]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Oct 13 05:49:11.874194 ignition[823]: GET result: OK
Oct 13 05:49:11.876299 ignition[823]: parsing config with SHA512: 0831137c9c87607f32192b1dd7f214b8f9ecb3d89348c3c85d45efd36433f9aef5423ed6fe65284956312fdf055e05241bcd8e66efff074f3039cf6d3d7429ff
Oct 13 05:49:11.882633 unknown[823]: fetched base config from "system"
Oct 13 05:49:11.882650 unknown[823]: fetched base config from "system"
Oct 13 05:49:11.884118 ignition[823]: fetch: fetch complete
Oct 13 05:49:11.882656 unknown[823]: fetched user config from "digitalocean"
Oct 13 05:49:11.884128 ignition[823]: fetch: fetch passed
Oct 13 05:49:11.884189 ignition[823]: Ignition finished successfully
Oct 13 05:49:11.888462 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 13 05:49:11.892139 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 13 05:49:11.927833 ignition[830]: Ignition 2.22.0
Oct 13 05:49:11.928592 ignition[830]: Stage: kargs
Oct 13 05:49:11.928777 ignition[830]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:49:11.928788 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 13 05:49:11.931623 ignition[830]: kargs: kargs passed
Oct 13 05:49:11.932157 ignition[830]: Ignition finished successfully
Oct 13 05:49:11.934320 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 13 05:49:11.936615 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 13 05:49:11.981020 ignition[837]: Ignition 2.22.0
Oct 13 05:49:11.981034 ignition[837]: Stage: disks
Oct 13 05:49:11.981306 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:49:11.981320 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 13 05:49:11.982566 ignition[837]: disks: disks passed
Oct 13 05:49:11.982625 ignition[837]: Ignition finished successfully
Oct 13 05:49:11.985055 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 13 05:49:11.985820 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 13 05:49:11.986450 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 13 05:49:11.987451 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 05:49:11.988350 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 05:49:11.989280 systemd[1]: Reached target basic.target - Basic System.
Oct 13 05:49:11.991216 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 13 05:49:12.023481 systemd-fsck[846]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Oct 13 05:49:12.027073 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 13 05:49:12.031170 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 13 05:49:12.151009 kernel: EXT4-fs (vda9): mounted filesystem 8b520359-9763-45f3-b7f7-db1e9fbc640d r/w with ordered data mode. Quota mode: none.
Oct 13 05:49:12.151817 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 13 05:49:12.152935 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 13 05:49:12.155248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 05:49:12.157746 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 13 05:49:12.170291 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Oct 13 05:49:12.173768 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 13 05:49:12.180121 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (854)
Oct 13 05:49:12.180149 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd
Oct 13 05:49:12.175510 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 13 05:49:12.175601 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 05:49:12.183855 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:49:12.182545 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 13 05:49:12.188120 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 13 05:49:12.194202 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 05:49:12.194261 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 05:49:12.198522 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 05:49:12.268009 coreos-metadata[857]: Oct 13 05:49:12.267 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 13 05:49:12.277260 coreos-metadata[856]: Oct 13 05:49:12.276 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 13 05:49:12.279690 initrd-setup-root[884]: cut: /sysroot/etc/passwd: No such file or directory
Oct 13 05:49:12.281010 coreos-metadata[857]: Oct 13 05:49:12.280 INFO Fetch successful
Oct 13 05:49:12.287906 coreos-metadata[857]: Oct 13 05:49:12.287 INFO wrote hostname ci-4459.1.0-5-82d9fc1916 to /sysroot/etc/hostname
Oct 13 05:49:12.291261 initrd-setup-root[891]: cut: /sysroot/etc/group: No such file or directory
Oct 13 05:49:12.294223 coreos-metadata[856]: Oct 13 05:49:12.288 INFO Fetch successful
Oct 13 05:49:12.292391 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 13 05:49:12.300775 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Oct 13 05:49:12.300903 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Oct 13 05:49:12.303918 initrd-setup-root[900]: cut: /sysroot/etc/shadow: No such file or directory
Oct 13 05:49:12.308916 initrd-setup-root[907]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 13 05:49:12.418444 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 13 05:49:12.421223 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 13 05:49:12.423863 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 13 05:49:12.444019 kernel: BTRFS info (device vda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd
Oct 13 05:49:12.462204 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 13 05:49:12.492027 ignition[975]: INFO : Ignition 2.22.0
Oct 13 05:49:12.492027 ignition[975]: INFO : Stage: mount
Oct 13 05:49:12.492027 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:49:12.492027 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 13 05:49:12.496390 ignition[975]: INFO : mount: mount passed
Oct 13 05:49:12.497015 ignition[975]: INFO : Ignition finished successfully
Oct 13 05:49:12.499483 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 13 05:49:12.502265 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 13 05:49:12.515598 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 13 05:49:12.523644 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 05:49:12.549180 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (987)
Oct 13 05:49:12.549270 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd
Oct 13 05:49:12.551909 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:49:12.556403 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 05:49:12.556497 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 05:49:12.558879 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 05:49:12.605750 ignition[1004]: INFO : Ignition 2.22.0
Oct 13 05:49:12.605750 ignition[1004]: INFO : Stage: files
Oct 13 05:49:12.607025 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:49:12.607025 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 13 05:49:12.608263 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping
Oct 13 05:49:12.608263 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 13 05:49:12.608263 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 13 05:49:12.611043 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 13 05:49:12.611742 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 13 05:49:12.611742 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 13 05:49:12.611499 unknown[1004]: wrote ssh authorized keys file for user: core
Oct 13 05:49:12.613700 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 13 05:49:12.614443 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 13 05:49:12.751036 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 13 05:49:12.785520 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 13 05:49:12.785520 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 13 05:49:12.787252 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 13 05:49:12.787252 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:49:12.787252 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:49:12.787252 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:49:12.787252 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:49:12.787252 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:49:12.787252 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:49:12.798106 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:49:12.798106 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:49:12.798106 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:49:12.798106 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:49:12.798106 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:49:12.798106 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 13 05:49:12.937410 systemd-networkd[811]: eth1: Gained IPv6LL
Oct 13 05:49:13.069741 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 13 05:49:13.405247 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:49:13.406413 ignition[1004]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 13 05:49:13.407857 ignition[1004]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:49:13.409404 ignition[1004]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:49:13.410165 ignition[1004]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 13 05:49:13.410165 ignition[1004]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Oct 13 05:49:13.413067 ignition[1004]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Oct 13 05:49:13.413067 ignition[1004]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 05:49:13.413067 ignition[1004]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 05:49:13.413067 ignition[1004]: INFO : files: files passed
Oct 13 05:49:13.413067 ignition[1004]: INFO : Ignition finished successfully
Oct 13 05:49:13.415284 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 13 05:49:13.418547 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 13 05:49:13.419927 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 13 05:49:13.436419 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 05:49:13.436526 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 05:49:13.444681 initrd-setup-root-after-ignition[1033]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:49:13.444681 initrd-setup-root-after-ignition[1033]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:49:13.447496 initrd-setup-root-after-ignition[1037]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:49:13.449814 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:49:13.450692 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 05:49:13.452463 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 05:49:13.508461 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 05:49:13.508598 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 05:49:13.509834 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 05:49:13.510488 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 05:49:13.511401 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 05:49:13.512363 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 05:49:13.545890 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:49:13.548605 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 13 05:49:13.574999 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:49:13.576189 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:49:13.576906 systemd[1]: Stopped target timers.target - Timer Units.
Oct 13 05:49:13.578009 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 05:49:13.578202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:49:13.579223 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 13 05:49:13.579801 systemd[1]: Stopped target basic.target - Basic System.
Oct 13 05:49:13.580698 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 13 05:49:13.581579 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 05:49:13.582418 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 13 05:49:13.583327 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 05:49:13.584258 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 13 05:49:13.585231 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 05:49:13.586271 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 13 05:49:13.587189 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 13 05:49:13.588158 systemd[1]: Stopped target swap.target - Swaps.
Oct 13 05:49:13.588977 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 05:49:13.589188 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 05:49:13.590202 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:49:13.590812 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:49:13.591576 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 13 05:49:13.591734 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:49:13.592479 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 05:49:13.592644 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 13 05:49:13.593792 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 13 05:49:13.593953 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:49:13.594998 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 13 05:49:13.595113 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 13 05:49:13.595940 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 13 05:49:13.596102 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 13 05:49:13.598067 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 13 05:49:13.602737 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 13 05:49:13.603265 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 05:49:13.603439 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:49:13.604048 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 05:49:13.604178 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 05:49:13.610781 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 05:49:13.613153 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 13 05:49:13.638900 ignition[1057]: INFO : Ignition 2.22.0
Oct 13 05:49:13.639656 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 13 05:49:13.640461 ignition[1057]: INFO : Stage: umount
Oct 13 05:49:13.641334 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:49:13.641334 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 13 05:49:13.654832 ignition[1057]: INFO : umount: umount passed
Oct 13 05:49:13.654832 ignition[1057]: INFO : Ignition finished successfully
Oct 13 05:49:13.657142 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 13 05:49:13.657331 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 13 05:49:13.669452 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 13 05:49:13.669642 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 13 05:49:13.670749 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 13 05:49:13.670846 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 13 05:49:13.671597 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 13 05:49:13.671679 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 13 05:49:13.672563 systemd[1]: Stopped target network.target - Network.
Oct 13 05:49:13.673479 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 13 05:49:13.673584 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 05:49:13.675324 systemd[1]: Stopped target paths.target - Path Units.
Oct 13 05:49:13.676159 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 05:49:13.676417 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:49:13.677343 systemd[1]: Stopped target slices.target - Slice Units.
Oct 13 05:49:13.678405 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 13 05:49:13.679373 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 13 05:49:13.679427 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:49:13.680201 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 13 05:49:13.680240 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:49:13.681064 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 13 05:49:13.681191 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 13 05:49:13.682051 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 13 05:49:13.682095 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 13 05:49:13.683205 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 13 05:49:13.684201 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 13 05:49:13.685754 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 13 05:49:13.685867 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 13 05:49:13.686931 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 13 05:49:13.687331 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 13 05:49:13.691454 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 13 05:49:13.691607 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 13 05:49:13.695357 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 13 05:49:13.695695 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 05:49:13.695751 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:49:13.698653 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 13 05:49:13.698952 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 13 05:49:13.699096 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 13 05:49:13.700907 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 13 05:49:13.701718 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 13 05:49:13.703158 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 13 05:49:13.703209 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:49:13.705065 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 13 05:49:13.705599 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 13 05:49:13.705656 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 05:49:13.706358 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 05:49:13.706408 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:49:13.707470 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 05:49:13.707530 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:49:13.708232 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:49:13.709705 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 05:49:13.728583 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 05:49:13.729554 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:49:13.731361 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 05:49:13.732101 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:49:13.732679 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 05:49:13.732717 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:49:13.733397 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 05:49:13.733449 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 05:49:13.734652 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 05:49:13.734699 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:49:13.735613 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 13 05:49:13.735669 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:49:13.738161 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 13 05:49:13.738668 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 13 05:49:13.738723 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:49:13.741121 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 05:49:13.741188 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:49:13.742178 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 13 05:49:13.742227 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:49:13.743918 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 05:49:13.743987 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:49:13.745359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:49:13.745423 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:49:13.747803 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 13 05:49:13.752098 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 13 05:49:13.761265 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 05:49:13.761403 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 13 05:49:13.763154 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 13 05:49:13.764809 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 13 05:49:13.788488 systemd[1]: Switching root.
Oct 13 05:49:13.846081 systemd-journald[212]: Journal stopped
Oct 13 05:49:15.038873 systemd-journald[212]: Received SIGTERM from PID 1 (systemd).
Oct 13 05:49:15.038942 kernel: SELinux: policy capability network_peer_controls=1
Oct 13 05:49:15.038958 kernel: SELinux: policy capability open_perms=1
Oct 13 05:49:15.039000 kernel: SELinux: policy capability extended_socket_class=1
Oct 13 05:49:15.039018 kernel: SELinux: policy capability always_check_network=0
Oct 13 05:49:15.039034 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 13 05:49:15.039046 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 13 05:49:15.039057 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 13 05:49:15.039068 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 13 05:49:15.039080 kernel: SELinux: policy capability userspace_initial_context=0
Oct 13 05:49:15.039091 kernel: audit: type=1403 audit(1760334553.985:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 05:49:15.039104 systemd[1]: Successfully loaded SELinux policy in 75.722ms.
Oct 13 05:49:15.039130 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.076ms.
Oct 13 05:49:15.039144 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:49:15.039165 systemd[1]: Detected virtualization kvm.
Oct 13 05:49:15.039183 systemd[1]: Detected architecture x86-64.
Oct 13 05:49:15.039197 systemd[1]: Detected first boot.
Oct 13 05:49:15.039209 systemd[1]: Hostname set to .
Oct 13 05:49:15.039221 systemd[1]: Initializing machine ID from VM UUID.
Oct 13 05:49:15.039238 kernel: Guest personality initialized and is inactive
Oct 13 05:49:15.039255 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 13 05:49:15.039268 kernel: Initialized host personality
Oct 13 05:49:15.039279 kernel: NET: Registered PF_VSOCK protocol family
Oct 13 05:49:15.039292 zram_generator::config[1101]: No configuration found.
Oct 13 05:49:15.039316 systemd[1]: Populated /etc with preset unit settings.
Oct 13 05:49:15.039330 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 13 05:49:15.039347 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 05:49:15.039361 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 13 05:49:15.039376 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 05:49:15.039389 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 13 05:49:15.039401 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 13 05:49:15.039413 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 13 05:49:15.039426 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 13 05:49:15.039438 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 13 05:49:15.039451 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 13 05:49:15.039468 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 13 05:49:15.039482 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 13 05:49:15.039497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:49:15.039509 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:49:15.039521 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 13 05:49:15.039533 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 13 05:49:15.039545 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 13 05:49:15.039558 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:49:15.039572 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 13 05:49:15.039584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:49:15.039597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:49:15.039610 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 13 05:49:15.039623 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 05:49:15.039635 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 13 05:49:15.039647 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 13 05:49:15.039659 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:49:15.039671 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 05:49:15.039686 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:49:15.039698 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:49:15.039711 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 13 05:49:15.039727 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 13 05:49:15.039739 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 13 05:49:15.039751 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:49:15.039763 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:49:15.039775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:49:15.039787 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 13 05:49:15.039799 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 13 05:49:15.039814 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 13 05:49:15.039826 systemd[1]: Mounting media.mount - External Media Directory...
Oct 13 05:49:15.039838 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:49:15.039851 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 13 05:49:15.039868 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 13 05:49:15.039879 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 13 05:49:15.039892 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 05:49:15.039904 systemd[1]: Reached target machines.target - Containers.
Oct 13 05:49:15.039919 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 13 05:49:15.039931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:49:15.039943 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:49:15.039955 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 13 05:49:15.042013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:49:15.042066 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 05:49:15.042080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:49:15.042093 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 13 05:49:15.042112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:49:15.042126 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 13 05:49:15.042138 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 05:49:15.042150 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 13 05:49:15.042162 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 13 05:49:15.042174 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 13 05:49:15.042188 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:49:15.042201 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:49:15.042213 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:49:15.042231 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:49:15.042244 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 13 05:49:15.042257 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 13 05:49:15.042269 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:49:15.042282 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 13 05:49:15.042297 systemd[1]: Stopped verity-setup.service.
Oct 13 05:49:15.042310 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:49:15.042322 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 05:49:15.042335 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 05:49:15.042348 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 05:49:15.042362 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 05:49:15.042374 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 13 05:49:15.042387 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 13 05:49:15.042399 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:49:15.042411 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 05:49:15.042424 kernel: loop: module loaded
Oct 13 05:49:15.042437 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 13 05:49:15.042449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:49:15.042461 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:49:15.042476 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:49:15.042488 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:49:15.042500 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:49:15.042512 kernel: fuse: init (API version 7.41)
Oct 13 05:49:15.042523 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:49:15.042536 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:49:15.042548 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 05:49:15.042561 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 13 05:49:15.042573 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:49:15.042588 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 13 05:49:15.042601 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 05:49:15.042614 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 13 05:49:15.042627 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 13 05:49:15.042642 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 13 05:49:15.042655 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 05:49:15.042711 systemd-journald[1175]: Collecting audit messages is disabled.
Oct 13 05:49:15.042739 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 13 05:49:15.042756 systemd-journald[1175]: Journal started
Oct 13 05:49:15.042782 systemd-journald[1175]: Runtime Journal (/run/log/journal/110beac56e5743aeb24df909091ca577) is 4.9M, max 39.5M, 34.6M free.
Oct 13 05:49:14.663388 systemd[1]: Queued start job for default target multi-user.target.
Oct 13 05:49:14.685692 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 13 05:49:14.686315 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 05:49:15.047000 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 13 05:49:15.050702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:49:15.062188 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 13 05:49:15.062271 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 05:49:15.070072 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 13 05:49:15.070160 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 05:49:15.095000 kernel: ACPI: bus type drm_connector registered
Oct 13 05:49:15.100983 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 05:49:15.110005 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 13 05:49:15.114641 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 05:49:15.124327 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:49:15.124077 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 05:49:15.124736 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 05:49:15.126618 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 13 05:49:15.128421 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 13 05:49:15.130647 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 13 05:49:15.131274 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 13 05:49:15.161232 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 13 05:49:15.173944 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 13 05:49:15.187712 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 13 05:49:15.195154 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 13 05:49:15.205876 kernel: loop0: detected capacity change from 0 to 110984
Oct 13 05:49:15.215249 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:49:15.243001 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 13 05:49:15.258172 systemd-journald[1175]: Time spent on flushing to /var/log/journal/110beac56e5743aeb24df909091ca577 is 93.384ms for 1016 entries.
Oct 13 05:49:15.258172 systemd-journald[1175]: System Journal (/var/log/journal/110beac56e5743aeb24df909091ca577) is 8M, max 195.6M, 187.6M free.
Oct 13 05:49:15.364270 systemd-journald[1175]: Received client request to flush runtime journal.
Oct 13 05:49:15.364343 kernel: loop1: detected capacity change from 0 to 8
Oct 13 05:49:15.364371 kernel: loop2: detected capacity change from 0 to 219144
Oct 13 05:49:15.277069 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 13 05:49:15.277616 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Oct 13 05:49:15.277631 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Oct 13 05:49:15.293573 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:49:15.299278 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 13 05:49:15.353555 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:49:15.368099 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 13 05:49:15.391011 kernel: loop3: detected capacity change from 0 to 128016
Oct 13 05:49:15.437516 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 13 05:49:15.446003 kernel: loop4: detected capacity change from 0 to 110984
Oct 13 05:49:15.443459 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 05:49:15.468002 kernel: loop5: detected capacity change from 0 to 8
Oct 13 05:49:15.480995 kernel: loop6: detected capacity change from 0 to 219144
Oct 13 05:49:15.527997 kernel: loop7: detected capacity change from 0 to 128016
Oct 13 05:49:15.547024 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Oct 13 05:49:15.547064 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Oct 13 05:49:15.550426 (sd-merge)[1252]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Oct 13 05:49:15.551416 (sd-merge)[1252]: Merged extensions into '/usr'.
Oct 13 05:49:15.566404 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:49:15.568155 systemd[1]: Reload requested from client PID 1208 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 13 05:49:15.568307 systemd[1]: Reloading...
Oct 13 05:49:15.732835 ldconfig[1199]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 13 05:49:15.754012 zram_generator::config[1281]: No configuration found.
Oct 13 05:49:15.965586 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 13 05:49:15.965829 systemd[1]: Reloading finished in 395 ms.
Oct 13 05:49:15.987019 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 13 05:49:15.992671 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 13 05:49:16.010184 systemd[1]: Starting ensure-sysext.service...
Oct 13 05:49:16.013396 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 05:49:16.047469 systemd[1]: Reload requested from client PID 1324 ('systemctl') (unit ensure-sysext.service)...
Oct 13 05:49:16.047489 systemd[1]: Reloading...
Oct 13 05:49:16.069589 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 13 05:49:16.069635 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 13 05:49:16.070114 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 13 05:49:16.070535 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 13 05:49:16.074094 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 13 05:49:16.074717 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Oct 13 05:49:16.077168 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Oct 13 05:49:16.086850 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 05:49:16.087037 systemd-tmpfiles[1325]: Skipping /boot
Oct 13 05:49:16.115963 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 05:49:16.116034 systemd-tmpfiles[1325]: Skipping /boot
Oct 13 05:49:16.170003 zram_generator::config[1352]: No configuration found.
Oct 13 05:49:16.386755 systemd[1]: Reloading finished in 338 ms.
Oct 13 05:49:16.399751 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 13 05:49:16.400838 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:49:16.416274 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 05:49:16.418801 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 13 05:49:16.427206 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 13 05:49:16.432262 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 05:49:16.438005 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:49:16.452772 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 13 05:49:16.461486 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 13 05:49:16.465526 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:49:16.465734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:49:16.470411 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:49:16.475792 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:49:16.487140 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:49:16.487954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:49:16.488159 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:49:16.488282 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:49:16.493875 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:49:16.496202 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:49:16.496416 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:49:16.496505 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:49:16.496616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:49:16.506919 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:49:16.508507 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:49:16.509237 systemd-udevd[1402]: Using default interface naming scheme 'v255'.
Oct 13 05:49:16.513387 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 05:49:16.514025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:49:16.514139 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:49:16.514275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:49:16.519263 systemd[1]: Finished ensure-sysext.service.
Oct 13 05:49:16.529127 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 13 05:49:16.530108 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 13 05:49:16.551401 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 13 05:49:16.557006 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 13 05:49:16.570143 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 13 05:49:16.570897 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 13 05:49:16.573443 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 05:49:16.577444 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 05:49:16.578816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:49:16.579061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:49:16.583515 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:49:16.583755 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:49:16.584676 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 05:49:16.589413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:49:16.589663 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:49:16.590837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 05:49:16.594427 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:49:16.600242 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 05:49:16.603913 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 13 05:49:16.619998 augenrules[1448]: No rules
Oct 13 05:49:16.623393 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 05:49:16.623701 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 05:49:16.650207 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 13 05:49:16.791687 systemd-resolved[1400]: Positive Trust Anchors:
Oct 13 05:49:16.791703 systemd-resolved[1400]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 05:49:16.791741 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 05:49:16.796065 systemd-resolved[1400]: Using system hostname 'ci-4459.1.0-5-82d9fc1916'.
Oct 13 05:49:16.800091 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 05:49:16.800712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:49:16.808135 systemd-networkd[1439]: lo: Link UP
Oct 13 05:49:16.808148 systemd-networkd[1439]: lo: Gained carrier
Oct 13 05:49:16.811062 systemd-networkd[1439]: Enumeration completed
Oct 13 05:49:16.811200 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 05:49:16.811791 systemd[1]: Reached target network.target - Network.
Oct 13 05:49:16.815429 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 13 05:49:16.819199 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 13 05:49:16.821278 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 13 05:49:16.821848 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 05:49:16.823176 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 13 05:49:16.823856 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 13 05:49:16.825290 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 13 05:49:16.825736 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 13 05:49:16.826341 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 13 05:49:16.826373 systemd[1]: Reached target paths.target - Path Units.
Oct 13 05:49:16.827198 systemd[1]: Reached target time-set.target - System Time Set.
Oct 13 05:49:16.828195 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 13 05:49:16.829355 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 13 05:49:16.830709 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 05:49:16.834017 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 13 05:49:16.838016 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 13 05:49:16.844109 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 13 05:49:16.845698 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 13 05:49:16.846851 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 13 05:49:16.858462 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 13 05:49:16.860802 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 13 05:49:16.864566 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 13 05:49:16.868395 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 05:49:16.868905 systemd[1]: Reached target basic.target - Basic System.
Oct 13 05:49:16.869724 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 13 05:49:16.869755 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 13 05:49:16.871923 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 13 05:49:16.880397 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 13 05:49:16.890307 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 13 05:49:16.893881 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 13 05:49:16.896227 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 13 05:49:16.905268 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 13 05:49:16.912115 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 13 05:49:16.917322 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 13 05:49:16.922274 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 13 05:49:16.924886 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 13 05:49:16.933384 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 13 05:49:16.936188 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 13 05:49:16.944260 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 13 05:49:16.944822 jq[1487]: false
Oct 13 05:49:16.945718 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 13 05:49:16.947332 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 13 05:49:16.952230 systemd[1]: Starting update-engine.service - Update Engine...
Oct 13 05:49:16.966132 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 13 05:49:16.969730 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 13 05:49:16.975342 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Refreshing passwd entry cache
Oct 13 05:49:16.978119 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 13 05:49:16.978280 oslogin_cache_refresh[1491]: Refreshing passwd entry cache
Oct 13 05:49:16.979026 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 13 05:49:16.979236 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 13 05:49:16.986575 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Failure getting users, quitting
Oct 13 05:49:16.990938 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 13 05:49:16.991998 oslogin_cache_refresh[1491]: Failure getting users, quitting
Oct 13 05:49:16.992058 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 13 05:49:16.993595 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 13 05:49:16.993595 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Refreshing group entry cache
Oct 13 05:49:16.993595 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Failure getting groups, quitting
Oct 13 05:49:16.993595 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 13 05:49:16.992041 oslogin_cache_refresh[1491]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 13 05:49:16.992098 oslogin_cache_refresh[1491]: Refreshing group entry cache
Oct 13 05:49:16.992646 oslogin_cache_refresh[1491]: Failure getting groups, quitting
Oct 13 05:49:16.992655 oslogin_cache_refresh[1491]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 13 05:49:16.995485 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 13 05:49:16.997392 coreos-metadata[1484]: Oct 13 05:49:16.997 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 13 05:49:16.997392 coreos-metadata[1484]: Oct 13 05:49:16.997 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
Oct 13 05:49:16.997118 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 13 05:49:17.010770 systemd[1]: motdgen.service: Deactivated successfully.
Oct 13 05:49:17.012078 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 13 05:49:17.024832 extend-filesystems[1488]: Found /dev/vda6
Oct 13 05:49:17.035864 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Oct 13 05:49:17.039885 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Oct 13 05:49:17.041115 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 13 05:49:17.047683 jq[1500]: true
Oct 13 05:49:17.047958 update_engine[1498]: I20251013 05:49:17.047809 1498 main.cc:92] Flatcar Update Engine starting
Oct 13 05:49:17.056405 extend-filesystems[1488]: Found /dev/vda9
Oct 13 05:49:17.063237 extend-filesystems[1488]: Checking size of /dev/vda9
Oct 13 05:49:17.086666 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 13 05:49:17.087678 tar[1506]: linux-amd64/LICENSE
Oct 13 05:49:17.087947 jq[1534]: true
Oct 13 05:49:17.099899 extend-filesystems[1488]: Resized partition /dev/vda9
Oct 13 05:49:17.107389 tar[1506]: linux-amd64/helm
Oct 13 05:49:17.103938 dbus-daemon[1485]: [system] SELinux support is enabled
Oct 13 05:49:17.103213 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 13 05:49:17.107866 extend-filesystems[1543]: resize2fs 1.47.3 (8-Jul-2025)
Oct 13 05:49:17.104202 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 13 05:49:17.113375 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 13 05:49:17.113432 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 13 05:49:17.114307 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 13 05:49:17.124993 kernel: ISO 9660 Extensions: RRIP_1991A
Oct 13 05:49:17.124470 systemd[1]: Started update-engine.service - Update Engine.
Oct 13 05:49:17.129226 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 13 05:49:17.129912 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Oct 13 05:49:17.130658 update_engine[1498]: I20251013 05:49:17.130600 1498 update_check_scheduler.cc:74] Next update check in 7m42s
Oct 13 05:49:17.132746 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Oct 13 05:49:17.132850 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 13 05:49:17.154230 systemd-logind[1497]: New seat seat0.
Oct 13 05:49:17.157004 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Oct 13 05:49:17.157268 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 13 05:49:17.216438 systemd-networkd[1439]: eth0: Configuring with /run/systemd/network/10-d2:01:70:36:2f:4b.network.
Oct 13 05:49:17.217318 systemd-networkd[1439]: eth0: Link UP
Oct 13 05:49:17.217471 systemd-networkd[1439]: eth0: Gained carrier
Oct 13 05:49:17.222234 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection.
Oct 13 05:49:17.299920 systemd-networkd[1439]: eth1: Configuring with /run/systemd/network/10-2e:8d:25:13:ae:fc.network.
Oct 13 05:49:17.301414 systemd-timesyncd[1416]: Contacted time server 51.81.20.74:123 (0.flatcar.pool.ntp.org).
Oct 13 05:49:17.301557 systemd-timesyncd[1416]: Initial clock synchronization to Mon 2025-10-13 05:49:16.925575 UTC.
Oct 13 05:49:17.302138 bash[1564]: Updated "/home/core/.ssh/authorized_keys"
Oct 13 05:49:17.304472 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 13 05:49:17.310542 systemd-networkd[1439]: eth1: Link UP
Oct 13 05:49:17.310627 systemd[1]: Starting sshkeys.service...
Oct 13 05:49:17.314316 systemd-networkd[1439]: eth1: Gained carrier
Oct 13 05:49:17.358321 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 13 05:49:17.366385 kernel: ACPI: button: Power Button [PWRF]
Oct 13 05:49:17.387192 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 13 05:49:17.389088 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 13 05:49:17.406098 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Oct 13 05:49:17.428203 kernel: mousedev: PS/2 mouse device common for all mice
Oct 13 05:49:17.430811 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 13 05:49:17.430811 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 8
Oct 13 05:49:17.430811 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Oct 13 05:49:17.441306 extend-filesystems[1488]: Resized filesystem in /dev/vda9
Oct 13 05:49:17.432887 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 13 05:49:17.433180 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 13 05:49:17.563845 coreos-metadata[1571]: Oct 13 05:49:17.562 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 13 05:49:17.565154 locksmithd[1545]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 13 05:49:17.578073 coreos-metadata[1571]: Oct 13 05:49:17.577 INFO Fetch successful
Oct 13 05:49:17.604112 unknown[1571]: wrote ssh authorized keys file for user: core
Oct 13 05:49:17.638014 update-ssh-keys[1583]: Updated "/home/core/.ssh/authorized_keys"
Oct 13 05:49:17.639266 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 13 05:49:17.643993 systemd[1]: Finished sshkeys.service.
Oct 13 05:49:17.648283 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 13 05:49:17.660478 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 13 05:49:17.685998 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 13 05:49:17.731336 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 13 05:49:17.738060 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 13 05:49:17.772106 containerd[1527]: time="2025-10-13T05:49:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 13 05:49:17.774002 containerd[1527]: time="2025-10-13T05:49:17.773861813Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 13 05:49:17.829382 containerd[1527]: time="2025-10-13T05:49:17.829157264Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.42µs"
Oct 13 05:49:17.829382 containerd[1527]: time="2025-10-13T05:49:17.829203971Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 13 05:49:17.829382 containerd[1527]: time="2025-10-13T05:49:17.829226191Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 13 05:49:17.830132 containerd[1527]: time="2025-10-13T05:49:17.830101846Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 13 05:49:17.834519 containerd[1527]: time="2025-10-13T05:49:17.832092616Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 13 05:49:17.834519 containerd[1527]: time="2025-10-13T05:49:17.832168693Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 13 05:49:17.834519 containerd[1527]: time="2025-10-13T05:49:17.832271949Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 13 05:49:17.834519 containerd[1527]: time="2025-10-13T05:49:17.832284483Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 05:49:17.836128 containerd[1527]: time="2025-10-13T05:49:17.835713955Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 05:49:17.836128 containerd[1527]: time="2025-10-13T05:49:17.835758130Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 05:49:17.836128 containerd[1527]: time="2025-10-13T05:49:17.835778864Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 05:49:17.836128 containerd[1527]: time="2025-10-13T05:49:17.835792157Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 13 05:49:17.836128 containerd[1527]: time="2025-10-13T05:49:17.835990867Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 13 05:49:17.837001 containerd[1527]: time="2025-10-13T05:49:17.836959030Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 05:49:17.837149 containerd[1527]: time="2025-10-13T05:49:17.837132245Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 05:49:17.837228 containerd[1527]: time="2025-10-13T05:49:17.837214905Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 13 05:49:17.838581 containerd[1527]: time="2025-10-13T05:49:17.838083758Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 13 05:49:17.839900 containerd[1527]: time="2025-10-13T05:49:17.839794410Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 13 05:49:17.842044 containerd[1527]: time="2025-10-13T05:49:17.841784938Z" level=info msg="metadata content store policy set" policy=shared
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849282787Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849362683Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849377596Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849403619Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849443831Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849468305Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849480755Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849505898Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849518385Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849528021Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849550292Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849564254Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849739256Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 13 05:49:17.850445 containerd[1527]: time="2025-10-13T05:49:17.849789024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.849807968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.849818647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.849829046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.849839316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.849860102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.849881950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.849895767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.849906482Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.849919255Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.850017487Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.850032645Z" level=info msg="Start snapshots syncer"
Oct 13 05:49:17.850828 containerd[1527]: time="2025-10-13T05:49:17.850079463Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 13 05:49:17.854005 containerd[1527]: time="2025-10-13T05:49:17.852361717Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 13 05:49:17.854005 containerd[1527]: time="2025-10-13T05:49:17.852464969Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852564019Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852714781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852776903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852792929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852815163Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852830651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852840651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852853279Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852879857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852890610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852901794Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 13 05:49:17.854287 containerd[1527]: time="2025-10-13T05:49:17.852942801Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:49:17.858158 containerd[1527]: time="2025-10-13T05:49:17.852958366Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:49:17.858158 containerd[1527]: time="2025-10-13T05:49:17.857487384Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:49:17.858158 containerd[1527]: time="2025-10-13T05:49:17.857535226Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:49:17.858158 containerd[1527]: time="2025-10-13T05:49:17.857548186Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:49:17.858158 containerd[1527]: time="2025-10-13T05:49:17.857576567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:49:17.858158 containerd[1527]: time="2025-10-13T05:49:17.857592018Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:49:17.858158 containerd[1527]: time="2025-10-13T05:49:17.857612774Z" level=info msg="runtime interface created" Oct 13 05:49:17.858158 containerd[1527]: time="2025-10-13T05:49:17.857618194Z" level=info msg="created NRI interface" Oct 13 05:49:17.858158 containerd[1527]: time="2025-10-13T05:49:17.857626222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:49:17.858158 containerd[1527]: time="2025-10-13T05:49:17.857642817Z" level=info msg="Connect containerd service" Oct 13 05:49:17.858158 containerd[1527]: time="2025-10-13T05:49:17.857706572Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:49:17.863001 
containerd[1527]: time="2025-10-13T05:49:17.861861275Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:49:17.926671 kernel: EDAC MC: Ver: 3.0.0 Oct 13 05:49:17.984378 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:49:17.998055 coreos-metadata[1484]: Oct 13 05:49:17.997 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Oct 13 05:49:18.013990 coreos-metadata[1484]: Oct 13 05:49:18.009 INFO Fetch successful Oct 13 05:49:18.015002 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Oct 13 05:49:18.018599 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Oct 13 05:49:18.025319 kernel: Console: switching to colour dummy device 80x25 Oct 13 05:49:18.027317 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 13 05:49:18.027404 kernel: [drm] features: -context_init Oct 13 05:49:18.049873 kernel: [drm] number of scanouts: 1 Oct 13 05:49:18.049949 kernel: [drm] number of cap sets: 0 Oct 13 05:49:18.069255 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Oct 13 05:49:18.081257 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 13 05:49:18.082223 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 05:49:18.086511 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 13 05:49:18.089021 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 13 05:49:18.089090 kernel: Console: switching to colour frame buffer device 128x48 Oct 13 05:49:18.098138 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 13 05:49:18.103681 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180413696Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180470716Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180494737Z" level=info msg="Start subscribing containerd event" Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180520371Z" level=info msg="Start recovering state" Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180616401Z" level=info msg="Start event monitor" Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180630307Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180637033Z" level=info msg="Start streaming server" Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180654520Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180662268Z" level=info msg="runtime interface starting up..." Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180668095Z" level=info msg="starting plugins..." 
Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180680843Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:49:18.181290 containerd[1527]: time="2025-10-13T05:49:18.180783392Z" level=info msg="containerd successfully booted in 0.409884s" Oct 13 05:49:18.182757 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:49:18.204253 systemd-logind[1497]: Watching system buttons on /dev/input/event2 (Power Button) Oct 13 05:49:18.249303 systemd-networkd[1439]: eth0: Gained IPv6LL Oct 13 05:49:18.251158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:49:18.251519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:49:18.253419 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:49:18.256233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:49:18.264629 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 13 05:49:18.267282 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:49:18.275916 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:49:18.285241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:49:18.293741 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:49:18.366396 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:49:18.395774 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:49:18.631317 tar[1506]: linux-amd64/README.md Oct 13 05:49:18.660265 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 05:49:18.756592 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:49:18.786063 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Oct 13 05:49:18.788909 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 05:49:18.809515 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:49:18.809847 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:49:18.816285 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:49:18.832782 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 05:49:18.837347 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:49:18.839238 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:49:18.840350 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 05:49:19.145313 systemd-networkd[1439]: eth1: Gained IPv6LL Oct 13 05:49:19.473074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:49:19.474456 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:49:19.476234 systemd[1]: Startup finished in 3.273s (kernel) + 5.321s (initrd) + 5.562s (userspace) = 14.158s. Oct 13 05:49:19.489142 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:49:20.007200 kubelet[1674]: E1013 05:49:20.007129 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:49:20.010164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:49:20.010313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:49:20.011037 systemd[1]: kubelet.service: Consumed 1.211s CPU time, 257.2M memory peak. 
Oct 13 05:49:22.225699 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 05:49:22.227699 systemd[1]: Started sshd@0-137.184.180.203:22-139.178.89.65:60770.service - OpenSSH per-connection server daemon (139.178.89.65:60770). Oct 13 05:49:22.330926 sshd[1686]: Accepted publickey for core from 139.178.89.65 port 60770 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:49:22.332858 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:49:22.340870 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:49:22.342105 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 05:49:22.351110 systemd-logind[1497]: New session 1 of user core. Oct 13 05:49:22.368584 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:49:22.373042 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 05:49:22.389933 (systemd)[1691]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:49:22.393557 systemd-logind[1497]: New session c1 of user core. Oct 13 05:49:22.537345 systemd[1691]: Queued start job for default target default.target. Oct 13 05:49:22.548573 systemd[1691]: Created slice app.slice - User Application Slice. Oct 13 05:49:22.548614 systemd[1691]: Reached target paths.target - Paths. Oct 13 05:49:22.548676 systemd[1691]: Reached target timers.target - Timers. Oct 13 05:49:22.550488 systemd[1691]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:49:22.585156 systemd[1691]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:49:22.585252 systemd[1691]: Reached target sockets.target - Sockets. Oct 13 05:49:22.585330 systemd[1691]: Reached target basic.target - Basic System. Oct 13 05:49:22.585384 systemd[1691]: Reached target default.target - Main User Target. 
Oct 13 05:49:22.585428 systemd[1691]: Startup finished in 182ms. Oct 13 05:49:22.585689 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:49:22.599345 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:49:22.666268 systemd[1]: Started sshd@1-137.184.180.203:22-139.178.89.65:60776.service - OpenSSH per-connection server daemon (139.178.89.65:60776). Oct 13 05:49:22.726583 sshd[1702]: Accepted publickey for core from 139.178.89.65 port 60776 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:49:22.728179 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:49:22.734334 systemd-logind[1497]: New session 2 of user core. Oct 13 05:49:22.738266 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 05:49:22.796914 sshd[1705]: Connection closed by 139.178.89.65 port 60776 Oct 13 05:49:22.797775 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Oct 13 05:49:22.808802 systemd[1]: sshd@1-137.184.180.203:22-139.178.89.65:60776.service: Deactivated successfully. Oct 13 05:49:22.810934 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 05:49:22.811958 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit. Oct 13 05:49:22.815794 systemd[1]: Started sshd@2-137.184.180.203:22-139.178.89.65:60782.service - OpenSSH per-connection server daemon (139.178.89.65:60782). Oct 13 05:49:22.817216 systemd-logind[1497]: Removed session 2. Oct 13 05:49:22.887962 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 60782 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:49:22.889474 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:49:22.895660 systemd-logind[1497]: New session 3 of user core. Oct 13 05:49:22.920253 systemd[1]: Started session-3.scope - Session 3 of User core. 
Oct 13 05:49:22.975000 sshd[1714]: Connection closed by 139.178.89.65 port 60782 Oct 13 05:49:22.974534 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Oct 13 05:49:22.987303 systemd[1]: sshd@2-137.184.180.203:22-139.178.89.65:60782.service: Deactivated successfully. Oct 13 05:49:22.989729 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 05:49:22.991518 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit. Oct 13 05:49:22.994822 systemd[1]: Started sshd@3-137.184.180.203:22-139.178.89.65:60798.service - OpenSSH per-connection server daemon (139.178.89.65:60798). Oct 13 05:49:22.997177 systemd-logind[1497]: Removed session 3. Oct 13 05:49:23.055020 sshd[1720]: Accepted publickey for core from 139.178.89.65 port 60798 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:49:23.056465 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:49:23.063104 systemd-logind[1497]: New session 4 of user core. Oct 13 05:49:23.069261 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:49:23.131053 sshd[1723]: Connection closed by 139.178.89.65 port 60798 Oct 13 05:49:23.130921 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Oct 13 05:49:23.148925 systemd[1]: sshd@3-137.184.180.203:22-139.178.89.65:60798.service: Deactivated successfully. Oct 13 05:49:23.152071 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:49:23.153260 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:49:23.157338 systemd[1]: Started sshd@4-137.184.180.203:22-139.178.89.65:60810.service - OpenSSH per-connection server daemon (139.178.89.65:60810). Oct 13 05:49:23.158699 systemd-logind[1497]: Removed session 4. 
Oct 13 05:49:23.225229 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 60810 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:49:23.227300 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:49:23.234405 systemd-logind[1497]: New session 5 of user core. Oct 13 05:49:23.246354 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 13 05:49:23.330938 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:49:23.332391 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:49:23.345124 sudo[1733]: pam_unix(sudo:session): session closed for user root Oct 13 05:49:23.350020 sshd[1732]: Connection closed by 139.178.89.65 port 60810 Oct 13 05:49:23.349520 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Oct 13 05:49:23.360448 systemd[1]: sshd@4-137.184.180.203:22-139.178.89.65:60810.service: Deactivated successfully. Oct 13 05:49:23.363330 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:49:23.364673 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:49:23.368780 systemd[1]: Started sshd@5-137.184.180.203:22-139.178.89.65:60824.service - OpenSSH per-connection server daemon (139.178.89.65:60824). Oct 13 05:49:23.369579 systemd-logind[1497]: Removed session 5. Oct 13 05:49:23.431202 sshd[1739]: Accepted publickey for core from 139.178.89.65 port 60824 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:49:23.433019 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:49:23.438379 systemd-logind[1497]: New session 6 of user core. Oct 13 05:49:23.445201 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 13 05:49:23.502759 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:49:23.503455 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:49:23.509349 sudo[1744]: pam_unix(sudo:session): session closed for user root Oct 13 05:49:23.516221 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:49:23.516516 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:49:23.528919 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:49:23.578435 augenrules[1766]: No rules Oct 13 05:49:23.579770 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:49:23.579998 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:49:23.581579 sudo[1743]: pam_unix(sudo:session): session closed for user root Oct 13 05:49:23.586608 sshd[1742]: Connection closed by 139.178.89.65 port 60824 Oct 13 05:49:23.586108 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Oct 13 05:49:23.598548 systemd[1]: sshd@5-137.184.180.203:22-139.178.89.65:60824.service: Deactivated successfully. Oct 13 05:49:23.600759 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:49:23.602418 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:49:23.605814 systemd[1]: Started sshd@6-137.184.180.203:22-139.178.89.65:60838.service - OpenSSH per-connection server daemon (139.178.89.65:60838). Oct 13 05:49:23.607448 systemd-logind[1497]: Removed session 6. 
Oct 13 05:49:23.670335 sshd[1775]: Accepted publickey for core from 139.178.89.65 port 60838 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:49:23.672322 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:49:23.678361 systemd-logind[1497]: New session 7 of user core. Oct 13 05:49:23.684257 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 05:49:23.743512 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:49:23.744244 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:49:24.272994 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 05:49:24.302735 (dockerd)[1797]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:49:24.665893 dockerd[1797]: time="2025-10-13T05:49:24.665787039Z" level=info msg="Starting up" Oct 13 05:49:24.667260 dockerd[1797]: time="2025-10-13T05:49:24.667207743Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:49:24.690458 dockerd[1797]: time="2025-10-13T05:49:24.690283638Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:49:24.710382 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2540145541-merged.mount: Deactivated successfully. Oct 13 05:49:24.750796 dockerd[1797]: time="2025-10-13T05:49:24.750730379Z" level=info msg="Loading containers: start." Oct 13 05:49:24.760367 kernel: Initializing XFRM netlink socket Oct 13 05:49:25.055541 systemd-networkd[1439]: docker0: Link UP Oct 13 05:49:25.060334 dockerd[1797]: time="2025-10-13T05:49:25.060265421Z" level=info msg="Loading containers: done." 
Oct 13 05:49:25.076209 dockerd[1797]: time="2025-10-13T05:49:25.076148350Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:49:25.076409 dockerd[1797]: time="2025-10-13T05:49:25.076339227Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:49:25.076506 dockerd[1797]: time="2025-10-13T05:49:25.076476413Z" level=info msg="Initializing buildkit" Oct 13 05:49:25.104865 dockerd[1797]: time="2025-10-13T05:49:25.104647290Z" level=info msg="Completed buildkit initialization" Oct 13 05:49:25.109753 dockerd[1797]: time="2025-10-13T05:49:25.109695286Z" level=info msg="Daemon has completed initialization" Oct 13 05:49:25.110043 dockerd[1797]: time="2025-10-13T05:49:25.110007193Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:49:25.110986 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:49:25.839234 containerd[1527]: time="2025-10-13T05:49:25.839108897Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 13 05:49:26.461697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422050926.mount: Deactivated successfully. 
Oct 13 05:49:27.444443 containerd[1527]: time="2025-10-13T05:49:27.444351891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:27.445763 containerd[1527]: time="2025-10-13T05:49:27.445707168Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 13 05:49:27.446369 containerd[1527]: time="2025-10-13T05:49:27.446336624Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:27.449221 containerd[1527]: time="2025-10-13T05:49:27.449173392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:27.450894 containerd[1527]: time="2025-10-13T05:49:27.450843299Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.61167883s" Oct 13 05:49:27.450894 containerd[1527]: time="2025-10-13T05:49:27.450884131Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 13 05:49:27.451550 containerd[1527]: time="2025-10-13T05:49:27.451519128Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 13 05:49:28.852032 containerd[1527]: time="2025-10-13T05:49:28.851306312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:28.853291 containerd[1527]: time="2025-10-13T05:49:28.853247386Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 13 05:49:28.853529 containerd[1527]: time="2025-10-13T05:49:28.853501963Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:28.856125 containerd[1527]: time="2025-10-13T05:49:28.856085595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:28.857438 containerd[1527]: time="2025-10-13T05:49:28.857400418Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.405851105s" Oct 13 05:49:28.857438 containerd[1527]: time="2025-10-13T05:49:28.857436170Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 13 05:49:28.858067 containerd[1527]: time="2025-10-13T05:49:28.858037909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 13 05:49:29.871638 containerd[1527]: time="2025-10-13T05:49:29.871155633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:29.872159 containerd[1527]: time="2025-10-13T05:49:29.871829851Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 13 05:49:29.873234 containerd[1527]: time="2025-10-13T05:49:29.873196427Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:29.876519 containerd[1527]: time="2025-10-13T05:49:29.876480948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:29.877456 containerd[1527]: time="2025-10-13T05:49:29.877420349Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.019348662s" Oct 13 05:49:29.877653 containerd[1527]: time="2025-10-13T05:49:29.877554544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 13 05:49:29.878189 containerd[1527]: time="2025-10-13T05:49:29.878138070Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 13 05:49:30.080553 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:49:30.082359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:49:30.260594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 05:49:30.273653 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:49:30.352877 kubelet[2087]: E1013 05:49:30.352786 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:49:30.359427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:49:30.359635 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:49:30.361458 systemd[1]: kubelet.service: Consumed 224ms CPU time, 110.8M memory peak.
Oct 13 05:49:31.055945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901885410.mount: Deactivated successfully.
Oct 13 05:49:31.403210 containerd[1527]: time="2025-10-13T05:49:31.403144869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:31.404004 containerd[1527]: time="2025-10-13T05:49:31.403948168Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699"
Oct 13 05:49:31.404999 containerd[1527]: time="2025-10-13T05:49:31.404483090Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:31.406089 containerd[1527]: time="2025-10-13T05:49:31.406033411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:31.406849 containerd[1527]: time="2025-10-13T05:49:31.406563723Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.528233328s"
Oct 13 05:49:31.406849 containerd[1527]: time="2025-10-13T05:49:31.406617610Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Oct 13 05:49:31.407093 containerd[1527]: time="2025-10-13T05:49:31.407075457Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Oct 13 05:49:31.967946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198063053.mount: Deactivated successfully.
Oct 13 05:49:32.085525 systemd-resolved[1400]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Oct 13 05:49:33.011609 containerd[1527]: time="2025-10-13T05:49:33.011530465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:33.013942 containerd[1527]: time="2025-10-13T05:49:33.013127997Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Oct 13 05:49:33.013942 containerd[1527]: time="2025-10-13T05:49:33.013251821Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:33.017091 containerd[1527]: time="2025-10-13T05:49:33.017023076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:33.018548 containerd[1527]: time="2025-10-13T05:49:33.018483137Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.611376386s"
Oct 13 05:49:33.018548 containerd[1527]: time="2025-10-13T05:49:33.018541705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Oct 13 05:49:33.019285 containerd[1527]: time="2025-10-13T05:49:33.019234429Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Oct 13 05:49:33.569467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151586531.mount: Deactivated successfully.
Oct 13 05:49:33.572318 containerd[1527]: time="2025-10-13T05:49:33.572280717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:33.572912 containerd[1527]: time="2025-10-13T05:49:33.572830473Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Oct 13 05:49:33.573882 containerd[1527]: time="2025-10-13T05:49:33.573856931Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:33.576146 containerd[1527]: time="2025-10-13T05:49:33.576112603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:33.576821 containerd[1527]: time="2025-10-13T05:49:33.576788613Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 557.508169ms"
Oct 13 05:49:33.576892 containerd[1527]: time="2025-10-13T05:49:33.576823836Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Oct 13 05:49:33.577646 containerd[1527]: time="2025-10-13T05:49:33.577473359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Oct 13 05:49:35.145246 systemd-resolved[1400]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Oct 13 05:49:36.125939 containerd[1527]: time="2025-10-13T05:49:36.125871785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:36.126883 containerd[1527]: time="2025-10-13T05:49:36.126849874Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593"
Oct 13 05:49:36.127805 containerd[1527]: time="2025-10-13T05:49:36.127386696Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:36.130564 containerd[1527]: time="2025-10-13T05:49:36.130519043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:49:36.131585 containerd[1527]: time="2025-10-13T05:49:36.131552433Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.554051995s"
Oct 13 05:49:36.131585 containerd[1527]: time="2025-10-13T05:49:36.131586729Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Oct 13 05:49:40.128343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:49:40.128522 systemd[1]: kubelet.service: Consumed 224ms CPU time, 110.8M memory peak.
Oct 13 05:49:40.131660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:49:40.165882 systemd[1]: Reload requested from client PID 2228 ('systemctl') (unit session-7.scope)...
Oct 13 05:49:40.165900 systemd[1]: Reloading...
Oct 13 05:49:40.326036 zram_generator::config[2271]: No configuration found.
Oct 13 05:49:40.639597 systemd[1]: Reloading finished in 472 ms.
Oct 13 05:49:40.706958 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 13 05:49:40.707272 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 13 05:49:40.707850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:49:40.708073 systemd[1]: kubelet.service: Consumed 118ms CPU time, 98.2M memory peak.
Oct 13 05:49:40.710924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:49:40.872391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:49:40.884911 (kubelet)[2325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 13 05:49:40.936228 kubelet[2325]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 13 05:49:40.936228 kubelet[2325]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 05:49:40.936665 kubelet[2325]: I1013 05:49:40.936189 2325 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 13 05:49:41.321419 kubelet[2325]: I1013 05:49:41.321174 2325 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 13 05:49:41.321419 kubelet[2325]: I1013 05:49:41.321212 2325 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 13 05:49:41.325042 kubelet[2325]: I1013 05:49:41.324055 2325 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 13 05:49:41.325042 kubelet[2325]: I1013 05:49:41.324113 2325 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 13 05:49:41.325042 kubelet[2325]: I1013 05:49:41.324405 2325 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 13 05:49:41.332401 kubelet[2325]: I1013 05:49:41.332358 2325 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 13 05:49:41.333603 kubelet[2325]: E1013 05:49:41.333469 2325 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://137.184.180.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.180.203:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 13 05:49:41.344311 kubelet[2325]: I1013 05:49:41.344283 2325 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 13 05:49:41.351865 kubelet[2325]: I1013 05:49:41.351359 2325 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 13 05:49:41.352935 kubelet[2325]: I1013 05:49:41.352882 2325 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 13 05:49:41.354746 kubelet[2325]: I1013 05:49:41.353072 2325 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-5-82d9fc1916","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 13 05:49:41.355047 kubelet[2325]: I1013 05:49:41.355029 2325 topology_manager.go:138] "Creating topology manager with none policy"
Oct 13 05:49:41.355115 kubelet[2325]: I1013 05:49:41.355108 2325 container_manager_linux.go:306] "Creating device plugin manager"
Oct 13 05:49:41.355285 kubelet[2325]: I1013 05:49:41.355274 2325 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 13 05:49:41.357491 kubelet[2325]: I1013 05:49:41.357464 2325 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 05:49:41.357900 kubelet[2325]: I1013 05:49:41.357882 2325 kubelet.go:475] "Attempting to sync node with API server"
Oct 13 05:49:41.358016 kubelet[2325]: I1013 05:49:41.358003 2325 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 13 05:49:41.358097 kubelet[2325]: I1013 05:49:41.358088 2325 kubelet.go:387] "Adding apiserver pod source"
Oct 13 05:49:41.358181 kubelet[2325]: I1013 05:49:41.358172 2325 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 13 05:49:41.363341 kubelet[2325]: E1013 05:49:41.363281 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://137.184.180.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-5-82d9fc1916&limit=500&resourceVersion=0\": dial tcp 137.184.180.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 13 05:49:41.365995 kubelet[2325]: E1013 05:49:41.365028 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://137.184.180.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.180.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 13 05:49:41.367283 kubelet[2325]: I1013 05:49:41.367247 2325 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 13 05:49:41.367818 kubelet[2325]: I1013 05:49:41.367794 2325 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 13 05:49:41.367880 kubelet[2325]: I1013 05:49:41.367833 2325 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 13 05:49:41.367961 kubelet[2325]: W1013 05:49:41.367912 2325 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 13 05:49:41.372922 kubelet[2325]: I1013 05:49:41.372893 2325 server.go:1262] "Started kubelet"
Oct 13 05:49:41.374408 kubelet[2325]: I1013 05:49:41.374377 2325 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 13 05:49:41.380485 kubelet[2325]: E1013 05:49:41.377078 2325 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.180.203:6443/api/v1/namespaces/default/events\": dial tcp 137.184.180.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-5-82d9fc1916.186df6f9a0907cf5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-5-82d9fc1916,UID:ci-4459.1.0-5-82d9fc1916,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-5-82d9fc1916,},FirstTimestamp:2025-10-13 05:49:41.372845301 +0000 UTC m=+0.481267139,LastTimestamp:2025-10-13 05:49:41.372845301 +0000 UTC m=+0.481267139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-5-82d9fc1916,}"
Oct 13 05:49:41.380699 kubelet[2325]: I1013 05:49:41.380511 2325 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 13 05:49:41.381989 kubelet[2325]: I1013 05:49:41.381755 2325 server.go:310] "Adding debug handlers to kubelet server"
Oct 13 05:49:41.386572 kubelet[2325]: I1013 05:49:41.386506 2325 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 13 05:49:41.386682 kubelet[2325]: I1013 05:49:41.386592 2325 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 13 05:49:41.386806 kubelet[2325]: I1013 05:49:41.386790 2325 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 13 05:49:41.387328 kubelet[2325]: I1013 05:49:41.387306 2325 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 13 05:49:41.387710 kubelet[2325]: E1013 05:49:41.387681 2325 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-5-82d9fc1916\" not found"
Oct 13 05:49:41.387938 kubelet[2325]: I1013 05:49:41.387915 2325 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 13 05:49:41.390133 kubelet[2325]: E1013 05:49:41.390092 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.180.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-5-82d9fc1916?timeout=10s\": dial tcp 137.184.180.203:6443: connect: connection refused" interval="200ms"
Oct 13 05:49:41.390566 kubelet[2325]: I1013 05:49:41.390543 2325 reconciler.go:29] "Reconciler: start to sync state"
Oct 13 05:49:41.390639 kubelet[2325]: I1013 05:49:41.390583 2325 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 13 05:49:41.391723 kubelet[2325]: E1013 05:49:41.390953 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://137.184.180.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.180.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 13 05:49:41.393439 kubelet[2325]: I1013 05:49:41.392110 2325 factory.go:223] Registration of the systemd container factory successfully
Oct 13 05:49:41.393439 kubelet[2325]: I1013 05:49:41.392231 2325 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 13 05:49:41.393598 kubelet[2325]: I1013 05:49:41.393555 2325 factory.go:223] Registration of the containerd container factory successfully
Oct 13 05:49:41.401162 kubelet[2325]: I1013 05:49:41.401111 2325 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 13 05:49:41.403211 kubelet[2325]: I1013 05:49:41.403177 2325 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 13 05:49:41.403362 kubelet[2325]: I1013 05:49:41.403354 2325 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 13 05:49:41.403434 kubelet[2325]: I1013 05:49:41.403427 2325 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 13 05:49:41.403549 kubelet[2325]: E1013 05:49:41.403529 2325 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 13 05:49:41.416150 kubelet[2325]: E1013 05:49:41.416105 2325 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 13 05:49:41.416514 kubelet[2325]: E1013 05:49:41.416495 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://137.184.180.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.180.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 13 05:49:41.427611 kubelet[2325]: I1013 05:49:41.427579 2325 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 13 05:49:41.427611 kubelet[2325]: I1013 05:49:41.427597 2325 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 13 05:49:41.427611 kubelet[2325]: I1013 05:49:41.427618 2325 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 05:49:41.429175 kubelet[2325]: I1013 05:49:41.429151 2325 policy_none.go:49] "None policy: Start"
Oct 13 05:49:41.429175 kubelet[2325]: I1013 05:49:41.429173 2325 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 13 05:49:41.429313 kubelet[2325]: I1013 05:49:41.429185 2325 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 13 05:49:41.429916 kubelet[2325]: I1013 05:49:41.429898 2325 policy_none.go:47] "Start"
Oct 13 05:49:41.434986 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 13 05:49:41.445042 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 13 05:49:41.449170 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 13 05:49:41.460232 kubelet[2325]: E1013 05:49:41.460185 2325 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 13 05:49:41.460703 kubelet[2325]: I1013 05:49:41.460633 2325 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 13 05:49:41.460703 kubelet[2325]: I1013 05:49:41.460648 2325 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 13 05:49:41.461565 kubelet[2325]: I1013 05:49:41.461427 2325 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 13 05:49:41.462804 kubelet[2325]: E1013 05:49:41.462779 2325 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 13 05:49:41.462915 kubelet[2325]: E1013 05:49:41.462818 2325 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-5-82d9fc1916\" not found"
Oct 13 05:49:41.516371 systemd[1]: Created slice kubepods-burstable-podca16d03c7a800d89879d97ef66b34275.slice - libcontainer container kubepods-burstable-podca16d03c7a800d89879d97ef66b34275.slice.
Oct 13 05:49:41.528321 kubelet[2325]: E1013 05:49:41.528287 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-5-82d9fc1916\" not found" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.533231 systemd[1]: Created slice kubepods-burstable-pod3cc639de5618c86bef3b8a6f44953005.slice - libcontainer container kubepods-burstable-pod3cc639de5618c86bef3b8a6f44953005.slice.
Oct 13 05:49:41.536591 kubelet[2325]: E1013 05:49:41.536335 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-5-82d9fc1916\" not found" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.538437 systemd[1]: Created slice kubepods-burstable-podca148928e08cdd94190b2c89cf481fbb.slice - libcontainer container kubepods-burstable-podca148928e08cdd94190b2c89cf481fbb.slice.
Oct 13 05:49:41.541586 kubelet[2325]: E1013 05:49:41.541529 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-5-82d9fc1916\" not found" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.563178 kubelet[2325]: I1013 05:49:41.563117 2325 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.570001 kubelet[2325]: E1013 05:49:41.569128 2325 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.180.203:6443/api/v1/nodes\": dial tcp 137.184.180.203:6443: connect: connection refused" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.591263 kubelet[2325]: E1013 05:49:41.591205 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.180.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-5-82d9fc1916?timeout=10s\": dial tcp 137.184.180.203:6443: connect: connection refused" interval="400ms"
Oct 13 05:49:41.592326 kubelet[2325]: I1013 05:49:41.592282 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3cc639de5618c86bef3b8a6f44953005-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-5-82d9fc1916\" (UID: \"3cc639de5618c86bef3b8a6f44953005\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.592433 kubelet[2325]: I1013 05:49:41.592336 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca16d03c7a800d89879d97ef66b34275-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-5-82d9fc1916\" (UID: \"ca16d03c7a800d89879d97ef66b34275\") " pod="kube-system/kube-apiserver-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.592433 kubelet[2325]: I1013 05:49:41.592361 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca16d03c7a800d89879d97ef66b34275-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-5-82d9fc1916\" (UID: \"ca16d03c7a800d89879d97ef66b34275\") " pod="kube-system/kube-apiserver-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.592433 kubelet[2325]: I1013 05:49:41.592378 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca16d03c7a800d89879d97ef66b34275-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-5-82d9fc1916\" (UID: \"ca16d03c7a800d89879d97ef66b34275\") " pod="kube-system/kube-apiserver-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.592433 kubelet[2325]: I1013 05:49:41.592395 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3cc639de5618c86bef3b8a6f44953005-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-5-82d9fc1916\" (UID: \"3cc639de5618c86bef3b8a6f44953005\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.592433 kubelet[2325]: I1013 05:49:41.592411 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cc639de5618c86bef3b8a6f44953005-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-5-82d9fc1916\" (UID: \"3cc639de5618c86bef3b8a6f44953005\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.592558 kubelet[2325]: I1013 05:49:41.592426 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca148928e08cdd94190b2c89cf481fbb-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-5-82d9fc1916\" (UID: \"ca148928e08cdd94190b2c89cf481fbb\") " pod="kube-system/kube-scheduler-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.592558 kubelet[2325]: I1013 05:49:41.592440 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cc639de5618c86bef3b8a6f44953005-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-5-82d9fc1916\" (UID: \"3cc639de5618c86bef3b8a6f44953005\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.592558 kubelet[2325]: I1013 05:49:41.592455 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cc639de5618c86bef3b8a6f44953005-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-5-82d9fc1916\" (UID: \"3cc639de5618c86bef3b8a6f44953005\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.771362 kubelet[2325]: I1013 05:49:41.771320 2325 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.771719 kubelet[2325]: E1013 05:49:41.771693 2325 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.180.203:6443/api/v1/nodes\": dial tcp 137.184.180.203:6443: connect: connection refused" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:41.792940 kubelet[2325]: E1013 05:49:41.792737 2325 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.180.203:6443/api/v1/namespaces/default/events\": dial tcp 137.184.180.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-5-82d9fc1916.186df6f9a0907cf5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-5-82d9fc1916,UID:ci-4459.1.0-5-82d9fc1916,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-5-82d9fc1916,},FirstTimestamp:2025-10-13 05:49:41.372845301 +0000 UTC m=+0.481267139,LastTimestamp:2025-10-13 05:49:41.372845301 +0000 UTC m=+0.481267139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-5-82d9fc1916,}"
Oct 13 05:49:41.831189 kubelet[2325]: E1013 05:49:41.831138 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:41.833658 containerd[1527]: time="2025-10-13T05:49:41.832156789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-5-82d9fc1916,Uid:ca16d03c7a800d89879d97ef66b34275,Namespace:kube-system,Attempt:0,}"
Oct 13 05:49:41.839097 kubelet[2325]: E1013 05:49:41.839054 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:41.846413 kubelet[2325]: E1013 05:49:41.845585 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:41.848317 systemd-resolved[1400]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Oct 13 05:49:41.849281 containerd[1527]: time="2025-10-13T05:49:41.849081021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-5-82d9fc1916,Uid:3cc639de5618c86bef3b8a6f44953005,Namespace:kube-system,Attempt:0,}"
Oct 13 05:49:41.849624 containerd[1527]: time="2025-10-13T05:49:41.849596386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-5-82d9fc1916,Uid:ca148928e08cdd94190b2c89cf481fbb,Namespace:kube-system,Attempt:0,}"
Oct 13 05:49:41.992644 kubelet[2325]: E1013 05:49:41.992601 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.180.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-5-82d9fc1916?timeout=10s\": dial tcp 137.184.180.203:6443: connect: connection refused" interval="800ms"
Oct 13 05:49:42.174902 kubelet[2325]: I1013 05:49:42.174042 2325 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:42.175140 kubelet[2325]: E1013 05:49:42.174905 2325 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.180.203:6443/api/v1/nodes\": dial tcp 137.184.180.203:6443: connect: connection refused" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:42.248804 kubelet[2325]: E1013 05:49:42.248753 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://137.184.180.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-5-82d9fc1916&limit=500&resourceVersion=0\": dial tcp 137.184.180.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 13 05:49:42.320943 kubelet[2325]: E1013 05:49:42.320890 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://137.184.180.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.180.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 13 05:49:42.381011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535455229.mount: Deactivated successfully.
Oct 13 05:49:42.384601 containerd[1527]: time="2025-10-13T05:49:42.384516426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:49:42.386386 containerd[1527]: time="2025-10-13T05:49:42.386358519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 13 05:49:42.387328 containerd[1527]: time="2025-10-13T05:49:42.387299465Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:49:42.388554 containerd[1527]: time="2025-10-13T05:49:42.388518859Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Oct 13 05:49:42.389519 containerd[1527]: time="2025-10-13T05:49:42.389468107Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:49:42.390426 containerd[1527]: time="2025-10-13T05:49:42.390398469Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Oct 13 05:49:42.392549 containerd[1527]: time="2025-10-13T05:49:42.392521647Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:49:42.394554 containerd[1527]: time="2025-10-13T05:49:42.394524104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 541.93761ms"
Oct 13 05:49:42.405419 containerd[1527]: time="2025-10-13T05:49:42.405208177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 552.615528ms"
Oct 13 05:49:42.408732 containerd[1527]: time="2025-10-13T05:49:42.408688002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:49:42.417014 containerd[1527]: time="2025-10-13T05:49:42.415330241Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 568.633808ms"
Oct 13 05:49:42.523450 containerd[1527]: time="2025-10-13T05:49:42.523290542Z" level=info msg="connecting to shim e49c76a3a211ff2d3f3b2a15d0b568bff1ea8dbbd45809d44ad749f6af618726" address="unix:///run/containerd/s/84ff6678632ae9b720168350e952e5502468fe142585fbb7a739d39897b44aa2" namespace=k8s.io protocol=ttrpc version=3
Oct 13 05:49:42.524344 containerd[1527]: time="2025-10-13T05:49:42.524295286Z" level=info msg="connecting to shim c86e65d87775f2ec69c9aee36d48b0feb0571990eed3398b77ec95b3dba6c481" address="unix:///run/containerd/s/e41eab49b2768e44063f0c9e39f3f161d8857c0e0acdcd32511bf9463841bad3" namespace=k8s.io protocol=ttrpc version=3
Oct 13 05:49:42.533656 containerd[1527]: time="2025-10-13T05:49:42.533584224Z" level=info msg="connecting to shim 9f62d80da2bca732f382199bc741a6bb6f9c71b9f1791f5d54e3bd0d03da9807" address="unix:///run/containerd/s/eeb1f4a6d11de85df046beff393d31644cc9db71ed5ee447c1c0bd9cc9988157" namespace=k8s.io protocol=ttrpc version=3
Oct 13 05:49:42.568351 kubelet[2325]: E1013 05:49:42.568274 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://137.184.180.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.180.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 13 05:49:42.571175 kubelet[2325]: E1013 05:49:42.571113 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://137.184.180.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.180.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 13 05:49:42.646328 systemd[1]: Started cri-containerd-e49c76a3a211ff2d3f3b2a15d0b568bff1ea8dbbd45809d44ad749f6af618726.scope - libcontainer container e49c76a3a211ff2d3f3b2a15d0b568bff1ea8dbbd45809d44ad749f6af618726.
Oct 13 05:49:42.667736 systemd[1]: Started cri-containerd-9f62d80da2bca732f382199bc741a6bb6f9c71b9f1791f5d54e3bd0d03da9807.scope - libcontainer container 9f62d80da2bca732f382199bc741a6bb6f9c71b9f1791f5d54e3bd0d03da9807.
Oct 13 05:49:42.671523 systemd[1]: Started cri-containerd-c86e65d87775f2ec69c9aee36d48b0feb0571990eed3398b77ec95b3dba6c481.scope - libcontainer container c86e65d87775f2ec69c9aee36d48b0feb0571990eed3398b77ec95b3dba6c481.
Oct 13 05:49:42.776149 containerd[1527]: time="2025-10-13T05:49:42.775451531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-5-82d9fc1916,Uid:ca16d03c7a800d89879d97ef66b34275,Namespace:kube-system,Attempt:0,} returns sandbox id \"e49c76a3a211ff2d3f3b2a15d0b568bff1ea8dbbd45809d44ad749f6af618726\""
Oct 13 05:49:42.782124 kubelet[2325]: E1013 05:49:42.781541 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:42.790859 containerd[1527]: time="2025-10-13T05:49:42.789609366Z" level=info msg="CreateContainer within sandbox \"e49c76a3a211ff2d3f3b2a15d0b568bff1ea8dbbd45809d44ad749f6af618726\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 13 05:49:42.794688 kubelet[2325]: E1013 05:49:42.794634 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.180.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-5-82d9fc1916?timeout=10s\": dial tcp 137.184.180.203:6443: connect: connection refused" interval="1.6s"
Oct 13 05:49:42.828479 containerd[1527]: time="2025-10-13T05:49:42.828410887Z" level=info msg="Container dea8b250a9a911ecfe23df968cf0de406e43abfd67c41f716a850d908154cf55: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:49:42.834585 containerd[1527]: time="2025-10-13T05:49:42.834487533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-5-82d9fc1916,Uid:3cc639de5618c86bef3b8a6f44953005,Namespace:kube-system,Attempt:0,} returns sandbox id \"c86e65d87775f2ec69c9aee36d48b0feb0571990eed3398b77ec95b3dba6c481\""
Oct 13 05:49:42.837655 kubelet[2325]: E1013 05:49:42.837589 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:42.842999 containerd[1527]: time="2025-10-13T05:49:42.842214013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-5-82d9fc1916,Uid:ca148928e08cdd94190b2c89cf481fbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f62d80da2bca732f382199bc741a6bb6f9c71b9f1791f5d54e3bd0d03da9807\""
Oct 13 05:49:42.847097 kubelet[2325]: E1013 05:49:42.847056 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:42.848020 containerd[1527]: time="2025-10-13T05:49:42.847782176Z" level=info msg="CreateContainer within sandbox \"c86e65d87775f2ec69c9aee36d48b0feb0571990eed3398b77ec95b3dba6c481\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 13 05:49:42.849940 containerd[1527]: time="2025-10-13T05:49:42.849888925Z" level=info msg="CreateContainer within sandbox \"e49c76a3a211ff2d3f3b2a15d0b568bff1ea8dbbd45809d44ad749f6af618726\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dea8b250a9a911ecfe23df968cf0de406e43abfd67c41f716a850d908154cf55\""
Oct 13 05:49:42.851880 containerd[1527]: time="2025-10-13T05:49:42.851827358Z" level=info msg="StartContainer for \"dea8b250a9a911ecfe23df968cf0de406e43abfd67c41f716a850d908154cf55\""
Oct 13 05:49:42.853903 containerd[1527]: time="2025-10-13T05:49:42.853790392Z" level=info msg="connecting to shim dea8b250a9a911ecfe23df968cf0de406e43abfd67c41f716a850d908154cf55" address="unix:///run/containerd/s/84ff6678632ae9b720168350e952e5502468fe142585fbb7a739d39897b44aa2" protocol=ttrpc version=3
Oct 13 05:49:42.857090 containerd[1527]: time="2025-10-13T05:49:42.857034142Z" level=info msg="CreateContainer within sandbox \"9f62d80da2bca732f382199bc741a6bb6f9c71b9f1791f5d54e3bd0d03da9807\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 13 05:49:42.863669 containerd[1527]: time="2025-10-13T05:49:42.863609221Z" level=info msg="Container 3eca25059e54d87d364010cf5436f9a3971e8da50e73004f2083ea187896a66f: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:49:42.870032 containerd[1527]: time="2025-10-13T05:49:42.869730086Z" level=info msg="Container dcd7e45ef0d1eed256f25d5ff4b292ea92d95fb2a4ecf65c32dce89dbd8954f2: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:49:42.876640 containerd[1527]: time="2025-10-13T05:49:42.876578814Z" level=info msg="CreateContainer within sandbox \"9f62d80da2bca732f382199bc741a6bb6f9c71b9f1791f5d54e3bd0d03da9807\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dcd7e45ef0d1eed256f25d5ff4b292ea92d95fb2a4ecf65c32dce89dbd8954f2\""
Oct 13 05:49:42.878116 containerd[1527]: time="2025-10-13T05:49:42.878068559Z" level=info msg="StartContainer for \"dcd7e45ef0d1eed256f25d5ff4b292ea92d95fb2a4ecf65c32dce89dbd8954f2\""
Oct 13 05:49:42.878410 containerd[1527]: time="2025-10-13T05:49:42.878105545Z" level=info msg="CreateContainer within sandbox \"c86e65d87775f2ec69c9aee36d48b0feb0571990eed3398b77ec95b3dba6c481\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3eca25059e54d87d364010cf5436f9a3971e8da50e73004f2083ea187896a66f\""
Oct 13 05:49:42.879878 containerd[1527]: time="2025-10-13T05:49:42.879828901Z" level=info msg="connecting to shim dcd7e45ef0d1eed256f25d5ff4b292ea92d95fb2a4ecf65c32dce89dbd8954f2" address="unix:///run/containerd/s/eeb1f4a6d11de85df046beff393d31644cc9db71ed5ee447c1c0bd9cc9988157" protocol=ttrpc version=3
Oct 13 05:49:42.881358 containerd[1527]: time="2025-10-13T05:49:42.880584755Z" level=info msg="StartContainer for \"3eca25059e54d87d364010cf5436f9a3971e8da50e73004f2083ea187896a66f\""
Oct 13 05:49:42.883716 containerd[1527]: time="2025-10-13T05:49:42.883665177Z" level=info msg="connecting to shim 3eca25059e54d87d364010cf5436f9a3971e8da50e73004f2083ea187896a66f" address="unix:///run/containerd/s/e41eab49b2768e44063f0c9e39f3f161d8857c0e0acdcd32511bf9463841bad3" protocol=ttrpc version=3
Oct 13 05:49:42.901556 systemd[1]: Started cri-containerd-dea8b250a9a911ecfe23df968cf0de406e43abfd67c41f716a850d908154cf55.scope - libcontainer container dea8b250a9a911ecfe23df968cf0de406e43abfd67c41f716a850d908154cf55.
Oct 13 05:49:42.934881 systemd[1]: Started cri-containerd-dcd7e45ef0d1eed256f25d5ff4b292ea92d95fb2a4ecf65c32dce89dbd8954f2.scope - libcontainer container dcd7e45ef0d1eed256f25d5ff4b292ea92d95fb2a4ecf65c32dce89dbd8954f2.
Oct 13 05:49:42.948298 systemd[1]: Started cri-containerd-3eca25059e54d87d364010cf5436f9a3971e8da50e73004f2083ea187896a66f.scope - libcontainer container 3eca25059e54d87d364010cf5436f9a3971e8da50e73004f2083ea187896a66f.
Oct 13 05:49:42.976335 kubelet[2325]: I1013 05:49:42.976303 2325 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:42.978203 kubelet[2325]: E1013 05:49:42.978154 2325 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.180.203:6443/api/v1/nodes\": dial tcp 137.184.180.203:6443: connect: connection refused" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:43.068371 containerd[1527]: time="2025-10-13T05:49:43.068206985Z" level=info msg="StartContainer for \"dea8b250a9a911ecfe23df968cf0de406e43abfd67c41f716a850d908154cf55\" returns successfully"
Oct 13 05:49:43.076120 containerd[1527]: time="2025-10-13T05:49:43.076052331Z" level=info msg="StartContainer for \"3eca25059e54d87d364010cf5436f9a3971e8da50e73004f2083ea187896a66f\" returns successfully"
Oct 13 05:49:43.129263 containerd[1527]: time="2025-10-13T05:49:43.129184809Z" level=info msg="StartContainer for \"dcd7e45ef0d1eed256f25d5ff4b292ea92d95fb2a4ecf65c32dce89dbd8954f2\" returns successfully"
Oct 13 05:49:43.439861 kubelet[2325]: E1013 05:49:43.439261 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-5-82d9fc1916\" not found" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:43.439861 kubelet[2325]: E1013 05:49:43.439447 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:43.446170 kubelet[2325]: E1013 05:49:43.445699 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-5-82d9fc1916\" not found" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:43.446170 kubelet[2325]: E1013 05:49:43.445910 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:43.446752 kubelet[2325]: E1013 05:49:43.446726 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-5-82d9fc1916\" not found" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:43.447184 kubelet[2325]: E1013 05:49:43.447163 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:44.451348 kubelet[2325]: E1013 05:49:44.449928 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-5-82d9fc1916\" not found" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:44.451348 kubelet[2325]: E1013 05:49:44.450108 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:44.451348 kubelet[2325]: E1013 05:49:44.451157 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-5-82d9fc1916\" not found" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:44.452121 kubelet[2325]: E1013 05:49:44.452008 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:44.581161 kubelet[2325]: I1013 05:49:44.580298 2325 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:45.898431 kubelet[2325]: E1013 05:49:45.898380 2325 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.1.0-5-82d9fc1916\" not found" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:45.916523 kubelet[2325]: I1013 05:49:45.915998 2325 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:45.989151 kubelet[2325]: I1013 05:49:45.989101 2325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:46.000400 kubelet[2325]: E1013 05:49:46.000361 2325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-5-82d9fc1916\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:46.000809 kubelet[2325]: I1013 05:49:46.000588 2325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:46.005829 kubelet[2325]: E1013 05:49:46.005577 2325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-5-82d9fc1916\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:46.005829 kubelet[2325]: I1013 05:49:46.005610 2325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:46.008096 kubelet[2325]: E1013 05:49:46.008061 2325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-5-82d9fc1916\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:46.361349 kubelet[2325]: I1013 05:49:46.361146 2325 apiserver.go:52] "Watching apiserver"
Oct 13 05:49:46.391585 kubelet[2325]: I1013 05:49:46.391482 2325 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 13 05:49:47.868131 systemd[1]: Reload requested from client PID 2612 ('systemctl') (unit session-7.scope)...
Oct 13 05:49:47.868148 systemd[1]: Reloading...
Oct 13 05:49:47.995003 zram_generator::config[2661]: No configuration found.
Oct 13 05:49:48.245427 systemd[1]: Reloading finished in 376 ms.
Oct 13 05:49:48.277519 kubelet[2325]: I1013 05:49:48.277101 2325 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 13 05:49:48.277957 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:49:48.295455 systemd[1]: kubelet.service: Deactivated successfully.
Oct 13 05:49:48.295689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:49:48.295755 systemd[1]: kubelet.service: Consumed 924ms CPU time, 120.5M memory peak.
Oct 13 05:49:48.298798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:49:48.474786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:49:48.488032 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 13 05:49:48.563499 kubelet[2706]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 13 05:49:48.563499 kubelet[2706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 05:49:48.563499 kubelet[2706]: I1013 05:49:48.562177 2706 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 13 05:49:48.571301 kubelet[2706]: I1013 05:49:48.571246 2706 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 13 05:49:48.571301 kubelet[2706]: I1013 05:49:48.571282 2706 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 13 05:49:48.571529 kubelet[2706]: I1013 05:49:48.571320 2706 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 13 05:49:48.571529 kubelet[2706]: I1013 05:49:48.571329 2706 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 13 05:49:48.571740 kubelet[2706]: I1013 05:49:48.571710 2706 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 13 05:49:48.573158 kubelet[2706]: I1013 05:49:48.573122 2706 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 13 05:49:48.577319 kubelet[2706]: I1013 05:49:48.577114 2706 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 13 05:49:48.585428 kubelet[2706]: I1013 05:49:48.585394 2706 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 13 05:49:48.592581 kubelet[2706]: I1013 05:49:48.592540 2706 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 13 05:49:48.596243 kubelet[2706]: I1013 05:49:48.596166 2706 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 13 05:49:48.596710 kubelet[2706]: I1013 05:49:48.596443 2706 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-5-82d9fc1916","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 13 05:49:48.596879 kubelet[2706]: I1013 05:49:48.596718 2706 topology_manager.go:138] "Creating topology manager with none policy"
Oct 13 05:49:48.596879 kubelet[2706]: I1013 05:49:48.596755 2706 container_manager_linux.go:306] "Creating device plugin manager"
Oct 13 05:49:48.596879 kubelet[2706]: I1013 05:49:48.596800 2706 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 13 05:49:48.598536 kubelet[2706]: I1013 05:49:48.598511 2706 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 05:49:48.598852 kubelet[2706]: I1013 05:49:48.598835 2706 kubelet.go:475] "Attempting to sync node with API server"
Oct 13 05:49:48.598927 kubelet[2706]: I1013 05:49:48.598857 2706 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 13 05:49:48.599035 kubelet[2706]: I1013 05:49:48.598948 2706 kubelet.go:387] "Adding apiserver pod source"
Oct 13 05:49:48.599035 kubelet[2706]: I1013 05:49:48.599033 2706 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 13 05:49:48.605002 kubelet[2706]: I1013 05:49:48.603734 2706 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 13 05:49:48.606298 kubelet[2706]: I1013 05:49:48.606257 2706 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 13 05:49:48.606455 kubelet[2706]: I1013 05:49:48.606300 2706 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 13 05:49:48.618501 kubelet[2706]: I1013 05:49:48.617124 2706 server.go:1262] "Started kubelet"
Oct 13 05:49:48.627203 kubelet[2706]: I1013 05:49:48.627144 2706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 13 05:49:48.631731 kubelet[2706]: I1013 05:49:48.630834 2706 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 13 05:49:48.634006 kubelet[2706]: I1013 05:49:48.633960 2706 server.go:310] "Adding debug handlers to kubelet server"
Oct 13 05:49:48.635130 kubelet[2706]: I1013 05:49:48.635091 2706 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 13 05:49:48.635236 kubelet[2706]: I1013 05:49:48.635144 2706 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 13 05:49:48.635344 kubelet[2706]: I1013 05:49:48.635330 2706 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 13 05:49:48.636226 kubelet[2706]: I1013 05:49:48.636202 2706 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 13 05:49:48.638384 kubelet[2706]: I1013 05:49:48.636326 2706 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 13 05:49:48.638905 kubelet[2706]: I1013 05:49:48.636332 2706 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 13 05:49:48.639286 kubelet[2706]: I1013 05:49:48.639201 2706 reconciler.go:29] "Reconciler: start to sync state"
Oct 13 05:49:48.641998 kubelet[2706]: I1013 05:49:48.641884 2706 factory.go:223] Registration of the systemd container factory successfully
Oct 13 05:49:48.642115 kubelet[2706]: I1013 05:49:48.642039 2706 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 13 05:49:48.645509 kubelet[2706]: E1013 05:49:48.645427 2706 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 13 05:49:48.647019 kubelet[2706]: I1013 05:49:48.646275 2706 factory.go:223] Registration of the containerd container factory successfully
Oct 13 05:49:48.675823 kubelet[2706]: I1013 05:49:48.675781 2706 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 13 05:49:48.678119 kubelet[2706]: I1013 05:49:48.678077 2706 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 13 05:49:48.678119 kubelet[2706]: I1013 05:49:48.678111 2706 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 13 05:49:48.678321 kubelet[2706]: I1013 05:49:48.678145 2706 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 13 05:49:48.678321 kubelet[2706]: E1013 05:49:48.678212 2706 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 13 05:49:48.711329 kubelet[2706]: I1013 05:49:48.711290 2706 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 13 05:49:48.711329 kubelet[2706]: I1013 05:49:48.711316 2706 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 13 05:49:48.711329 kubelet[2706]: I1013 05:49:48.711344 2706 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 05:49:48.711603 kubelet[2706]: I1013 05:49:48.711531 2706 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 13 05:49:48.711603 kubelet[2706]: I1013 05:49:48.711546 2706 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 13 05:49:48.711679 kubelet[2706]: I1013 05:49:48.711612 2706 policy_none.go:49] "None policy: Start"
Oct 13 05:49:48.711679 kubelet[2706]: I1013 05:49:48.711630 2706 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 13 05:49:48.711679 kubelet[2706]: I1013 05:49:48.711645 2706 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 13 05:49:48.714292 kubelet[2706]: I1013 05:49:48.714174 2706 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Oct 13 05:49:48.714292 kubelet[2706]: I1013 05:49:48.714236 2706 policy_none.go:47] "Start"
Oct 13 05:49:48.720581 kubelet[2706]: E1013 05:49:48.720524 2706 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 13 05:49:48.720827 kubelet[2706]: I1013 05:49:48.720797 2706 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 13 05:49:48.720909 kubelet[2706]: I1013 05:49:48.720817 2706 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 13 05:49:48.724490 kubelet[2706]: I1013 05:49:48.724428 2706 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 13 05:49:48.727917 kubelet[2706]: E1013 05:49:48.727868 2706 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 13 05:49:48.780007 kubelet[2706]: I1013 05:49:48.779864 2706 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.780185 kubelet[2706]: I1013 05:49:48.780149 2706 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.780628 kubelet[2706]: I1013 05:49:48.780599 2706 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.790529 kubelet[2706]: I1013 05:49:48.790491 2706 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 13 05:49:48.791362 kubelet[2706]: I1013 05:49:48.791166 2706 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 13 05:49:48.792826 kubelet[2706]: I1013 05:49:48.791251 2706 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 13 05:49:48.831790 kubelet[2706]: I1013 05:49:48.831725 2706 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.840855 kubelet[2706]: I1013 05:49:48.840546 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca148928e08cdd94190b2c89cf481fbb-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-5-82d9fc1916\" (UID: \"ca148928e08cdd94190b2c89cf481fbb\") " pod="kube-system/kube-scheduler-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.840855 kubelet[2706]: I1013 05:49:48.840606 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca16d03c7a800d89879d97ef66b34275-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-5-82d9fc1916\" (UID: \"ca16d03c7a800d89879d97ef66b34275\") " pod="kube-system/kube-apiserver-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.840855 kubelet[2706]: I1013 05:49:48.840625 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cc639de5618c86bef3b8a6f44953005-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-5-82d9fc1916\" (UID: \"3cc639de5618c86bef3b8a6f44953005\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.840855 kubelet[2706]: I1013 05:49:48.840642 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cc639de5618c86bef3b8a6f44953005-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-5-82d9fc1916\" (UID: \"3cc639de5618c86bef3b8a6f44953005\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.840855 kubelet[2706]: I1013 05:49:48.840672 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca16d03c7a800d89879d97ef66b34275-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-5-82d9fc1916\" (UID: \"ca16d03c7a800d89879d97ef66b34275\") " pod="kube-system/kube-apiserver-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.842503 kubelet[2706]: I1013 05:49:48.840688 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca16d03c7a800d89879d97ef66b34275-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-5-82d9fc1916\" (UID: \"ca16d03c7a800d89879d97ef66b34275\") " pod="kube-system/kube-apiserver-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.842503 kubelet[2706]: I1013 05:49:48.840704 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3cc639de5618c86bef3b8a6f44953005-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-5-82d9fc1916\" (UID: \"3cc639de5618c86bef3b8a6f44953005\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.842503 kubelet[2706]: I1013 05:49:48.840722 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cc639de5618c86bef3b8a6f44953005-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-5-82d9fc1916\" (UID: \"3cc639de5618c86bef3b8a6f44953005\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.842503 kubelet[2706]: I1013 05:49:48.840738 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3cc639de5618c86bef3b8a6f44953005-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-5-82d9fc1916\" (UID: \"3cc639de5618c86bef3b8a6f44953005\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.845890 kubelet[2706]: I1013 05:49:48.845816 2706 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:48.846796 kubelet[2706]: I1013 05:49:48.846727 2706 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:49.095576 kubelet[2706]: E1013 05:49:49.093514 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:49.096026 kubelet[2706]: E1013 05:49:49.096002 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:49.096359 kubelet[2706]: E1013 05:49:49.096309 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:49.600942 kubelet[2706]: I1013 05:49:49.600321 2706 apiserver.go:52] "Watching apiserver"
Oct 13 05:49:49.642015 kubelet[2706]: I1013 05:49:49.641918 2706 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 13 05:49:49.710776 kubelet[2706]: E1013 05:49:49.709758 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 13 05:49:49.711860 kubelet[2706]: I1013 05:49:49.711755 2706 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-5-82d9fc1916"
Oct 13 05:49:49.716010 kubelet[2706]: E1013 05:49:49.715477 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted,
the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:49.732585 kubelet[2706]: I1013 05:49:49.732138 2706 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:49:49.732585 kubelet[2706]: E1013 05:49:49.732225 2706 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-5-82d9fc1916\" already exists" pod="kube-system/kube-scheduler-ci-4459.1.0-5-82d9fc1916" Oct 13 05:49:49.732585 kubelet[2706]: E1013 05:49:49.732474 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:49.813790 kubelet[2706]: I1013 05:49:49.812449 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-5-82d9fc1916" podStartSLOduration=1.8124262020000002 podStartE2EDuration="1.812426202s" podCreationTimestamp="2025-10-13 05:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:49:49.768039357 +0000 UTC m=+1.273750106" watchObservedRunningTime="2025-10-13 05:49:49.812426202 +0000 UTC m=+1.318136951" Oct 13 05:49:49.815099 kubelet[2706]: I1013 05:49:49.814295 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-5-82d9fc1916" podStartSLOduration=1.814272414 podStartE2EDuration="1.814272414s" podCreationTimestamp="2025-10-13 05:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:49:49.806219507 +0000 UTC m=+1.311930257" watchObservedRunningTime="2025-10-13 05:49:49.814272414 +0000 UTC m=+1.319983169" Oct 13 05:49:50.711001 kubelet[2706]: 
E1013 05:49:50.710719 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:50.711001 kubelet[2706]: E1013 05:49:50.710903 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:53.478535 kubelet[2706]: E1013 05:49:53.478495 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:53.496079 kubelet[2706]: I1013 05:49:53.495967 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-5-82d9fc1916" podStartSLOduration=5.495950222 podStartE2EDuration="5.495950222s" podCreationTimestamp="2025-10-13 05:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:49:49.829244963 +0000 UTC m=+1.334955712" watchObservedRunningTime="2025-10-13 05:49:53.495950222 +0000 UTC m=+5.001660968" Oct 13 05:49:53.717708 kubelet[2706]: E1013 05:49:53.717215 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:54.647330 kubelet[2706]: I1013 05:49:54.647292 2706 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:49:54.647848 kubelet[2706]: I1013 05:49:54.647790 2706 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:49:54.647891 containerd[1527]: time="2025-10-13T05:49:54.647615490Z" level=info msg="No cni config template is specified, wait for 
other system components to drop the config." Oct 13 05:49:54.720830 kubelet[2706]: E1013 05:49:54.720784 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:55.491382 kubelet[2706]: E1013 05:49:55.491202 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:55.668775 systemd[1]: Created slice kubepods-besteffort-pod9f757b99_9bcf_477c_b21b_74084338503a.slice - libcontainer container kubepods-besteffort-pod9f757b99_9bcf_477c_b21b_74084338503a.slice. Oct 13 05:49:55.689895 kubelet[2706]: I1013 05:49:55.689718 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzjvc\" (UniqueName: \"kubernetes.io/projected/9f757b99-9bcf-477c-b21b-74084338503a-kube-api-access-kzjvc\") pod \"kube-proxy-qcj5s\" (UID: \"9f757b99-9bcf-477c-b21b-74084338503a\") " pod="kube-system/kube-proxy-qcj5s" Oct 13 05:49:55.690346 kubelet[2706]: I1013 05:49:55.689923 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f757b99-9bcf-477c-b21b-74084338503a-kube-proxy\") pod \"kube-proxy-qcj5s\" (UID: \"9f757b99-9bcf-477c-b21b-74084338503a\") " pod="kube-system/kube-proxy-qcj5s" Oct 13 05:49:55.690346 kubelet[2706]: I1013 05:49:55.689951 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f757b99-9bcf-477c-b21b-74084338503a-xtables-lock\") pod \"kube-proxy-qcj5s\" (UID: \"9f757b99-9bcf-477c-b21b-74084338503a\") " pod="kube-system/kube-proxy-qcj5s" Oct 13 05:49:55.690346 kubelet[2706]: I1013 05:49:55.689993 2706 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f757b99-9bcf-477c-b21b-74084338503a-lib-modules\") pod \"kube-proxy-qcj5s\" (UID: \"9f757b99-9bcf-477c-b21b-74084338503a\") " pod="kube-system/kube-proxy-qcj5s" Oct 13 05:49:55.723134 kubelet[2706]: E1013 05:49:55.722675 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:55.882733 systemd[1]: Created slice kubepods-besteffort-podf8866abc_676b_47cc_be23_b0ba3e1e3581.slice - libcontainer container kubepods-besteffort-podf8866abc_676b_47cc_be23_b0ba3e1e3581.slice. Oct 13 05:49:55.891348 kubelet[2706]: I1013 05:49:55.891289 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-897st\" (UniqueName: \"kubernetes.io/projected/f8866abc-676b-47cc-be23-b0ba3e1e3581-kube-api-access-897st\") pod \"tigera-operator-db78d5bd4-stqhc\" (UID: \"f8866abc-676b-47cc-be23-b0ba3e1e3581\") " pod="tigera-operator/tigera-operator-db78d5bd4-stqhc" Oct 13 05:49:55.891348 kubelet[2706]: I1013 05:49:55.891338 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f8866abc-676b-47cc-be23-b0ba3e1e3581-var-lib-calico\") pod \"tigera-operator-db78d5bd4-stqhc\" (UID: \"f8866abc-676b-47cc-be23-b0ba3e1e3581\") " pod="tigera-operator/tigera-operator-db78d5bd4-stqhc" Oct 13 05:49:55.979123 kubelet[2706]: E1013 05:49:55.979063 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:55.980072 containerd[1527]: time="2025-10-13T05:49:55.980020271Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-qcj5s,Uid:9f757b99-9bcf-477c-b21b-74084338503a,Namespace:kube-system,Attempt:0,}" Oct 13 05:49:56.008878 containerd[1527]: time="2025-10-13T05:49:56.008823583Z" level=info msg="connecting to shim db06dd79479558aa5c8afe0025aa6ad607f585d9839da2f53d9aaf015f99246e" address="unix:///run/containerd/s/6522a55c784c82fc00a39d1fbdba1c8f0f5b153024bd7f05c9d66c6b40e1e46d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:49:56.045527 systemd[1]: Started cri-containerd-db06dd79479558aa5c8afe0025aa6ad607f585d9839da2f53d9aaf015f99246e.scope - libcontainer container db06dd79479558aa5c8afe0025aa6ad607f585d9839da2f53d9aaf015f99246e. Oct 13 05:49:56.079745 containerd[1527]: time="2025-10-13T05:49:56.079681011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qcj5s,Uid:9f757b99-9bcf-477c-b21b-74084338503a,Namespace:kube-system,Attempt:0,} returns sandbox id \"db06dd79479558aa5c8afe0025aa6ad607f585d9839da2f53d9aaf015f99246e\"" Oct 13 05:49:56.081294 kubelet[2706]: E1013 05:49:56.081254 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:56.098169 containerd[1527]: time="2025-10-13T05:49:56.096810219Z" level=info msg="CreateContainer within sandbox \"db06dd79479558aa5c8afe0025aa6ad607f585d9839da2f53d9aaf015f99246e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:49:56.108624 containerd[1527]: time="2025-10-13T05:49:56.108573236Z" level=info msg="Container 5ec71cab33fecd4b9e7e9766665b3e2f386b322b501972d157a9303c8f61b183: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:49:56.125050 containerd[1527]: time="2025-10-13T05:49:56.124836078Z" level=info msg="CreateContainer within sandbox \"db06dd79479558aa5c8afe0025aa6ad607f585d9839da2f53d9aaf015f99246e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"5ec71cab33fecd4b9e7e9766665b3e2f386b322b501972d157a9303c8f61b183\"" Oct 13 05:49:56.126964 containerd[1527]: time="2025-10-13T05:49:56.126480526Z" level=info msg="StartContainer for \"5ec71cab33fecd4b9e7e9766665b3e2f386b322b501972d157a9303c8f61b183\"" Oct 13 05:49:56.129465 containerd[1527]: time="2025-10-13T05:49:56.129358026Z" level=info msg="connecting to shim 5ec71cab33fecd4b9e7e9766665b3e2f386b322b501972d157a9303c8f61b183" address="unix:///run/containerd/s/6522a55c784c82fc00a39d1fbdba1c8f0f5b153024bd7f05c9d66c6b40e1e46d" protocol=ttrpc version=3 Oct 13 05:49:56.154215 systemd[1]: Started cri-containerd-5ec71cab33fecd4b9e7e9766665b3e2f386b322b501972d157a9303c8f61b183.scope - libcontainer container 5ec71cab33fecd4b9e7e9766665b3e2f386b322b501972d157a9303c8f61b183. Oct 13 05:49:56.192587 containerd[1527]: time="2025-10-13T05:49:56.192295293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-stqhc,Uid:f8866abc-676b-47cc-be23-b0ba3e1e3581,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:49:56.217066 containerd[1527]: time="2025-10-13T05:49:56.217007059Z" level=info msg="connecting to shim 6792d94de295f7850c1f25dd9f8bfc862eb17c261fda50408c081ecaea07da45" address="unix:///run/containerd/s/41e9ed980097041623c55267d799e635336ec0f378f0116c72d6a53b071e5b15" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:49:56.222586 containerd[1527]: time="2025-10-13T05:49:56.222509410Z" level=info msg="StartContainer for \"5ec71cab33fecd4b9e7e9766665b3e2f386b322b501972d157a9303c8f61b183\" returns successfully" Oct 13 05:49:56.272328 systemd[1]: Started cri-containerd-6792d94de295f7850c1f25dd9f8bfc862eb17c261fda50408c081ecaea07da45.scope - libcontainer container 6792d94de295f7850c1f25dd9f8bfc862eb17c261fda50408c081ecaea07da45. 
Oct 13 05:49:56.362234 containerd[1527]: time="2025-10-13T05:49:56.362072909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-stqhc,Uid:f8866abc-676b-47cc-be23-b0ba3e1e3581,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6792d94de295f7850c1f25dd9f8bfc862eb17c261fda50408c081ecaea07da45\"" Oct 13 05:49:56.366291 containerd[1527]: time="2025-10-13T05:49:56.366234788Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:49:56.732137 kubelet[2706]: E1013 05:49:56.732104 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:56.751113 kubelet[2706]: I1013 05:49:56.751035 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qcj5s" podStartSLOduration=1.7510146880000002 podStartE2EDuration="1.751014688s" podCreationTimestamp="2025-10-13 05:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:49:56.7500375 +0000 UTC m=+8.255748245" watchObservedRunningTime="2025-10-13 05:49:56.751014688 +0000 UTC m=+8.256725428" Oct 13 05:49:58.066242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3612574775.mount: Deactivated successfully. 
Oct 13 05:49:58.207002 kubelet[2706]: E1013 05:49:58.206610 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:58.743053 kubelet[2706]: E1013 05:49:58.742779 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:49:59.494523 containerd[1527]: time="2025-10-13T05:49:59.494462073Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:59.495962 containerd[1527]: time="2025-10-13T05:49:59.495920526Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:49:59.497185 containerd[1527]: time="2025-10-13T05:49:59.497134503Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:59.499028 containerd[1527]: time="2025-10-13T05:49:59.498989913Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:49:59.500425 containerd[1527]: time="2025-10-13T05:49:59.500388572Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.134107985s" Oct 13 05:49:59.500527 containerd[1527]: time="2025-10-13T05:49:59.500426842Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:49:59.505134 containerd[1527]: time="2025-10-13T05:49:59.505090325Z" level=info msg="CreateContainer within sandbox \"6792d94de295f7850c1f25dd9f8bfc862eb17c261fda50408c081ecaea07da45\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:49:59.514662 containerd[1527]: time="2025-10-13T05:49:59.514617015Z" level=info msg="Container 3bcc854ce710d83c4f6d8323ba13a5c54a0aa271175bca8b157e275f32eccab7: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:49:59.518000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657864931.mount: Deactivated successfully. Oct 13 05:49:59.523038 containerd[1527]: time="2025-10-13T05:49:59.522987936Z" level=info msg="CreateContainer within sandbox \"6792d94de295f7850c1f25dd9f8bfc862eb17c261fda50408c081ecaea07da45\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3bcc854ce710d83c4f6d8323ba13a5c54a0aa271175bca8b157e275f32eccab7\"" Oct 13 05:49:59.524185 containerd[1527]: time="2025-10-13T05:49:59.524137858Z" level=info msg="StartContainer for \"3bcc854ce710d83c4f6d8323ba13a5c54a0aa271175bca8b157e275f32eccab7\"" Oct 13 05:49:59.526307 containerd[1527]: time="2025-10-13T05:49:59.526197986Z" level=info msg="connecting to shim 3bcc854ce710d83c4f6d8323ba13a5c54a0aa271175bca8b157e275f32eccab7" address="unix:///run/containerd/s/41e9ed980097041623c55267d799e635336ec0f378f0116c72d6a53b071e5b15" protocol=ttrpc version=3 Oct 13 05:49:59.561357 systemd[1]: Started cri-containerd-3bcc854ce710d83c4f6d8323ba13a5c54a0aa271175bca8b157e275f32eccab7.scope - libcontainer container 3bcc854ce710d83c4f6d8323ba13a5c54a0aa271175bca8b157e275f32eccab7. 
Oct 13 05:49:59.603074 containerd[1527]: time="2025-10-13T05:49:59.602211541Z" level=info msg="StartContainer for \"3bcc854ce710d83c4f6d8323ba13a5c54a0aa271175bca8b157e275f32eccab7\" returns successfully" Oct 13 05:49:59.749427 kubelet[2706]: E1013 05:49:59.747762 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:02.321553 update_engine[1498]: I20251013 05:50:02.320511 1498 update_attempter.cc:509] Updating boot flags... Oct 13 05:50:08.170385 sudo[1779]: pam_unix(sudo:session): session closed for user root Oct 13 05:50:08.174321 sshd[1778]: Connection closed by 139.178.89.65 port 60838 Oct 13 05:50:08.176130 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:08.182634 systemd[1]: sshd@6-137.184.180.203:22-139.178.89.65:60838.service: Deactivated successfully. Oct 13 05:50:08.189366 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:50:08.190897 systemd[1]: session-7.scope: Consumed 6.771s CPU time, 167.5M memory peak. Oct 13 05:50:08.196791 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:50:08.201344 systemd-logind[1497]: Removed session 7. 
Oct 13 05:50:12.155125 kubelet[2706]: I1013 05:50:12.152123 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-db78d5bd4-stqhc" podStartSLOduration=14.016138999 podStartE2EDuration="17.152095431s" podCreationTimestamp="2025-10-13 05:49:55 +0000 UTC" firstStartedPulling="2025-10-13 05:49:56.36522263 +0000 UTC m=+7.870933362" lastFinishedPulling="2025-10-13 05:49:59.501179064 +0000 UTC m=+11.006889794" observedRunningTime="2025-10-13 05:49:59.760746546 +0000 UTC m=+11.266457294" watchObservedRunningTime="2025-10-13 05:50:12.152095431 +0000 UTC m=+23.657806182" Oct 13 05:50:12.175373 systemd[1]: Created slice kubepods-besteffort-pod04a385c3_43f5_458b_aaa2_86b4509a6308.slice - libcontainer container kubepods-besteffort-pod04a385c3_43f5_458b_aaa2_86b4509a6308.slice. Oct 13 05:50:12.212309 kubelet[2706]: I1013 05:50:12.212235 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/04a385c3-43f5-458b-aaa2-86b4509a6308-typha-certs\") pod \"calico-typha-7bcd7f948b-scvll\" (UID: \"04a385c3-43f5-458b-aaa2-86b4509a6308\") " pod="calico-system/calico-typha-7bcd7f948b-scvll" Oct 13 05:50:12.213304 kubelet[2706]: I1013 05:50:12.212677 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6txzj\" (UniqueName: \"kubernetes.io/projected/04a385c3-43f5-458b-aaa2-86b4509a6308-kube-api-access-6txzj\") pod \"calico-typha-7bcd7f948b-scvll\" (UID: \"04a385c3-43f5-458b-aaa2-86b4509a6308\") " pod="calico-system/calico-typha-7bcd7f948b-scvll" Oct 13 05:50:12.213304 kubelet[2706]: I1013 05:50:12.212772 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04a385c3-43f5-458b-aaa2-86b4509a6308-tigera-ca-bundle\") pod \"calico-typha-7bcd7f948b-scvll\" (UID: 
\"04a385c3-43f5-458b-aaa2-86b4509a6308\") " pod="calico-system/calico-typha-7bcd7f948b-scvll" Oct 13 05:50:12.413371 systemd[1]: Created slice kubepods-besteffort-pod9540f054_7a47_4cad_9f21_cbcbd37b9836.slice - libcontainer container kubepods-besteffort-pod9540f054_7a47_4cad_9f21_cbcbd37b9836.slice. Oct 13 05:50:12.414937 kubelet[2706]: I1013 05:50:12.414892 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9540f054-7a47-4cad-9f21-cbcbd37b9836-lib-modules\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.414937 kubelet[2706]: I1013 05:50:12.414934 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9540f054-7a47-4cad-9f21-cbcbd37b9836-var-run-calico\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.414937 kubelet[2706]: I1013 05:50:12.414958 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9540f054-7a47-4cad-9f21-cbcbd37b9836-cni-net-dir\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.415371 kubelet[2706]: I1013 05:50:12.415074 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9540f054-7a47-4cad-9f21-cbcbd37b9836-policysync\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.415371 kubelet[2706]: I1013 05:50:12.415130 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9540f054-7a47-4cad-9f21-cbcbd37b9836-tigera-ca-bundle\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.415371 kubelet[2706]: I1013 05:50:12.415152 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9540f054-7a47-4cad-9f21-cbcbd37b9836-cni-bin-dir\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.415371 kubelet[2706]: I1013 05:50:12.415182 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9540f054-7a47-4cad-9f21-cbcbd37b9836-flexvol-driver-host\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.415371 kubelet[2706]: I1013 05:50:12.415202 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9540f054-7a47-4cad-9f21-cbcbd37b9836-xtables-lock\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.415570 kubelet[2706]: I1013 05:50:12.415241 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9540f054-7a47-4cad-9f21-cbcbd37b9836-cni-log-dir\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.415570 kubelet[2706]: I1013 05:50:12.415258 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/9540f054-7a47-4cad-9f21-cbcbd37b9836-node-certs\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.415570 kubelet[2706]: I1013 05:50:12.415291 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9540f054-7a47-4cad-9f21-cbcbd37b9836-var-lib-calico\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.415570 kubelet[2706]: I1013 05:50:12.415309 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pvrr\" (UniqueName: \"kubernetes.io/projected/9540f054-7a47-4cad-9f21-cbcbd37b9836-kube-api-access-8pvrr\") pod \"calico-node-dpb47\" (UID: \"9540f054-7a47-4cad-9f21-cbcbd37b9836\") " pod="calico-system/calico-node-dpb47" Oct 13 05:50:12.497193 kubelet[2706]: E1013 05:50:12.496504 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:12.498719 containerd[1527]: time="2025-10-13T05:50:12.498563681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bcd7f948b-scvll,Uid:04a385c3-43f5-458b-aaa2-86b4509a6308,Namespace:calico-system,Attempt:0,}" Oct 13 05:50:12.555051 kubelet[2706]: E1013 05:50:12.555007 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.558099 kubelet[2706]: W1013 05:50:12.557205 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.558099 kubelet[2706]: E1013 05:50:12.557273 2706 plugins.go:697] 
"Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.562162 containerd[1527]: time="2025-10-13T05:50:12.562071044Z" level=info msg="connecting to shim 5e3d27760b8ac1203b5f99b2aee6682e497c9a6c62d0d30ddefc2fff4b1ef0b1" address="unix:///run/containerd/s/bf9dd45bfa8ddbb9820834901e7deced871b0caba6c556581b2c44437cd4e564" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:12.567803 kubelet[2706]: E1013 05:50:12.567742 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.567803 kubelet[2706]: W1013 05:50:12.567770 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.567803 kubelet[2706]: E1013 05:50:12.567797 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.602602 systemd[1]: Started cri-containerd-5e3d27760b8ac1203b5f99b2aee6682e497c9a6c62d0d30ddefc2fff4b1ef0b1.scope - libcontainer container 5e3d27760b8ac1203b5f99b2aee6682e497c9a6c62d0d30ddefc2fff4b1ef0b1. 
Oct 13 05:50:12.700098 kubelet[2706]: E1013 05:50:12.699879 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rp8ql" podUID="231efcce-c47b-4a1a-8f52-94bd62eab694" Oct 13 05:50:12.709609 kubelet[2706]: E1013 05:50:12.709091 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.709609 kubelet[2706]: W1013 05:50:12.709126 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.709609 kubelet[2706]: E1013 05:50:12.709157 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.710777 kubelet[2706]: E1013 05:50:12.710321 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.710777 kubelet[2706]: W1013 05:50:12.710605 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.710777 kubelet[2706]: E1013 05:50:12.710634 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.712254 kubelet[2706]: E1013 05:50:12.712231 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.712422 kubelet[2706]: W1013 05:50:12.712397 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.712544 kubelet[2706]: E1013 05:50:12.712527 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.713183 kubelet[2706]: E1013 05:50:12.712949 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.713183 kubelet[2706]: W1013 05:50:12.712999 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.713183 kubelet[2706]: E1013 05:50:12.713022 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.713453 kubelet[2706]: E1013 05:50:12.713436 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.713995 kubelet[2706]: W1013 05:50:12.713528 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.713995 kubelet[2706]: E1013 05:50:12.713551 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.714252 kubelet[2706]: E1013 05:50:12.714165 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.714654 kubelet[2706]: W1013 05:50:12.714420 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.714654 kubelet[2706]: E1013 05:50:12.714448 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.715081 kubelet[2706]: E1013 05:50:12.715062 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.715595 kubelet[2706]: W1013 05:50:12.715164 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.715595 kubelet[2706]: E1013 05:50:12.715367 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.716177 kubelet[2706]: E1013 05:50:12.716157 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.716657 kubelet[2706]: W1013 05:50:12.716305 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.716657 kubelet[2706]: E1013 05:50:12.716334 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.718706 kubelet[2706]: E1013 05:50:12.718066 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.718706 kubelet[2706]: W1013 05:50:12.718086 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.718706 kubelet[2706]: E1013 05:50:12.718106 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.719390 kubelet[2706]: E1013 05:50:12.719276 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.719390 kubelet[2706]: W1013 05:50:12.719299 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.719390 kubelet[2706]: E1013 05:50:12.719318 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.720375 kubelet[2706]: E1013 05:50:12.720349 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.720375 kubelet[2706]: W1013 05:50:12.720369 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.720501 kubelet[2706]: E1013 05:50:12.720383 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.720666 kubelet[2706]: E1013 05:50:12.720630 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.720666 kubelet[2706]: W1013 05:50:12.720648 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.720666 kubelet[2706]: E1013 05:50:12.720660 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.720854 kubelet[2706]: E1013 05:50:12.720838 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.720854 kubelet[2706]: W1013 05:50:12.720849 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.721074 kubelet[2706]: E1013 05:50:12.720857 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.721209 kubelet[2706]: E1013 05:50:12.721196 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.721209 kubelet[2706]: W1013 05:50:12.721206 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.721311 kubelet[2706]: E1013 05:50:12.721215 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.723002 kubelet[2706]: E1013 05:50:12.722671 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.723002 kubelet[2706]: W1013 05:50:12.722689 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.723002 kubelet[2706]: E1013 05:50:12.722702 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.723002 kubelet[2706]: E1013 05:50:12.722886 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.723002 kubelet[2706]: W1013 05:50:12.722893 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.723002 kubelet[2706]: E1013 05:50:12.722900 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.726405 containerd[1527]: time="2025-10-13T05:50:12.724917151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dpb47,Uid:9540f054-7a47-4cad-9f21-cbcbd37b9836,Namespace:calico-system,Attempt:0,}" Oct 13 05:50:12.726554 kubelet[2706]: E1013 05:50:12.726225 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.726554 kubelet[2706]: W1013 05:50:12.726253 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.726554 kubelet[2706]: E1013 05:50:12.726281 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.726554 kubelet[2706]: I1013 05:50:12.726331 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/231efcce-c47b-4a1a-8f52-94bd62eab694-registration-dir\") pod \"csi-node-driver-rp8ql\" (UID: \"231efcce-c47b-4a1a-8f52-94bd62eab694\") " pod="calico-system/csi-node-driver-rp8ql" Oct 13 05:50:12.727044 kubelet[2706]: E1013 05:50:12.726986 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.727044 kubelet[2706]: W1013 05:50:12.727004 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.727044 kubelet[2706]: E1013 05:50:12.727022 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.727466 kubelet[2706]: E1013 05:50:12.727354 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.727466 kubelet[2706]: W1013 05:50:12.727367 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.727466 kubelet[2706]: E1013 05:50:12.727378 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.728449 kubelet[2706]: E1013 05:50:12.728428 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.728449 kubelet[2706]: W1013 05:50:12.728443 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.728806 kubelet[2706]: E1013 05:50:12.728455 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.728806 kubelet[2706]: E1013 05:50:12.728720 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.728806 kubelet[2706]: W1013 05:50:12.728728 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.728806 kubelet[2706]: E1013 05:50:12.728738 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.729587 kubelet[2706]: E1013 05:50:12.728941 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.729587 kubelet[2706]: W1013 05:50:12.728949 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.729587 kubelet[2706]: E1013 05:50:12.728958 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.729587 kubelet[2706]: E1013 05:50:12.729133 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.729587 kubelet[2706]: W1013 05:50:12.729140 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.729587 kubelet[2706]: E1013 05:50:12.729147 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.729587 kubelet[2706]: I1013 05:50:12.729233 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/231efcce-c47b-4a1a-8f52-94bd62eab694-kubelet-dir\") pod \"csi-node-driver-rp8ql\" (UID: \"231efcce-c47b-4a1a-8f52-94bd62eab694\") " pod="calico-system/csi-node-driver-rp8ql" Oct 13 05:50:12.731693 kubelet[2706]: E1013 05:50:12.731048 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.731957 kubelet[2706]: W1013 05:50:12.731200 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.731957 kubelet[2706]: E1013 05:50:12.731803 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.733119 kubelet[2706]: E1013 05:50:12.733043 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.733523 kubelet[2706]: W1013 05:50:12.733320 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.733523 kubelet[2706]: E1013 05:50:12.733351 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.733935 kubelet[2706]: E1013 05:50:12.733917 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.734260 kubelet[2706]: W1013 05:50:12.734234 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.734516 kubelet[2706]: E1013 05:50:12.734460 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.735613 kubelet[2706]: E1013 05:50:12.735428 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.736643 kubelet[2706]: W1013 05:50:12.735831 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.736643 kubelet[2706]: E1013 05:50:12.735860 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.736643 kubelet[2706]: I1013 05:50:12.735906 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/231efcce-c47b-4a1a-8f52-94bd62eab694-socket-dir\") pod \"csi-node-driver-rp8ql\" (UID: \"231efcce-c47b-4a1a-8f52-94bd62eab694\") " pod="calico-system/csi-node-driver-rp8ql" Oct 13 05:50:12.737686 kubelet[2706]: E1013 05:50:12.737662 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.738014 kubelet[2706]: W1013 05:50:12.737945 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.738477 kubelet[2706]: E1013 05:50:12.738455 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.738733 kubelet[2706]: I1013 05:50:12.738708 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/231efcce-c47b-4a1a-8f52-94bd62eab694-varrun\") pod \"csi-node-driver-rp8ql\" (UID: \"231efcce-c47b-4a1a-8f52-94bd62eab694\") " pod="calico-system/csi-node-driver-rp8ql" Oct 13 05:50:12.739361 kubelet[2706]: E1013 05:50:12.739206 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.739361 kubelet[2706]: W1013 05:50:12.739228 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.739361 kubelet[2706]: E1013 05:50:12.739244 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.739851 kubelet[2706]: E1013 05:50:12.739571 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.739851 kubelet[2706]: W1013 05:50:12.739585 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.739851 kubelet[2706]: E1013 05:50:12.739595 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.740396 kubelet[2706]: E1013 05:50:12.740123 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.740396 kubelet[2706]: W1013 05:50:12.740134 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.740396 kubelet[2706]: E1013 05:50:12.740145 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.740522 kubelet[2706]: E1013 05:50:12.740500 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.740522 kubelet[2706]: W1013 05:50:12.740510 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.740522 kubelet[2706]: E1013 05:50:12.740520 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.767939 containerd[1527]: time="2025-10-13T05:50:12.767861217Z" level=info msg="connecting to shim 0ca582f5c96b3a9fe472538cc9d84bc49f8de7fea578782c17eb6bbd26dec8cd" address="unix:///run/containerd/s/b583d28ee640cffc1849846fd9b64b96df92c8373b310ff51477e83313622a8c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:12.826947 systemd[1]: Started cri-containerd-0ca582f5c96b3a9fe472538cc9d84bc49f8de7fea578782c17eb6bbd26dec8cd.scope - libcontainer container 0ca582f5c96b3a9fe472538cc9d84bc49f8de7fea578782c17eb6bbd26dec8cd. 
Oct 13 05:50:12.841311 kubelet[2706]: E1013 05:50:12.841037 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.841311 kubelet[2706]: W1013 05:50:12.841075 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.841311 kubelet[2706]: E1013 05:50:12.841109 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.843350 kubelet[2706]: E1013 05:50:12.843223 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.843350 kubelet[2706]: W1013 05:50:12.843252 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.843350 kubelet[2706]: E1013 05:50:12.843279 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.844293 kubelet[2706]: E1013 05:50:12.844262 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.844293 kubelet[2706]: W1013 05:50:12.844288 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.844293 kubelet[2706]: E1013 05:50:12.844309 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.845138 kubelet[2706]: E1013 05:50:12.845105 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.845138 kubelet[2706]: W1013 05:50:12.845125 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.845138 kubelet[2706]: E1013 05:50:12.845138 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.845526 kubelet[2706]: E1013 05:50:12.845314 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.845526 kubelet[2706]: W1013 05:50:12.845325 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.845526 kubelet[2706]: E1013 05:50:12.845334 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.845795 kubelet[2706]: E1013 05:50:12.845637 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.845795 kubelet[2706]: W1013 05:50:12.845647 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.845795 kubelet[2706]: E1013 05:50:12.845659 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.847014 kubelet[2706]: E1013 05:50:12.846051 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.847014 kubelet[2706]: W1013 05:50:12.846066 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.847014 kubelet[2706]: E1013 05:50:12.846077 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.847194 kubelet[2706]: E1013 05:50:12.847116 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.847194 kubelet[2706]: W1013 05:50:12.847128 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.847194 kubelet[2706]: E1013 05:50:12.847139 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.848807 kubelet[2706]: E1013 05:50:12.847382 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.848807 kubelet[2706]: W1013 05:50:12.847394 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.848807 kubelet[2706]: E1013 05:50:12.847419 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.848807 kubelet[2706]: I1013 05:50:12.847441 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcn6d\" (UniqueName: \"kubernetes.io/projected/231efcce-c47b-4a1a-8f52-94bd62eab694-kube-api-access-mcn6d\") pod \"csi-node-driver-rp8ql\" (UID: \"231efcce-c47b-4a1a-8f52-94bd62eab694\") " pod="calico-system/csi-node-driver-rp8ql" Oct 13 05:50:12.849463 kubelet[2706]: E1013 05:50:12.849249 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.849463 kubelet[2706]: W1013 05:50:12.849270 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.849463 kubelet[2706]: E1013 05:50:12.849291 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.849699 kubelet[2706]: E1013 05:50:12.849684 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.849772 kubelet[2706]: W1013 05:50:12.849759 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.849854 kubelet[2706]: E1013 05:50:12.849837 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.850335 kubelet[2706]: E1013 05:50:12.850155 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.850335 kubelet[2706]: W1013 05:50:12.850171 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.850335 kubelet[2706]: E1013 05:50:12.850199 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.850634 kubelet[2706]: E1013 05:50:12.850618 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.850720 kubelet[2706]: W1013 05:50:12.850706 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.850801 kubelet[2706]: E1013 05:50:12.850787 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.852276 kubelet[2706]: E1013 05:50:12.852085 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.852276 kubelet[2706]: W1013 05:50:12.852127 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.852276 kubelet[2706]: E1013 05:50:12.852146 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.852553 kubelet[2706]: E1013 05:50:12.852537 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.852881 kubelet[2706]: W1013 05:50:12.852670 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.852881 kubelet[2706]: E1013 05:50:12.852692 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.853356 kubelet[2706]: E1013 05:50:12.853142 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.853356 kubelet[2706]: W1013 05:50:12.853182 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.853356 kubelet[2706]: E1013 05:50:12.853198 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.854999 kubelet[2706]: E1013 05:50:12.853736 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.855298 kubelet[2706]: W1013 05:50:12.855132 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.855298 kubelet[2706]: E1013 05:50:12.855160 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.855692 kubelet[2706]: E1013 05:50:12.855554 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.855692 kubelet[2706]: W1013 05:50:12.855570 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.855692 kubelet[2706]: E1013 05:50:12.855585 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.855926 kubelet[2706]: E1013 05:50:12.855911 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.856030 kubelet[2706]: W1013 05:50:12.856015 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.856244 kubelet[2706]: E1013 05:50:12.856091 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.856492 kubelet[2706]: E1013 05:50:12.856474 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.856584 kubelet[2706]: W1013 05:50:12.856569 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.856752 kubelet[2706]: E1013 05:50:12.856667 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.857286 kubelet[2706]: E1013 05:50:12.857260 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.857286 kubelet[2706]: W1013 05:50:12.857283 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.857400 kubelet[2706]: E1013 05:50:12.857302 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.857590 kubelet[2706]: E1013 05:50:12.857573 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.857590 kubelet[2706]: W1013 05:50:12.857588 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.857702 kubelet[2706]: E1013 05:50:12.857602 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.859127 kubelet[2706]: E1013 05:50:12.859107 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.859127 kubelet[2706]: W1013 05:50:12.859121 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.859127 kubelet[2706]: E1013 05:50:12.859132 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.948507 kubelet[2706]: E1013 05:50:12.948467 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.948507 kubelet[2706]: W1013 05:50:12.948492 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.948507 kubelet[2706]: E1013 05:50:12.948514 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.949068 kubelet[2706]: E1013 05:50:12.949051 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.949068 kubelet[2706]: W1013 05:50:12.949066 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.949174 kubelet[2706]: E1013 05:50:12.949081 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.951381 kubelet[2706]: E1013 05:50:12.951253 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.951381 kubelet[2706]: W1013 05:50:12.951272 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.951381 kubelet[2706]: E1013 05:50:12.951291 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.951715 kubelet[2706]: E1013 05:50:12.951517 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.951715 kubelet[2706]: W1013 05:50:12.951524 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.951715 kubelet[2706]: E1013 05:50:12.951533 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:12.951829 kubelet[2706]: E1013 05:50:12.951732 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.951829 kubelet[2706]: W1013 05:50:12.951739 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.951829 kubelet[2706]: E1013 05:50:12.951746 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:50:12.969210 kubelet[2706]: E1013 05:50:12.969088 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:50:12.969210 kubelet[2706]: W1013 05:50:12.969121 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:50:12.969210 kubelet[2706]: E1013 05:50:12.969150 2706 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:50:13.065479 containerd[1527]: time="2025-10-13T05:50:13.065374680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dpb47,Uid:9540f054-7a47-4cad-9f21-cbcbd37b9836,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ca582f5c96b3a9fe472538cc9d84bc49f8de7fea578782c17eb6bbd26dec8cd\"" Oct 13 05:50:13.071166 containerd[1527]: time="2025-10-13T05:50:13.071054603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:50:13.109655 containerd[1527]: time="2025-10-13T05:50:13.109594488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bcd7f948b-scvll,Uid:04a385c3-43f5-458b-aaa2-86b4509a6308,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e3d27760b8ac1203b5f99b2aee6682e497c9a6c62d0d30ddefc2fff4b1ef0b1\"" Oct 13 05:50:13.110846 kubelet[2706]: E1013 05:50:13.110801 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:14.559774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688865819.mount: Deactivated successfully. 
Oct 13 05:50:14.678208 containerd[1527]: time="2025-10-13T05:50:14.677959566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:14.679547 kubelet[2706]: E1013 05:50:14.679478 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rp8ql" podUID="231efcce-c47b-4a1a-8f52-94bd62eab694" Oct 13 05:50:14.681079 containerd[1527]: time="2025-10-13T05:50:14.681034160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Oct 13 05:50:14.683402 containerd[1527]: time="2025-10-13T05:50:14.682055399Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:14.690353 containerd[1527]: time="2025-10-13T05:50:14.690303003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:14.694756 containerd[1527]: time="2025-10-13T05:50:14.694705743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.623543565s" Oct 13 05:50:14.694995 containerd[1527]: time="2025-10-13T05:50:14.694962754Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:50:14.697288 containerd[1527]: time="2025-10-13T05:50:14.697248478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:50:14.701954 containerd[1527]: time="2025-10-13T05:50:14.701906498Z" level=info msg="CreateContainer within sandbox \"0ca582f5c96b3a9fe472538cc9d84bc49f8de7fea578782c17eb6bbd26dec8cd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:50:14.713430 containerd[1527]: time="2025-10-13T05:50:14.713306869Z" level=info msg="Container 935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:14.721795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543149355.mount: Deactivated successfully. Oct 13 05:50:14.728998 containerd[1527]: time="2025-10-13T05:50:14.727416841Z" level=info msg="CreateContainer within sandbox \"0ca582f5c96b3a9fe472538cc9d84bc49f8de7fea578782c17eb6bbd26dec8cd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423\"" Oct 13 05:50:14.730994 containerd[1527]: time="2025-10-13T05:50:14.729935770Z" level=info msg="StartContainer for \"935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423\"" Oct 13 05:50:14.732669 containerd[1527]: time="2025-10-13T05:50:14.732617117Z" level=info msg="connecting to shim 935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423" address="unix:///run/containerd/s/b583d28ee640cffc1849846fd9b64b96df92c8373b310ff51477e83313622a8c" protocol=ttrpc version=3 Oct 13 05:50:14.766420 systemd[1]: Started cri-containerd-935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423.scope - libcontainer container 935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423. 
Oct 13 05:50:14.834473 containerd[1527]: time="2025-10-13T05:50:14.834404513Z" level=info msg="StartContainer for \"935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423\" returns successfully" Oct 13 05:50:14.863789 systemd[1]: cri-containerd-935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423.scope: Deactivated successfully. Oct 13 05:50:14.875324 containerd[1527]: time="2025-10-13T05:50:14.875227849Z" level=info msg="received exit event container_id:\"935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423\" id:\"935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423\" pid:3313 exited_at:{seconds:1760334614 nanos:874572105}" Oct 13 05:50:14.875673 containerd[1527]: time="2025-10-13T05:50:14.875641969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423\" id:\"935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423\" pid:3313 exited_at:{seconds:1760334614 nanos:874572105}" Oct 13 05:50:15.501873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-935178afda3cfa049d9a8584173ae776e8395f96a68835b2b4d2df970deff423-rootfs.mount: Deactivated successfully. 
Oct 13 05:50:16.688013 kubelet[2706]: E1013 05:50:16.687148 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rp8ql" podUID="231efcce-c47b-4a1a-8f52-94bd62eab694" Oct 13 05:50:17.206645 containerd[1527]: time="2025-10-13T05:50:17.206585145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:17.207549 containerd[1527]: time="2025-10-13T05:50:17.207398985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Oct 13 05:50:17.208044 containerd[1527]: time="2025-10-13T05:50:17.208023317Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:17.209733 containerd[1527]: time="2025-10-13T05:50:17.209699737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:17.210559 containerd[1527]: time="2025-10-13T05:50:17.210297059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.513005418s" Oct 13 05:50:17.210559 containerd[1527]: time="2025-10-13T05:50:17.210375923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:50:17.211755 containerd[1527]: time="2025-10-13T05:50:17.211733059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:50:17.237333 containerd[1527]: time="2025-10-13T05:50:17.236794408Z" level=info msg="CreateContainer within sandbox \"5e3d27760b8ac1203b5f99b2aee6682e497c9a6c62d0d30ddefc2fff4b1ef0b1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:50:17.250003 containerd[1527]: time="2025-10-13T05:50:17.247222551Z" level=info msg="Container 49ce2b40212145f5078d83348c611ae563989b8da3477319ada68a3d818dcf76: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:17.252999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1140897376.mount: Deactivated successfully. Oct 13 05:50:17.262425 containerd[1527]: time="2025-10-13T05:50:17.262101537Z" level=info msg="CreateContainer within sandbox \"5e3d27760b8ac1203b5f99b2aee6682e497c9a6c62d0d30ddefc2fff4b1ef0b1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"49ce2b40212145f5078d83348c611ae563989b8da3477319ada68a3d818dcf76\"" Oct 13 05:50:17.263874 containerd[1527]: time="2025-10-13T05:50:17.263825089Z" level=info msg="StartContainer for \"49ce2b40212145f5078d83348c611ae563989b8da3477319ada68a3d818dcf76\"" Oct 13 05:50:17.265962 containerd[1527]: time="2025-10-13T05:50:17.265823588Z" level=info msg="connecting to shim 49ce2b40212145f5078d83348c611ae563989b8da3477319ada68a3d818dcf76" address="unix:///run/containerd/s/bf9dd45bfa8ddbb9820834901e7deced871b0caba6c556581b2c44437cd4e564" protocol=ttrpc version=3 Oct 13 05:50:17.307329 systemd[1]: Started cri-containerd-49ce2b40212145f5078d83348c611ae563989b8da3477319ada68a3d818dcf76.scope - libcontainer container 49ce2b40212145f5078d83348c611ae563989b8da3477319ada68a3d818dcf76. 
Oct 13 05:50:17.376497 containerd[1527]: time="2025-10-13T05:50:17.376425073Z" level=info msg="StartContainer for \"49ce2b40212145f5078d83348c611ae563989b8da3477319ada68a3d818dcf76\" returns successfully" Oct 13 05:50:17.838123 kubelet[2706]: E1013 05:50:17.838086 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:18.683882 kubelet[2706]: E1013 05:50:18.683797 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rp8ql" podUID="231efcce-c47b-4a1a-8f52-94bd62eab694" Oct 13 05:50:18.844829 kubelet[2706]: I1013 05:50:18.844777 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:50:18.845911 kubelet[2706]: E1013 05:50:18.845878 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:20.679591 kubelet[2706]: E1013 05:50:20.679529 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rp8ql" podUID="231efcce-c47b-4a1a-8f52-94bd62eab694" Oct 13 05:50:21.361141 containerd[1527]: time="2025-10-13T05:50:21.361080863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:21.362926 containerd[1527]: time="2025-10-13T05:50:21.362877012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes 
read=70440613" Oct 13 05:50:21.363420 containerd[1527]: time="2025-10-13T05:50:21.363389764Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:21.366186 containerd[1527]: time="2025-10-13T05:50:21.366131813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:21.367853 containerd[1527]: time="2025-10-13T05:50:21.367684816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.155818807s" Oct 13 05:50:21.367853 containerd[1527]: time="2025-10-13T05:50:21.367723736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:50:21.375416 containerd[1527]: time="2025-10-13T05:50:21.375306246Z" level=info msg="CreateContainer within sandbox \"0ca582f5c96b3a9fe472538cc9d84bc49f8de7fea578782c17eb6bbd26dec8cd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:50:21.391172 containerd[1527]: time="2025-10-13T05:50:21.391094666Z" level=info msg="Container 50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:21.405133 containerd[1527]: time="2025-10-13T05:50:21.405033796Z" level=info msg="CreateContainer within sandbox \"0ca582f5c96b3a9fe472538cc9d84bc49f8de7fea578782c17eb6bbd26dec8cd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da\"" Oct 13 05:50:21.408450 containerd[1527]: time="2025-10-13T05:50:21.407379303Z" level=info msg="StartContainer for \"50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da\"" Oct 13 05:50:21.410408 containerd[1527]: time="2025-10-13T05:50:21.410364608Z" level=info msg="connecting to shim 50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da" address="unix:///run/containerd/s/b583d28ee640cffc1849846fd9b64b96df92c8373b310ff51477e83313622a8c" protocol=ttrpc version=3 Oct 13 05:50:21.445832 systemd[1]: Started cri-containerd-50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da.scope - libcontainer container 50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da. Oct 13 05:50:21.513159 containerd[1527]: time="2025-10-13T05:50:21.513097490Z" level=info msg="StartContainer for \"50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da\" returns successfully" Oct 13 05:50:21.897412 kubelet[2706]: I1013 05:50:21.897290 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bcd7f948b-scvll" podStartSLOduration=5.798409775 podStartE2EDuration="9.897263461s" podCreationTimestamp="2025-10-13 05:50:12 +0000 UTC" firstStartedPulling="2025-10-13 05:50:13.112562722 +0000 UTC m=+24.618273449" lastFinishedPulling="2025-10-13 05:50:17.211416408 +0000 UTC m=+28.717127135" observedRunningTime="2025-10-13 05:50:17.866001788 +0000 UTC m=+29.371712534" watchObservedRunningTime="2025-10-13 05:50:21.897263461 +0000 UTC m=+33.402974203" Oct 13 05:50:22.203775 systemd[1]: cri-containerd-50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da.scope: Deactivated successfully. Oct 13 05:50:22.204267 systemd[1]: cri-containerd-50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da.scope: Consumed 660ms CPU time, 159.8M memory peak, 7.8M read from disk, 171.3M written to disk. 
Oct 13 05:50:22.209022 containerd[1527]: time="2025-10-13T05:50:22.207526808Z" level=info msg="received exit event container_id:\"50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da\" id:\"50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da\" pid:3413 exited_at:{seconds:1760334622 nanos:206890779}" Oct 13 05:50:22.211436 containerd[1527]: time="2025-10-13T05:50:22.209792884Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da\" id:\"50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da\" pid:3413 exited_at:{seconds:1760334622 nanos:206890779}" Oct 13 05:50:22.281092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50da2aad3ceafea851aef1d33aba629e2accffff5af7a2f470f15ed1887c49da-rootfs.mount: Deactivated successfully. Oct 13 05:50:22.322440 kubelet[2706]: I1013 05:50:22.322401 2706 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 05:50:22.382410 systemd[1]: Created slice kubepods-burstable-pod95c84384_08f9_4081_baf0_5af6d2526999.slice - libcontainer container kubepods-burstable-pod95c84384_08f9_4081_baf0_5af6d2526999.slice. Oct 13 05:50:22.393449 systemd[1]: Created slice kubepods-burstable-pod761616b9_6d45_4378_96fc_c3c6dd8d530b.slice - libcontainer container kubepods-burstable-pod761616b9_6d45_4378_96fc_c3c6dd8d530b.slice. Oct 13 05:50:22.423379 systemd[1]: Created slice kubepods-besteffort-pod1108956f_278d_4133_bdcf_9491cc4fe979.slice - libcontainer container kubepods-besteffort-pod1108956f_278d_4133_bdcf_9491cc4fe979.slice. 
Oct 13 05:50:22.436479 kubelet[2706]: I1013 05:50:22.435857 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/761616b9-6d45-4378-96fc-c3c6dd8d530b-config-volume\") pod \"coredns-66bc5c9577-7lpb5\" (UID: \"761616b9-6d45-4378-96fc-c3c6dd8d530b\") " pod="kube-system/coredns-66bc5c9577-7lpb5" Oct 13 05:50:22.438405 kubelet[2706]: I1013 05:50:22.438261 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx5wc\" (UniqueName: \"kubernetes.io/projected/761616b9-6d45-4378-96fc-c3c6dd8d530b-kube-api-access-rx5wc\") pod \"coredns-66bc5c9577-7lpb5\" (UID: \"761616b9-6d45-4378-96fc-c3c6dd8d530b\") " pod="kube-system/coredns-66bc5c9577-7lpb5" Oct 13 05:50:22.440686 kubelet[2706]: I1013 05:50:22.440639 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c84384-08f9-4081-baf0-5af6d2526999-config-volume\") pod \"coredns-66bc5c9577-dp622\" (UID: \"95c84384-08f9-4081-baf0-5af6d2526999\") " pod="kube-system/coredns-66bc5c9577-dp622" Oct 13 05:50:22.440686 kubelet[2706]: I1013 05:50:22.440685 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1108956f-278d-4133-bdcf-9491cc4fe979-tigera-ca-bundle\") pod \"calico-kube-controllers-84f6bbddc4-4gjtm\" (UID: \"1108956f-278d-4133-bdcf-9491cc4fe979\") " pod="calico-system/calico-kube-controllers-84f6bbddc4-4gjtm" Oct 13 05:50:22.440895 kubelet[2706]: I1013 05:50:22.440718 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjplt\" (UniqueName: \"kubernetes.io/projected/95c84384-08f9-4081-baf0-5af6d2526999-kube-api-access-jjplt\") pod \"coredns-66bc5c9577-dp622\" (UID: 
\"95c84384-08f9-4081-baf0-5af6d2526999\") " pod="kube-system/coredns-66bc5c9577-dp622" Oct 13 05:50:22.440895 kubelet[2706]: I1013 05:50:22.440748 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brv9g\" (UniqueName: \"kubernetes.io/projected/1108956f-278d-4133-bdcf-9491cc4fe979-kube-api-access-brv9g\") pod \"calico-kube-controllers-84f6bbddc4-4gjtm\" (UID: \"1108956f-278d-4133-bdcf-9491cc4fe979\") " pod="calico-system/calico-kube-controllers-84f6bbddc4-4gjtm" Oct 13 05:50:22.462521 systemd[1]: Created slice kubepods-besteffort-podf96336ff_8765_4a8c_9987_b1ec94233d7e.slice - libcontainer container kubepods-besteffort-podf96336ff_8765_4a8c_9987_b1ec94233d7e.slice. Oct 13 05:50:22.476329 systemd[1]: Created slice kubepods-besteffort-podbbf88e0c_65f2_4474_8006_a06f10ea6a87.slice - libcontainer container kubepods-besteffort-podbbf88e0c_65f2_4474_8006_a06f10ea6a87.slice. Oct 13 05:50:22.489019 systemd[1]: Created slice kubepods-besteffort-pod6bdd28f7_3520_4575_b733_44e93394fe34.slice - libcontainer container kubepods-besteffort-pod6bdd28f7_3520_4575_b733_44e93394fe34.slice. Oct 13 05:50:22.502788 systemd[1]: Created slice kubepods-besteffort-pod625d1f05_914e_4b92_8eda_0f0088321193.slice - libcontainer container kubepods-besteffort-pod625d1f05_914e_4b92_8eda_0f0088321193.slice. Oct 13 05:50:22.518904 systemd[1]: Created slice kubepods-besteffort-pod454979fa_f118_4d73_9724_b8e6b15a0083.slice - libcontainer container kubepods-besteffort-pod454979fa_f118_4d73_9724_b8e6b15a0083.slice. 
Oct 13 05:50:22.543709 kubelet[2706]: I1013 05:50:22.542314 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6bdd28f7-3520-4575-b733-44e93394fe34-goldmane-key-pair\") pod \"goldmane-854f97d977-8rn64\" (UID: \"6bdd28f7-3520-4575-b733-44e93394fe34\") " pod="calico-system/goldmane-854f97d977-8rn64" Oct 13 05:50:22.543709 kubelet[2706]: I1013 05:50:22.542379 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bbf88e0c-65f2-4474-8006-a06f10ea6a87-calico-apiserver-certs\") pod \"calico-apiserver-6cd79d768-qqxmr\" (UID: \"bbf88e0c-65f2-4474-8006-a06f10ea6a87\") " pod="calico-apiserver/calico-apiserver-6cd79d768-qqxmr" Oct 13 05:50:22.543709 kubelet[2706]: I1013 05:50:22.542566 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/454979fa-f118-4d73-9724-b8e6b15a0083-whisker-backend-key-pair\") pod \"whisker-67b6d5ccfd-z7p4d\" (UID: \"454979fa-f118-4d73-9724-b8e6b15a0083\") " pod="calico-system/whisker-67b6d5ccfd-z7p4d" Oct 13 05:50:22.543709 kubelet[2706]: I1013 05:50:22.542600 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmtxd\" (UniqueName: \"kubernetes.io/projected/bbf88e0c-65f2-4474-8006-a06f10ea6a87-kube-api-access-cmtxd\") pod \"calico-apiserver-6cd79d768-qqxmr\" (UID: \"bbf88e0c-65f2-4474-8006-a06f10ea6a87\") " pod="calico-apiserver/calico-apiserver-6cd79d768-qqxmr" Oct 13 05:50:22.543709 kubelet[2706]: I1013 05:50:22.542660 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bdd28f7-3520-4575-b733-44e93394fe34-goldmane-ca-bundle\") pod 
\"goldmane-854f97d977-8rn64\" (UID: \"6bdd28f7-3520-4575-b733-44e93394fe34\") " pod="calico-system/goldmane-854f97d977-8rn64" Oct 13 05:50:22.544026 kubelet[2706]: I1013 05:50:22.542721 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dkbj\" (UniqueName: \"kubernetes.io/projected/454979fa-f118-4d73-9724-b8e6b15a0083-kube-api-access-7dkbj\") pod \"whisker-67b6d5ccfd-z7p4d\" (UID: \"454979fa-f118-4d73-9724-b8e6b15a0083\") " pod="calico-system/whisker-67b6d5ccfd-z7p4d" Oct 13 05:50:22.544026 kubelet[2706]: I1013 05:50:22.543652 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f96336ff-8765-4a8c-9987-b1ec94233d7e-calico-apiserver-certs\") pod \"calico-apiserver-f9868759c-srwzw\" (UID: \"f96336ff-8765-4a8c-9987-b1ec94233d7e\") " pod="calico-apiserver/calico-apiserver-f9868759c-srwzw" Oct 13 05:50:22.544026 kubelet[2706]: I1013 05:50:22.543830 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bdd28f7-3520-4575-b733-44e93394fe34-config\") pod \"goldmane-854f97d977-8rn64\" (UID: \"6bdd28f7-3520-4575-b733-44e93394fe34\") " pod="calico-system/goldmane-854f97d977-8rn64" Oct 13 05:50:22.544109 kubelet[2706]: I1013 05:50:22.544025 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/454979fa-f118-4d73-9724-b8e6b15a0083-whisker-ca-bundle\") pod \"whisker-67b6d5ccfd-z7p4d\" (UID: \"454979fa-f118-4d73-9724-b8e6b15a0083\") " pod="calico-system/whisker-67b6d5ccfd-z7p4d" Oct 13 05:50:22.544295 kubelet[2706]: I1013 05:50:22.544205 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4dc\" (UniqueName: 
\"kubernetes.io/projected/f96336ff-8765-4a8c-9987-b1ec94233d7e-kube-api-access-lc4dc\") pod \"calico-apiserver-f9868759c-srwzw\" (UID: \"f96336ff-8765-4a8c-9987-b1ec94233d7e\") " pod="calico-apiserver/calico-apiserver-f9868759c-srwzw" Oct 13 05:50:22.544295 kubelet[2706]: I1013 05:50:22.544284 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs92n\" (UniqueName: \"kubernetes.io/projected/6bdd28f7-3520-4575-b733-44e93394fe34-kube-api-access-fs92n\") pod \"goldmane-854f97d977-8rn64\" (UID: \"6bdd28f7-3520-4575-b733-44e93394fe34\") " pod="calico-system/goldmane-854f97d977-8rn64" Oct 13 05:50:22.544463 kubelet[2706]: I1013 05:50:22.544363 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/625d1f05-914e-4b92-8eda-0f0088321193-calico-apiserver-certs\") pod \"calico-apiserver-f9868759c-jzd69\" (UID: \"625d1f05-914e-4b92-8eda-0f0088321193\") " pod="calico-apiserver/calico-apiserver-f9868759c-jzd69" Oct 13 05:50:22.544463 kubelet[2706]: I1013 05:50:22.544392 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5sdm\" (UniqueName: \"kubernetes.io/projected/625d1f05-914e-4b92-8eda-0f0088321193-kube-api-access-p5sdm\") pod \"calico-apiserver-f9868759c-jzd69\" (UID: \"625d1f05-914e-4b92-8eda-0f0088321193\") " pod="calico-apiserver/calico-apiserver-f9868759c-jzd69" Oct 13 05:50:22.694796 kubelet[2706]: E1013 05:50:22.693308 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:22.703422 containerd[1527]: time="2025-10-13T05:50:22.703316066Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-dp622,Uid:95c84384-08f9-4081-baf0-5af6d2526999,Namespace:kube-system,Attempt:0,}" Oct 13 05:50:22.710997 kubelet[2706]: E1013 05:50:22.709426 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:22.714093 containerd[1527]: time="2025-10-13T05:50:22.712597257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7lpb5,Uid:761616b9-6d45-4378-96fc-c3c6dd8d530b,Namespace:kube-system,Attempt:0,}" Oct 13 05:50:22.725850 systemd[1]: Created slice kubepods-besteffort-pod231efcce_c47b_4a1a_8f52_94bd62eab694.slice - libcontainer container kubepods-besteffort-pod231efcce_c47b_4a1a_8f52_94bd62eab694.slice. Oct 13 05:50:22.775684 containerd[1527]: time="2025-10-13T05:50:22.766815947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rp8ql,Uid:231efcce-c47b-4a1a-8f52-94bd62eab694,Namespace:calico-system,Attempt:0,}" Oct 13 05:50:22.789298 containerd[1527]: time="2025-10-13T05:50:22.789152210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd79d768-qqxmr,Uid:bbf88e0c-65f2-4474-8006-a06f10ea6a87,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:50:22.802003 containerd[1527]: time="2025-10-13T05:50:22.801270872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84f6bbddc4-4gjtm,Uid:1108956f-278d-4133-bdcf-9491cc4fe979,Namespace:calico-system,Attempt:0,}" Oct 13 05:50:22.815855 containerd[1527]: time="2025-10-13T05:50:22.815798079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9868759c-jzd69,Uid:625d1f05-914e-4b92-8eda-0f0088321193,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:50:22.817052 containerd[1527]: time="2025-10-13T05:50:22.816953009Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-854f97d977-8rn64,Uid:6bdd28f7-3520-4575-b733-44e93394fe34,Namespace:calico-system,Attempt:0,}" Oct 13 05:50:22.837179 containerd[1527]: time="2025-10-13T05:50:22.836607874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9868759c-srwzw,Uid:f96336ff-8765-4a8c-9987-b1ec94233d7e,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:50:22.992369 containerd[1527]: time="2025-10-13T05:50:22.992192693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b6d5ccfd-z7p4d,Uid:454979fa-f118-4d73-9724-b8e6b15a0083,Namespace:calico-system,Attempt:0,}" Oct 13 05:50:23.123289 containerd[1527]: time="2025-10-13T05:50:23.122597230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:50:23.309858 containerd[1527]: time="2025-10-13T05:50:23.309641960Z" level=error msg="Failed to destroy network for sandbox \"d21a6c3433046760898caa252325117e679570d7cf91bee7ac9d97c78546c6d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.313554 containerd[1527]: time="2025-10-13T05:50:23.313482335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dp622,Uid:95c84384-08f9-4081-baf0-5af6d2526999,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d21a6c3433046760898caa252325117e679570d7cf91bee7ac9d97c78546c6d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.319532 kubelet[2706]: E1013 05:50:23.319447 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d21a6c3433046760898caa252325117e679570d7cf91bee7ac9d97c78546c6d3\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.320214 kubelet[2706]: E1013 05:50:23.319569 2706 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d21a6c3433046760898caa252325117e679570d7cf91bee7ac9d97c78546c6d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dp622" Oct 13 05:50:23.320214 kubelet[2706]: E1013 05:50:23.319592 2706 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d21a6c3433046760898caa252325117e679570d7cf91bee7ac9d97c78546c6d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dp622" Oct 13 05:50:23.320214 kubelet[2706]: E1013 05:50:23.320028 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dp622_kube-system(95c84384-08f9-4081-baf0-5af6d2526999)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dp622_kube-system(95c84384-08f9-4081-baf0-5af6d2526999)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d21a6c3433046760898caa252325117e679570d7cf91bee7ac9d97c78546c6d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dp622" podUID="95c84384-08f9-4081-baf0-5af6d2526999" Oct 13 05:50:23.330017 containerd[1527]: 
time="2025-10-13T05:50:23.329862984Z" level=error msg="Failed to destroy network for sandbox \"195e91b83ba67c55efe2a8a48fea655ad534d865d676252effbc553e259f5e2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.336957 containerd[1527]: time="2025-10-13T05:50:23.336758350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84f6bbddc4-4gjtm,Uid:1108956f-278d-4133-bdcf-9491cc4fe979,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"195e91b83ba67c55efe2a8a48fea655ad534d865d676252effbc553e259f5e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.337198 kubelet[2706]: E1013 05:50:23.337082 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195e91b83ba67c55efe2a8a48fea655ad534d865d676252effbc553e259f5e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.337198 kubelet[2706]: E1013 05:50:23.337140 2706 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195e91b83ba67c55efe2a8a48fea655ad534d865d676252effbc553e259f5e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84f6bbddc4-4gjtm" Oct 13 05:50:23.337198 kubelet[2706]: E1013 05:50:23.337161 2706 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195e91b83ba67c55efe2a8a48fea655ad534d865d676252effbc553e259f5e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84f6bbddc4-4gjtm" Oct 13 05:50:23.337332 kubelet[2706]: E1013 05:50:23.337215 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84f6bbddc4-4gjtm_calico-system(1108956f-278d-4133-bdcf-9491cc4fe979)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84f6bbddc4-4gjtm_calico-system(1108956f-278d-4133-bdcf-9491cc4fe979)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"195e91b83ba67c55efe2a8a48fea655ad534d865d676252effbc553e259f5e2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84f6bbddc4-4gjtm" podUID="1108956f-278d-4133-bdcf-9491cc4fe979" Oct 13 05:50:23.383654 containerd[1527]: time="2025-10-13T05:50:23.383593491Z" level=error msg="Failed to destroy network for sandbox \"6c03c23b3faee8e8c6714d13da97355daab7121bdd8cba4e9eb7617930850f49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.385068 containerd[1527]: time="2025-10-13T05:50:23.384919913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rp8ql,Uid:231efcce-c47b-4a1a-8f52-94bd62eab694,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6c03c23b3faee8e8c6714d13da97355daab7121bdd8cba4e9eb7617930850f49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.386874 kubelet[2706]: E1013 05:50:23.386824 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c03c23b3faee8e8c6714d13da97355daab7121bdd8cba4e9eb7617930850f49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.386874 kubelet[2706]: E1013 05:50:23.386936 2706 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c03c23b3faee8e8c6714d13da97355daab7121bdd8cba4e9eb7617930850f49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rp8ql" Oct 13 05:50:23.387984 kubelet[2706]: E1013 05:50:23.387638 2706 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c03c23b3faee8e8c6714d13da97355daab7121bdd8cba4e9eb7617930850f49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rp8ql" Oct 13 05:50:23.388150 kubelet[2706]: E1013 05:50:23.388086 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rp8ql_calico-system(231efcce-c47b-4a1a-8f52-94bd62eab694)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-rp8ql_calico-system(231efcce-c47b-4a1a-8f52-94bd62eab694)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c03c23b3faee8e8c6714d13da97355daab7121bdd8cba4e9eb7617930850f49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rp8ql" podUID="231efcce-c47b-4a1a-8f52-94bd62eab694" Oct 13 05:50:23.390511 containerd[1527]: time="2025-10-13T05:50:23.390432058Z" level=error msg="Failed to destroy network for sandbox \"c45bb2f20199b231ad352d9957296cbc47e20b631e9531a767d5db8c552222f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.396717 containerd[1527]: time="2025-10-13T05:50:23.396656653Z" level=error msg="Failed to destroy network for sandbox \"a85d0f1196afc5ed346abb681169f6aed5d4503c7feebca1b4242babf25fe422\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.400361 containerd[1527]: time="2025-10-13T05:50:23.400227700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7lpb5,Uid:761616b9-6d45-4378-96fc-c3c6dd8d530b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45bb2f20199b231ad352d9957296cbc47e20b631e9531a767d5db8c552222f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.401362 kubelet[2706]: E1013 05:50:23.400880 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"c45bb2f20199b231ad352d9957296cbc47e20b631e9531a767d5db8c552222f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.401362 kubelet[2706]: E1013 05:50:23.401043 2706 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45bb2f20199b231ad352d9957296cbc47e20b631e9531a767d5db8c552222f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7lpb5" Oct 13 05:50:23.401362 kubelet[2706]: E1013 05:50:23.401070 2706 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45bb2f20199b231ad352d9957296cbc47e20b631e9531a767d5db8c552222f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7lpb5" Oct 13 05:50:23.401566 kubelet[2706]: E1013 05:50:23.401165 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7lpb5_kube-system(761616b9-6d45-4378-96fc-c3c6dd8d530b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7lpb5_kube-system(761616b9-6d45-4378-96fc-c3c6dd8d530b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c45bb2f20199b231ad352d9957296cbc47e20b631e9531a767d5db8c552222f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-66bc5c9577-7lpb5" podUID="761616b9-6d45-4378-96fc-c3c6dd8d530b" Oct 13 05:50:23.401638 containerd[1527]: time="2025-10-13T05:50:23.401553628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9868759c-jzd69,Uid:625d1f05-914e-4b92-8eda-0f0088321193,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85d0f1196afc5ed346abb681169f6aed5d4503c7feebca1b4242babf25fe422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.404477 kubelet[2706]: E1013 05:50:23.401955 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85d0f1196afc5ed346abb681169f6aed5d4503c7feebca1b4242babf25fe422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.404477 kubelet[2706]: E1013 05:50:23.404087 2706 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85d0f1196afc5ed346abb681169f6aed5d4503c7feebca1b4242babf25fe422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f9868759c-jzd69" Oct 13 05:50:23.404477 kubelet[2706]: E1013 05:50:23.404125 2706 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85d0f1196afc5ed346abb681169f6aed5d4503c7feebca1b4242babf25fe422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f9868759c-jzd69" Oct 13 05:50:23.404812 kubelet[2706]: E1013 05:50:23.404186 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f9868759c-jzd69_calico-apiserver(625d1f05-914e-4b92-8eda-0f0088321193)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f9868759c-jzd69_calico-apiserver(625d1f05-914e-4b92-8eda-0f0088321193)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a85d0f1196afc5ed346abb681169f6aed5d4503c7feebca1b4242babf25fe422\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f9868759c-jzd69" podUID="625d1f05-914e-4b92-8eda-0f0088321193" Oct 13 05:50:23.422022 containerd[1527]: time="2025-10-13T05:50:23.421849300Z" level=error msg="Failed to destroy network for sandbox \"8a0cca396f223cdad818e6ef646c247d8c9ee65768fbba6828bcf923790efc6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.426439 containerd[1527]: time="2025-10-13T05:50:23.426006016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd79d768-qqxmr,Uid:bbf88e0c-65f2-4474-8006-a06f10ea6a87,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0cca396f223cdad818e6ef646c247d8c9ee65768fbba6828bcf923790efc6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.428016 kubelet[2706]: E1013 
05:50:23.426560 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0cca396f223cdad818e6ef646c247d8c9ee65768fbba6828bcf923790efc6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.428016 kubelet[2706]: E1013 05:50:23.426636 2706 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0cca396f223cdad818e6ef646c247d8c9ee65768fbba6828bcf923790efc6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd79d768-qqxmr" Oct 13 05:50:23.428016 kubelet[2706]: E1013 05:50:23.426657 2706 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0cca396f223cdad818e6ef646c247d8c9ee65768fbba6828bcf923790efc6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd79d768-qqxmr" Oct 13 05:50:23.428275 kubelet[2706]: E1013 05:50:23.426725 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd79d768-qqxmr_calico-apiserver(bbf88e0c-65f2-4474-8006-a06f10ea6a87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd79d768-qqxmr_calico-apiserver(bbf88e0c-65f2-4474-8006-a06f10ea6a87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a0cca396f223cdad818e6ef646c247d8c9ee65768fbba6828bcf923790efc6d\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd79d768-qqxmr" podUID="bbf88e0c-65f2-4474-8006-a06f10ea6a87" Oct 13 05:50:23.434390 containerd[1527]: time="2025-10-13T05:50:23.434304385Z" level=error msg="Failed to destroy network for sandbox \"c178d5181f9fce2ff9a8b3ad9dd4e497d71ce3f1c3d6502fff67eeff24e37999\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.435693 containerd[1527]: time="2025-10-13T05:50:23.435642471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9868759c-srwzw,Uid:f96336ff-8765-4a8c-9987-b1ec94233d7e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c178d5181f9fce2ff9a8b3ad9dd4e497d71ce3f1c3d6502fff67eeff24e37999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.436456 kubelet[2706]: E1013 05:50:23.435923 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c178d5181f9fce2ff9a8b3ad9dd4e497d71ce3f1c3d6502fff67eeff24e37999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.436456 kubelet[2706]: E1013 05:50:23.436044 2706 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c178d5181f9fce2ff9a8b3ad9dd4e497d71ce3f1c3d6502fff67eeff24e37999\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f9868759c-srwzw" Oct 13 05:50:23.436456 kubelet[2706]: E1013 05:50:23.436073 2706 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c178d5181f9fce2ff9a8b3ad9dd4e497d71ce3f1c3d6502fff67eeff24e37999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f9868759c-srwzw" Oct 13 05:50:23.437810 kubelet[2706]: E1013 05:50:23.436142 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f9868759c-srwzw_calico-apiserver(f96336ff-8765-4a8c-9987-b1ec94233d7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f9868759c-srwzw_calico-apiserver(f96336ff-8765-4a8c-9987-b1ec94233d7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c178d5181f9fce2ff9a8b3ad9dd4e497d71ce3f1c3d6502fff67eeff24e37999\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f9868759c-srwzw" podUID="f96336ff-8765-4a8c-9987-b1ec94233d7e" Oct 13 05:50:23.448026 containerd[1527]: time="2025-10-13T05:50:23.447714382Z" level=error msg="Failed to destroy network for sandbox \"9a5a2a5bb53c9c6b03729895eaeb65fb41af066f4b9b308aaf5bdd585fe59948\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.453521 containerd[1527]: 
time="2025-10-13T05:50:23.451829881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b6d5ccfd-z7p4d,Uid:454979fa-f118-4d73-9724-b8e6b15a0083,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5a2a5bb53c9c6b03729895eaeb65fb41af066f4b9b308aaf5bdd585fe59948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.454521 kubelet[2706]: E1013 05:50:23.454189 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5a2a5bb53c9c6b03729895eaeb65fb41af066f4b9b308aaf5bdd585fe59948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.454521 kubelet[2706]: E1013 05:50:23.454260 2706 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5a2a5bb53c9c6b03729895eaeb65fb41af066f4b9b308aaf5bdd585fe59948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67b6d5ccfd-z7p4d" Oct 13 05:50:23.454521 kubelet[2706]: E1013 05:50:23.454281 2706 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5a2a5bb53c9c6b03729895eaeb65fb41af066f4b9b308aaf5bdd585fe59948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67b6d5ccfd-z7p4d" Oct 13 05:50:23.454857 
kubelet[2706]: E1013 05:50:23.454338 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67b6d5ccfd-z7p4d_calico-system(454979fa-f118-4d73-9724-b8e6b15a0083)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67b6d5ccfd-z7p4d_calico-system(454979fa-f118-4d73-9724-b8e6b15a0083)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a5a2a5bb53c9c6b03729895eaeb65fb41af066f4b9b308aaf5bdd585fe59948\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67b6d5ccfd-z7p4d" podUID="454979fa-f118-4d73-9724-b8e6b15a0083" Oct 13 05:50:23.456786 containerd[1527]: time="2025-10-13T05:50:23.456706444Z" level=error msg="Failed to destroy network for sandbox \"64b6e85d22959ba4f6bdb3814420533088ac05da38ce589bfd23ccf233dc8525\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.458595 containerd[1527]: time="2025-10-13T05:50:23.458452932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-8rn64,Uid:6bdd28f7-3520-4575-b733-44e93394fe34,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b6e85d22959ba4f6bdb3814420533088ac05da38ce589bfd23ccf233dc8525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.459061 kubelet[2706]: E1013 05:50:23.459018 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"64b6e85d22959ba4f6bdb3814420533088ac05da38ce589bfd23ccf233dc8525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:50:23.459266 kubelet[2706]: E1013 05:50:23.459079 2706 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b6e85d22959ba4f6bdb3814420533088ac05da38ce589bfd23ccf233dc8525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-8rn64" Oct 13 05:50:23.459266 kubelet[2706]: E1013 05:50:23.459097 2706 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b6e85d22959ba4f6bdb3814420533088ac05da38ce589bfd23ccf233dc8525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-8rn64" Oct 13 05:50:23.459266 kubelet[2706]: E1013 05:50:23.459151 2706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-8rn64_calico-system(6bdd28f7-3520-4575-b733-44e93394fe34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-8rn64_calico-system(6bdd28f7-3520-4575-b733-44e93394fe34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64b6e85d22959ba4f6bdb3814420533088ac05da38ce589bfd23ccf233dc8525\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-8rn64" 
podUID="6bdd28f7-3520-4575-b733-44e93394fe34" Oct 13 05:50:30.163256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950920230.mount: Deactivated successfully. Oct 13 05:50:30.202013 containerd[1527]: time="2025-10-13T05:50:30.201840010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:30.203481 containerd[1527]: time="2025-10-13T05:50:30.203260886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:50:30.204763 containerd[1527]: time="2025-10-13T05:50:30.204730769Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:30.209007 containerd[1527]: time="2025-10-13T05:50:30.208142840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:30.209007 containerd[1527]: time="2025-10-13T05:50:30.208814313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.086164858s" Oct 13 05:50:30.209007 containerd[1527]: time="2025-10-13T05:50:30.208848411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:50:30.256714 containerd[1527]: time="2025-10-13T05:50:30.256655003Z" level=info msg="CreateContainer within sandbox 
\"0ca582f5c96b3a9fe472538cc9d84bc49f8de7fea578782c17eb6bbd26dec8cd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:50:30.309751 containerd[1527]: time="2025-10-13T05:50:30.309685690Z" level=info msg="Container 77e21e581b51f2088b8a348c155ed30defcaea59224842646326434ac1ae6a95: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:30.311040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4082040849.mount: Deactivated successfully. Oct 13 05:50:30.341482 containerd[1527]: time="2025-10-13T05:50:30.341413852Z" level=info msg="CreateContainer within sandbox \"0ca582f5c96b3a9fe472538cc9d84bc49f8de7fea578782c17eb6bbd26dec8cd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"77e21e581b51f2088b8a348c155ed30defcaea59224842646326434ac1ae6a95\"" Oct 13 05:50:30.343929 containerd[1527]: time="2025-10-13T05:50:30.342571152Z" level=info msg="StartContainer for \"77e21e581b51f2088b8a348c155ed30defcaea59224842646326434ac1ae6a95\"" Oct 13 05:50:30.344887 containerd[1527]: time="2025-10-13T05:50:30.344852147Z" level=info msg="connecting to shim 77e21e581b51f2088b8a348c155ed30defcaea59224842646326434ac1ae6a95" address="unix:///run/containerd/s/b583d28ee640cffc1849846fd9b64b96df92c8373b310ff51477e83313622a8c" protocol=ttrpc version=3 Oct 13 05:50:30.458250 systemd[1]: Started cri-containerd-77e21e581b51f2088b8a348c155ed30defcaea59224842646326434ac1ae6a95.scope - libcontainer container 77e21e581b51f2088b8a348c155ed30defcaea59224842646326434ac1ae6a95. Oct 13 05:50:30.558249 containerd[1527]: time="2025-10-13T05:50:30.558203079Z" level=info msg="StartContainer for \"77e21e581b51f2088b8a348c155ed30defcaea59224842646326434ac1ae6a95\" returns successfully" Oct 13 05:50:30.662295 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:50:30.662462 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 13 05:50:30.925291 kubelet[2706]: I1013 05:50:30.925231 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/454979fa-f118-4d73-9724-b8e6b15a0083-whisker-backend-key-pair\") pod \"454979fa-f118-4d73-9724-b8e6b15a0083\" (UID: \"454979fa-f118-4d73-9724-b8e6b15a0083\") " Oct 13 05:50:30.925291 kubelet[2706]: I1013 05:50:30.925282 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dkbj\" (UniqueName: \"kubernetes.io/projected/454979fa-f118-4d73-9724-b8e6b15a0083-kube-api-access-7dkbj\") pod \"454979fa-f118-4d73-9724-b8e6b15a0083\" (UID: \"454979fa-f118-4d73-9724-b8e6b15a0083\") " Oct 13 05:50:30.925291 kubelet[2706]: I1013 05:50:30.925304 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/454979fa-f118-4d73-9724-b8e6b15a0083-whisker-ca-bundle\") pod \"454979fa-f118-4d73-9724-b8e6b15a0083\" (UID: \"454979fa-f118-4d73-9724-b8e6b15a0083\") " Oct 13 05:50:30.925893 kubelet[2706]: I1013 05:50:30.925718 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454979fa-f118-4d73-9724-b8e6b15a0083-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "454979fa-f118-4d73-9724-b8e6b15a0083" (UID: "454979fa-f118-4d73-9724-b8e6b15a0083"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:50:30.933563 kubelet[2706]: I1013 05:50:30.933508 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454979fa-f118-4d73-9724-b8e6b15a0083-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "454979fa-f118-4d73-9724-b8e6b15a0083" (UID: "454979fa-f118-4d73-9724-b8e6b15a0083"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:50:30.933726 kubelet[2706]: I1013 05:50:30.933615 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454979fa-f118-4d73-9724-b8e6b15a0083-kube-api-access-7dkbj" (OuterVolumeSpecName: "kube-api-access-7dkbj") pod "454979fa-f118-4d73-9724-b8e6b15a0083" (UID: "454979fa-f118-4d73-9724-b8e6b15a0083"). InnerVolumeSpecName "kube-api-access-7dkbj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:50:31.025916 kubelet[2706]: I1013 05:50:31.025857 2706 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/454979fa-f118-4d73-9724-b8e6b15a0083-whisker-backend-key-pair\") on node \"ci-4459.1.0-5-82d9fc1916\" DevicePath \"\"" Oct 13 05:50:31.025916 kubelet[2706]: I1013 05:50:31.025906 2706 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7dkbj\" (UniqueName: \"kubernetes.io/projected/454979fa-f118-4d73-9724-b8e6b15a0083-kube-api-access-7dkbj\") on node \"ci-4459.1.0-5-82d9fc1916\" DevicePath \"\"" Oct 13 05:50:31.025916 kubelet[2706]: I1013 05:50:31.025922 2706 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/454979fa-f118-4d73-9724-b8e6b15a0083-whisker-ca-bundle\") on node \"ci-4459.1.0-5-82d9fc1916\" DevicePath \"\"" Oct 13 05:50:31.145341 systemd[1]: Removed slice kubepods-besteffort-pod454979fa_f118_4d73_9724_b8e6b15a0083.slice - libcontainer container kubepods-besteffort-pod454979fa_f118_4d73_9724_b8e6b15a0083.slice. Oct 13 05:50:31.171618 systemd[1]: var-lib-kubelet-pods-454979fa\x2df118\x2d4d73\x2d9724\x2db8e6b15a0083-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7dkbj.mount: Deactivated successfully. 
Oct 13 05:50:31.171833 systemd[1]: var-lib-kubelet-pods-454979fa\x2df118\x2d4d73\x2d9724\x2db8e6b15a0083-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:50:31.215236 kubelet[2706]: I1013 05:50:31.214342 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dpb47" podStartSLOduration=2.072834707 podStartE2EDuration="19.214323193s" podCreationTimestamp="2025-10-13 05:50:12 +0000 UTC" firstStartedPulling="2025-10-13 05:50:13.068710761 +0000 UTC m=+24.574421502" lastFinishedPulling="2025-10-13 05:50:30.21019926 +0000 UTC m=+41.715909988" observedRunningTime="2025-10-13 05:50:31.165659983 +0000 UTC m=+42.671370731" watchObservedRunningTime="2025-10-13 05:50:31.214323193 +0000 UTC m=+42.720033955" Oct 13 05:50:31.321728 systemd[1]: Created slice kubepods-besteffort-poddfb9c589_52c1_42a7_9ca0_b3b1837ec692.slice - libcontainer container kubepods-besteffort-poddfb9c589_52c1_42a7_9ca0_b3b1837ec692.slice. 
Oct 13 05:50:31.424127 containerd[1527]: time="2025-10-13T05:50:31.424075684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77e21e581b51f2088b8a348c155ed30defcaea59224842646326434ac1ae6a95\" id:\"3dbb5afba098d4a548e054e72fbe3cf5f03ccd7512969186013e67889dfd2129\" pid:3781 exit_status:1 exited_at:{seconds:1760334631 nanos:423468074}" Oct 13 05:50:31.436171 kubelet[2706]: I1013 05:50:31.436029 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqthn\" (UniqueName: \"kubernetes.io/projected/dfb9c589-52c1-42a7-9ca0-b3b1837ec692-kube-api-access-lqthn\") pod \"whisker-7c69f4cfbf-rz76g\" (UID: \"dfb9c589-52c1-42a7-9ca0-b3b1837ec692\") " pod="calico-system/whisker-7c69f4cfbf-rz76g" Oct 13 05:50:31.436594 kubelet[2706]: I1013 05:50:31.436427 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dfb9c589-52c1-42a7-9ca0-b3b1837ec692-whisker-backend-key-pair\") pod \"whisker-7c69f4cfbf-rz76g\" (UID: \"dfb9c589-52c1-42a7-9ca0-b3b1837ec692\") " pod="calico-system/whisker-7c69f4cfbf-rz76g" Oct 13 05:50:31.436594 kubelet[2706]: I1013 05:50:31.436524 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb9c589-52c1-42a7-9ca0-b3b1837ec692-whisker-ca-bundle\") pod \"whisker-7c69f4cfbf-rz76g\" (UID: \"dfb9c589-52c1-42a7-9ca0-b3b1837ec692\") " pod="calico-system/whisker-7c69f4cfbf-rz76g" Oct 13 05:50:31.629679 containerd[1527]: time="2025-10-13T05:50:31.629636052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c69f4cfbf-rz76g,Uid:dfb9c589-52c1-42a7-9ca0-b3b1837ec692,Namespace:calico-system,Attempt:0,}" Oct 13 05:50:31.936411 systemd-networkd[1439]: cali05f41e68989: Link UP Oct 13 05:50:31.937845 systemd-networkd[1439]: cali05f41e68989: Gained carrier Oct 13 
05:50:31.965110 containerd[1527]: 2025-10-13 05:50:31.667 [INFO][3794] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:50:31.965110 containerd[1527]: 2025-10-13 05:50:31.694 [INFO][3794] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0 whisker-7c69f4cfbf- calico-system dfb9c589-52c1-42a7-9ca0-b3b1837ec692 923 0 2025-10-13 05:50:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c69f4cfbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.1.0-5-82d9fc1916 whisker-7c69f4cfbf-rz76g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali05f41e68989 [] [] }} ContainerID="b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" Namespace="calico-system" Pod="whisker-7c69f4cfbf-rz76g" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-" Oct 13 05:50:31.965110 containerd[1527]: 2025-10-13 05:50:31.694 [INFO][3794] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" Namespace="calico-system" Pod="whisker-7c69f4cfbf-rz76g" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0" Oct 13 05:50:31.965110 containerd[1527]: 2025-10-13 05:50:31.856 [INFO][3806] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" HandleID="k8s-pod-network.b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" Workload="ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0" Oct 13 05:50:31.965447 containerd[1527]: 2025-10-13 05:50:31.858 [INFO][3806] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" HandleID="k8s-pod-network.b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" Workload="ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032bc70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-5-82d9fc1916", "pod":"whisker-7c69f4cfbf-rz76g", "timestamp":"2025-10-13 05:50:31.856326975 +0000 UTC"}, Hostname:"ci-4459.1.0-5-82d9fc1916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:50:31.965447 containerd[1527]: 2025-10-13 05:50:31.858 [INFO][3806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:50:31.965447 containerd[1527]: 2025-10-13 05:50:31.858 [INFO][3806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:50:31.965447 containerd[1527]: 2025-10-13 05:50:31.859 [INFO][3806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-5-82d9fc1916' Oct 13 05:50:31.965447 containerd[1527]: 2025-10-13 05:50:31.874 [INFO][3806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:31.965447 containerd[1527]: 2025-10-13 05:50:31.886 [INFO][3806] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:31.965447 containerd[1527]: 2025-10-13 05:50:31.895 [INFO][3806] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:31.965447 containerd[1527]: 2025-10-13 05:50:31.898 [INFO][3806] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:31.965447 containerd[1527]: 2025-10-13 05:50:31.902 [INFO][3806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:31.966210 containerd[1527]: 2025-10-13 05:50:31.902 [INFO][3806] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:31.966210 containerd[1527]: 2025-10-13 05:50:31.905 [INFO][3806] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a Oct 13 05:50:31.966210 containerd[1527]: 2025-10-13 05:50:31.909 [INFO][3806] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:31.966210 containerd[1527]: 2025-10-13 05:50:31.916 [INFO][3806] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.50.65/26] block=192.168.50.64/26 handle="k8s-pod-network.b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:31.966210 containerd[1527]: 2025-10-13 05:50:31.916 [INFO][3806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.65/26] handle="k8s-pod-network.b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:31.966210 containerd[1527]: 2025-10-13 05:50:31.916 [INFO][3806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:50:31.966210 containerd[1527]: 2025-10-13 05:50:31.916 [INFO][3806] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.65/26] IPv6=[] ContainerID="b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" HandleID="k8s-pod-network.b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" Workload="ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0" Oct 13 05:50:31.966462 containerd[1527]: 2025-10-13 05:50:31.921 [INFO][3794] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" Namespace="calico-system" Pod="whisker-7c69f4cfbf-rz76g" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0", GenerateName:"whisker-7c69f4cfbf-", Namespace:"calico-system", SelfLink:"", UID:"dfb9c589-52c1-42a7-9ca0-b3b1837ec692", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c69f4cfbf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"", Pod:"whisker-7c69f4cfbf-rz76g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali05f41e68989", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:31.966462 containerd[1527]: 2025-10-13 05:50:31.921 [INFO][3794] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.65/32] ContainerID="b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" Namespace="calico-system" Pod="whisker-7c69f4cfbf-rz76g" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0" Oct 13 05:50:31.966592 containerd[1527]: 2025-10-13 05:50:31.921 [INFO][3794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05f41e68989 ContainerID="b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" Namespace="calico-system" Pod="whisker-7c69f4cfbf-rz76g" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0" Oct 13 05:50:31.966592 containerd[1527]: 2025-10-13 05:50:31.938 [INFO][3794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" Namespace="calico-system" Pod="whisker-7c69f4cfbf-rz76g" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0" Oct 13 05:50:31.966736 containerd[1527]: 2025-10-13 05:50:31.939 [INFO][3794] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" Namespace="calico-system" Pod="whisker-7c69f4cfbf-rz76g" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0", GenerateName:"whisker-7c69f4cfbf-", Namespace:"calico-system", SelfLink:"", UID:"dfb9c589-52c1-42a7-9ca0-b3b1837ec692", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c69f4cfbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a", Pod:"whisker-7c69f4cfbf-rz76g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali05f41e68989", MAC:"02:ce:b1:f2:82:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:31.966811 containerd[1527]: 2025-10-13 05:50:31.955 [INFO][3794] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" 
Namespace="calico-system" Pod="whisker-7c69f4cfbf-rz76g" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-whisker--7c69f4cfbf--rz76g-eth0" Oct 13 05:50:32.045194 containerd[1527]: time="2025-10-13T05:50:32.045123040Z" level=info msg="connecting to shim b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a" address="unix:///run/containerd/s/3a68361a5d555986d32152854c03efb8d49bb7d87495bf17210ffd4f19e4ee4e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:32.078253 systemd[1]: Started cri-containerd-b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a.scope - libcontainer container b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a. Oct 13 05:50:32.152344 containerd[1527]: time="2025-10-13T05:50:32.152302741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c69f4cfbf-rz76g,Uid:dfb9c589-52c1-42a7-9ca0-b3b1837ec692,Namespace:calico-system,Attempt:0,} returns sandbox id \"b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a\"" Oct 13 05:50:32.159128 containerd[1527]: time="2025-10-13T05:50:32.159036232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:50:32.517072 containerd[1527]: time="2025-10-13T05:50:32.517014103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77e21e581b51f2088b8a348c155ed30defcaea59224842646326434ac1ae6a95\" id:\"f8571db5d2fb8526a5ba3b05cafd3a17f2cfedeb0ce60a6159e414b55ba195d0\" pid:3909 exit_status:1 exited_at:{seconds:1760334632 nanos:516537301}" Oct 13 05:50:32.683272 kubelet[2706]: I1013 05:50:32.683056 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454979fa-f118-4d73-9724-b8e6b15a0083" path="/var/lib/kubelet/pods/454979fa-f118-4d73-9724-b8e6b15a0083/volumes" Oct 13 05:50:33.193215 systemd-networkd[1439]: cali05f41e68989: Gained IPv6LL Oct 13 05:50:33.889331 containerd[1527]: time="2025-10-13T05:50:33.889275201Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:33.890893 containerd[1527]: time="2025-10-13T05:50:33.890847663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:50:33.891433 containerd[1527]: time="2025-10-13T05:50:33.891401294Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:33.895320 containerd[1527]: time="2025-10-13T05:50:33.895274458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:33.906829 containerd[1527]: time="2025-10-13T05:50:33.906759143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.747671152s" Oct 13 05:50:33.906829 containerd[1527]: time="2025-10-13T05:50:33.906813304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:50:33.914313 containerd[1527]: time="2025-10-13T05:50:33.914251683Z" level=info msg="CreateContainer within sandbox \"b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:50:33.924129 containerd[1527]: time="2025-10-13T05:50:33.923144735Z" level=info msg="Container 64465baf412359510e4bd5369826a3f32dc8d0796683189397f309cd4b6d7fac: CDI devices from CRI 
Config.CDIDevices: []" Oct 13 05:50:33.934370 containerd[1527]: time="2025-10-13T05:50:33.934310919Z" level=info msg="CreateContainer within sandbox \"b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"64465baf412359510e4bd5369826a3f32dc8d0796683189397f309cd4b6d7fac\"" Oct 13 05:50:33.936855 containerd[1527]: time="2025-10-13T05:50:33.935604439Z" level=info msg="StartContainer for \"64465baf412359510e4bd5369826a3f32dc8d0796683189397f309cd4b6d7fac\"" Oct 13 05:50:33.939003 containerd[1527]: time="2025-10-13T05:50:33.938926946Z" level=info msg="connecting to shim 64465baf412359510e4bd5369826a3f32dc8d0796683189397f309cd4b6d7fac" address="unix:///run/containerd/s/3a68361a5d555986d32152854c03efb8d49bb7d87495bf17210ffd4f19e4ee4e" protocol=ttrpc version=3 Oct 13 05:50:33.975389 systemd[1]: Started cri-containerd-64465baf412359510e4bd5369826a3f32dc8d0796683189397f309cd4b6d7fac.scope - libcontainer container 64465baf412359510e4bd5369826a3f32dc8d0796683189397f309cd4b6d7fac. 
Oct 13 05:50:34.045047 containerd[1527]: time="2025-10-13T05:50:34.044366604Z" level=info msg="StartContainer for \"64465baf412359510e4bd5369826a3f32dc8d0796683189397f309cd4b6d7fac\" returns successfully" Oct 13 05:50:34.047437 containerd[1527]: time="2025-10-13T05:50:34.047313416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:50:34.684574 containerd[1527]: time="2025-10-13T05:50:34.684498265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84f6bbddc4-4gjtm,Uid:1108956f-278d-4133-bdcf-9491cc4fe979,Namespace:calico-system,Attempt:0,}" Oct 13 05:50:34.686554 containerd[1527]: time="2025-10-13T05:50:34.686493228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-8rn64,Uid:6bdd28f7-3520-4575-b733-44e93394fe34,Namespace:calico-system,Attempt:0,}" Oct 13 05:50:34.926486 systemd-networkd[1439]: calicd15932e7d9: Link UP Oct 13 05:50:34.928219 systemd-networkd[1439]: calicd15932e7d9: Gained carrier Oct 13 05:50:34.958534 containerd[1527]: 2025-10-13 05:50:34.750 [INFO][4052] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:50:34.958534 containerd[1527]: 2025-10-13 05:50:34.778 [INFO][4052] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0 goldmane-854f97d977- calico-system 6bdd28f7-3520-4575-b733-44e93394fe34 853 0 2025-10-13 05:50:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:854f97d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.1.0-5-82d9fc1916 goldmane-854f97d977-8rn64 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicd15932e7d9 [] [] }} ContainerID="778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" Namespace="calico-system" 
Pod="goldmane-854f97d977-8rn64" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-" Oct 13 05:50:34.958534 containerd[1527]: 2025-10-13 05:50:34.779 [INFO][4052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" Namespace="calico-system" Pod="goldmane-854f97d977-8rn64" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0" Oct 13 05:50:34.958534 containerd[1527]: 2025-10-13 05:50:34.860 [INFO][4075] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" HandleID="k8s-pod-network.778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" Workload="ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0" Oct 13 05:50:34.960193 containerd[1527]: 2025-10-13 05:50:34.861 [INFO][4075] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" HandleID="k8s-pod-network.778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" Workload="ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000352df0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-5-82d9fc1916", "pod":"goldmane-854f97d977-8rn64", "timestamp":"2025-10-13 05:50:34.860807548 +0000 UTC"}, Hostname:"ci-4459.1.0-5-82d9fc1916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:50:34.960193 containerd[1527]: 2025-10-13 05:50:34.861 [INFO][4075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 05:50:34.960193 containerd[1527]: 2025-10-13 05:50:34.861 [INFO][4075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:50:34.960193 containerd[1527]: 2025-10-13 05:50:34.861 [INFO][4075] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-5-82d9fc1916' Oct 13 05:50:34.960193 containerd[1527]: 2025-10-13 05:50:34.871 [INFO][4075] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:34.960193 containerd[1527]: 2025-10-13 05:50:34.880 [INFO][4075] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:34.960193 containerd[1527]: 2025-10-13 05:50:34.886 [INFO][4075] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:34.960193 containerd[1527]: 2025-10-13 05:50:34.889 [INFO][4075] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:34.960193 containerd[1527]: 2025-10-13 05:50:34.892 [INFO][4075] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:34.960425 containerd[1527]: 2025-10-13 05:50:34.892 [INFO][4075] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:34.960425 containerd[1527]: 2025-10-13 05:50:34.894 [INFO][4075] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7 Oct 13 05:50:34.960425 containerd[1527]: 2025-10-13 05:50:34.899 [INFO][4075] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" 
host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:34.960425 containerd[1527]: 2025-10-13 05:50:34.906 [INFO][4075] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.66/26] block=192.168.50.64/26 handle="k8s-pod-network.778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:34.960425 containerd[1527]: 2025-10-13 05:50:34.906 [INFO][4075] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.66/26] handle="k8s-pod-network.778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:34.960425 containerd[1527]: 2025-10-13 05:50:34.906 [INFO][4075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:50:34.960425 containerd[1527]: 2025-10-13 05:50:34.907 [INFO][4075] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.66/26] IPv6=[] ContainerID="778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" HandleID="k8s-pod-network.778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" Workload="ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0" Oct 13 05:50:34.960621 containerd[1527]: 2025-10-13 05:50:34.915 [INFO][4052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" Namespace="calico-system" Pod="goldmane-854f97d977-8rn64" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"6bdd28f7-3520-4575-b733-44e93394fe34", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"", Pod:"goldmane-854f97d977-8rn64", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicd15932e7d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:34.960621 containerd[1527]: 2025-10-13 05:50:34.915 [INFO][4052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.66/32] ContainerID="778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" Namespace="calico-system" Pod="goldmane-854f97d977-8rn64" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0" Oct 13 05:50:34.960704 containerd[1527]: 2025-10-13 05:50:34.916 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd15932e7d9 ContainerID="778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" Namespace="calico-system" Pod="goldmane-854f97d977-8rn64" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0" Oct 13 05:50:34.960704 containerd[1527]: 2025-10-13 05:50:34.928 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" Namespace="calico-system" Pod="goldmane-854f97d977-8rn64" 
WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0" Oct 13 05:50:34.960756 containerd[1527]: 2025-10-13 05:50:34.934 [INFO][4052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" Namespace="calico-system" Pod="goldmane-854f97d977-8rn64" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"6bdd28f7-3520-4575-b733-44e93394fe34", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7", Pod:"goldmane-854f97d977-8rn64", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicd15932e7d9", MAC:"0e:23:3a:93:79:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 
05:50:34.960839 containerd[1527]: 2025-10-13 05:50:34.954 [INFO][4052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" Namespace="calico-system" Pod="goldmane-854f97d977-8rn64" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-goldmane--854f97d977--8rn64-eth0" Oct 13 05:50:35.004107 containerd[1527]: time="2025-10-13T05:50:35.004029648Z" level=info msg="connecting to shim 778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7" address="unix:///run/containerd/s/63cab8375bfc65a266903f8d247317ccabfafd011eaa1eddecf40e0676e4a740" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:35.067679 systemd[1]: Started cri-containerd-778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7.scope - libcontainer container 778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7. Oct 13 05:50:35.069109 systemd-networkd[1439]: cali238f0a0a3ba: Link UP Oct 13 05:50:35.069338 systemd-networkd[1439]: cali238f0a0a3ba: Gained carrier Oct 13 05:50:35.110571 containerd[1527]: 2025-10-13 05:50:34.752 [INFO][4050] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:50:35.110571 containerd[1527]: 2025-10-13 05:50:34.779 [INFO][4050] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0 calico-kube-controllers-84f6bbddc4- calico-system 1108956f-278d-4133-bdcf-9491cc4fe979 852 0 2025-10-13 05:50:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84f6bbddc4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.1.0-5-82d9fc1916 calico-kube-controllers-84f6bbddc4-4gjtm eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali238f0a0a3ba [] [] }} ContainerID="4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" Namespace="calico-system" Pod="calico-kube-controllers-84f6bbddc4-4gjtm" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-" Oct 13 05:50:35.110571 containerd[1527]: 2025-10-13 05:50:34.779 [INFO][4050] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" Namespace="calico-system" Pod="calico-kube-controllers-84f6bbddc4-4gjtm" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0" Oct 13 05:50:35.110571 containerd[1527]: 2025-10-13 05:50:34.860 [INFO][4076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" HandleID="k8s-pod-network.4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0" Oct 13 05:50:35.110918 containerd[1527]: 2025-10-13 05:50:34.861 [INFO][4076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" HandleID="k8s-pod-network.4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003298a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-5-82d9fc1916", "pod":"calico-kube-controllers-84f6bbddc4-4gjtm", "timestamp":"2025-10-13 05:50:34.860927307 +0000 UTC"}, Hostname:"ci-4459.1.0-5-82d9fc1916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Oct 13 05:50:35.110918 containerd[1527]: 2025-10-13 05:50:34.862 [INFO][4076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:50:35.110918 containerd[1527]: 2025-10-13 05:50:34.907 [INFO][4076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:50:35.110918 containerd[1527]: 2025-10-13 05:50:34.908 [INFO][4076] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-5-82d9fc1916' Oct 13 05:50:35.110918 containerd[1527]: 2025-10-13 05:50:34.971 [INFO][4076] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:35.110918 containerd[1527]: 2025-10-13 05:50:34.988 [INFO][4076] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:35.110918 containerd[1527]: 2025-10-13 05:50:34.998 [INFO][4076] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:35.110918 containerd[1527]: 2025-10-13 05:50:35.008 [INFO][4076] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:35.110918 containerd[1527]: 2025-10-13 05:50:35.012 [INFO][4076] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:35.111798 containerd[1527]: 2025-10-13 05:50:35.013 [INFO][4076] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:35.111798 containerd[1527]: 2025-10-13 05:50:35.018 [INFO][4076] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf Oct 13 05:50:35.111798 containerd[1527]: 2025-10-13 05:50:35.032 [INFO][4076] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:35.111798 containerd[1527]: 2025-10-13 05:50:35.044 [INFO][4076] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.67/26] block=192.168.50.64/26 handle="k8s-pod-network.4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:35.111798 containerd[1527]: 2025-10-13 05:50:35.044 [INFO][4076] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.67/26] handle="k8s-pod-network.4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:35.111798 containerd[1527]: 2025-10-13 05:50:35.044 [INFO][4076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:50:35.111798 containerd[1527]: 2025-10-13 05:50:35.044 [INFO][4076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.67/26] IPv6=[] ContainerID="4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" HandleID="k8s-pod-network.4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0" Oct 13 05:50:35.112952 containerd[1527]: 2025-10-13 05:50:35.059 [INFO][4050] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" Namespace="calico-system" Pod="calico-kube-controllers-84f6bbddc4-4gjtm" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0", GenerateName:"calico-kube-controllers-84f6bbddc4-", 
Namespace:"calico-system", SelfLink:"", UID:"1108956f-278d-4133-bdcf-9491cc4fe979", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84f6bbddc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"", Pod:"calico-kube-controllers-84f6bbddc4-4gjtm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali238f0a0a3ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:35.113291 containerd[1527]: 2025-10-13 05:50:35.060 [INFO][4050] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.67/32] ContainerID="4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" Namespace="calico-system" Pod="calico-kube-controllers-84f6bbddc4-4gjtm" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0" Oct 13 05:50:35.113291 containerd[1527]: 2025-10-13 05:50:35.060 [INFO][4050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali238f0a0a3ba ContainerID="4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" Namespace="calico-system" Pod="calico-kube-controllers-84f6bbddc4-4gjtm" 
WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0" Oct 13 05:50:35.113291 containerd[1527]: 2025-10-13 05:50:35.073 [INFO][4050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" Namespace="calico-system" Pod="calico-kube-controllers-84f6bbddc4-4gjtm" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0" Oct 13 05:50:35.113440 containerd[1527]: 2025-10-13 05:50:35.078 [INFO][4050] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" Namespace="calico-system" Pod="calico-kube-controllers-84f6bbddc4-4gjtm" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0", GenerateName:"calico-kube-controllers-84f6bbddc4-", Namespace:"calico-system", SelfLink:"", UID:"1108956f-278d-4133-bdcf-9491cc4fe979", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84f6bbddc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", 
ContainerID:"4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf", Pod:"calico-kube-controllers-84f6bbddc4-4gjtm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali238f0a0a3ba", MAC:"ce:3d:d3:e5:65:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:35.113505 containerd[1527]: 2025-10-13 05:50:35.106 [INFO][4050] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" Namespace="calico-system" Pod="calico-kube-controllers-84f6bbddc4-4gjtm" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--kube--controllers--84f6bbddc4--4gjtm-eth0" Oct 13 05:50:35.217010 containerd[1527]: time="2025-10-13T05:50:35.216263551Z" level=info msg="connecting to shim 4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf" address="unix:///run/containerd/s/2507fc2d5ec6b9ea2a401bb6c82bc17c64cd7741a5ff4bb136793638797260e3" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:35.257563 containerd[1527]: time="2025-10-13T05:50:35.257503828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-8rn64,Uid:6bdd28f7-3520-4575-b733-44e93394fe34,Namespace:calico-system,Attempt:0,} returns sandbox id \"778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7\"" Oct 13 05:50:35.266271 systemd[1]: Started cri-containerd-4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf.scope - libcontainer container 4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf. 
Oct 13 05:50:35.335592 containerd[1527]: time="2025-10-13T05:50:35.335545738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84f6bbddc4-4gjtm,Uid:1108956f-278d-4133-bdcf-9491cc4fe979,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf\"" Oct 13 05:50:35.682233 containerd[1527]: time="2025-10-13T05:50:35.682110120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd79d768-qqxmr,Uid:bbf88e0c-65f2-4474-8006-a06f10ea6a87,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:50:35.982129 systemd-networkd[1439]: cali104a7505993: Link UP Oct 13 05:50:35.986441 systemd-networkd[1439]: cali104a7505993: Gained carrier Oct 13 05:50:36.074815 containerd[1527]: 2025-10-13 05:50:35.729 [INFO][4208] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:50:36.074815 containerd[1527]: 2025-10-13 05:50:35.744 [INFO][4208] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0 calico-apiserver-6cd79d768- calico-apiserver bbf88e0c-65f2-4474-8006-a06f10ea6a87 851 0 2025-10-13 05:50:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd79d768 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-5-82d9fc1916 calico-apiserver-6cd79d768-qqxmr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali104a7505993 [] [] }} ContainerID="ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-qqxmr" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-" Oct 13 05:50:36.074815 containerd[1527]: 2025-10-13 
05:50:35.744 [INFO][4208] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-qqxmr" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0" Oct 13 05:50:36.074815 containerd[1527]: 2025-10-13 05:50:35.817 [INFO][4219] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" HandleID="k8s-pod-network.ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0" Oct 13 05:50:36.075590 containerd[1527]: 2025-10-13 05:50:35.817 [INFO][4219] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" HandleID="k8s-pod-network.ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-5-82d9fc1916", "pod":"calico-apiserver-6cd79d768-qqxmr", "timestamp":"2025-10-13 05:50:35.817697788 +0000 UTC"}, Hostname:"ci-4459.1.0-5-82d9fc1916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:50:36.075590 containerd[1527]: 2025-10-13 05:50:35.818 [INFO][4219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:50:36.075590 containerd[1527]: 2025-10-13 05:50:35.818 [INFO][4219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:50:36.075590 containerd[1527]: 2025-10-13 05:50:35.818 [INFO][4219] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-5-82d9fc1916' Oct 13 05:50:36.075590 containerd[1527]: 2025-10-13 05:50:35.830 [INFO][4219] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.075590 containerd[1527]: 2025-10-13 05:50:35.883 [INFO][4219] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.075590 containerd[1527]: 2025-10-13 05:50:35.895 [INFO][4219] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.075590 containerd[1527]: 2025-10-13 05:50:35.899 [INFO][4219] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.075590 containerd[1527]: 2025-10-13 05:50:35.906 [INFO][4219] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.079071 containerd[1527]: 2025-10-13 05:50:35.906 [INFO][4219] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.079071 containerd[1527]: 2025-10-13 05:50:35.918 [INFO][4219] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed Oct 13 05:50:36.079071 containerd[1527]: 2025-10-13 05:50:35.928 [INFO][4219] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.079071 containerd[1527]: 2025-10-13 05:50:35.953 [INFO][4219] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.50.68/26] block=192.168.50.64/26 handle="k8s-pod-network.ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.079071 containerd[1527]: 2025-10-13 05:50:35.953 [INFO][4219] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.68/26] handle="k8s-pod-network.ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.079071 containerd[1527]: 2025-10-13 05:50:35.953 [INFO][4219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:50:36.079071 containerd[1527]: 2025-10-13 05:50:35.953 [INFO][4219] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.68/26] IPv6=[] ContainerID="ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" HandleID="k8s-pod-network.ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0" Oct 13 05:50:36.079445 containerd[1527]: 2025-10-13 05:50:35.966 [INFO][4208] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-qqxmr" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0", GenerateName:"calico-apiserver-6cd79d768-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbf88e0c-65f2-4474-8006-a06f10ea6a87", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6cd79d768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"", Pod:"calico-apiserver-6cd79d768-qqxmr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali104a7505993", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:36.079581 containerd[1527]: 2025-10-13 05:50:35.966 [INFO][4208] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.68/32] ContainerID="ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-qqxmr" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0" Oct 13 05:50:36.079581 containerd[1527]: 2025-10-13 05:50:35.966 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali104a7505993 ContainerID="ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-qqxmr" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0" Oct 13 05:50:36.079581 containerd[1527]: 2025-10-13 05:50:35.992 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-qqxmr" 
WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0" Oct 13 05:50:36.079710 containerd[1527]: 2025-10-13 05:50:35.993 [INFO][4208] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-qqxmr" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0", GenerateName:"calico-apiserver-6cd79d768-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbf88e0c-65f2-4474-8006-a06f10ea6a87", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd79d768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed", Pod:"calico-apiserver-6cd79d768-qqxmr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali104a7505993", MAC:"e6:53:f8:7e:19:13", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:36.079807 containerd[1527]: 2025-10-13 05:50:36.062 [INFO][4208] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-qqxmr" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--qqxmr-eth0" Oct 13 05:50:36.139066 systemd-networkd[1439]: calicd15932e7d9: Gained IPv6LL Oct 13 05:50:36.181285 containerd[1527]: time="2025-10-13T05:50:36.181188535Z" level=info msg="connecting to shim ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed" address="unix:///run/containerd/s/1b30dfff9da1e9b0b72b4e74d21aee016833ab355ddbb6a1f02174e3f823100b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:36.301040 systemd[1]: Started cri-containerd-ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed.scope - libcontainer container ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed. 
Oct 13 05:50:36.466614 containerd[1527]: time="2025-10-13T05:50:36.466550630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd79d768-qqxmr,Uid:bbf88e0c-65f2-4474-8006-a06f10ea6a87,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed\"" Oct 13 05:50:36.695169 containerd[1527]: time="2025-10-13T05:50:36.694021509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9868759c-srwzw,Uid:f96336ff-8765-4a8c-9987-b1ec94233d7e,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:50:36.969597 systemd-networkd[1439]: calie28215c6f81: Link UP Oct 13 05:50:36.972213 systemd-networkd[1439]: cali238f0a0a3ba: Gained IPv6LL Oct 13 05:50:36.972568 systemd-networkd[1439]: calie28215c6f81: Gained carrier Oct 13 05:50:36.997784 containerd[1527]: 2025-10-13 05:50:36.749 [INFO][4301] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:50:36.997784 containerd[1527]: 2025-10-13 05:50:36.771 [INFO][4301] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0 calico-apiserver-f9868759c- calico-apiserver f96336ff-8765-4a8c-9987-b1ec94233d7e 850 0 2025-10-13 05:50:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f9868759c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-5-82d9fc1916 calico-apiserver-f9868759c-srwzw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie28215c6f81 [] [] }} ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-srwzw" 
WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-" Oct 13 05:50:36.997784 containerd[1527]: 2025-10-13 05:50:36.771 [INFO][4301] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-srwzw" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" Oct 13 05:50:36.997784 containerd[1527]: 2025-10-13 05:50:36.851 [INFO][4313] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" HandleID="k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" Oct 13 05:50:36.998336 containerd[1527]: 2025-10-13 05:50:36.851 [INFO][4313] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" HandleID="k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf220), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-5-82d9fc1916", "pod":"calico-apiserver-f9868759c-srwzw", "timestamp":"2025-10-13 05:50:36.850941273 +0000 UTC"}, Hostname:"ci-4459.1.0-5-82d9fc1916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:50:36.998336 containerd[1527]: 2025-10-13 05:50:36.852 [INFO][4313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 05:50:36.998336 containerd[1527]: 2025-10-13 05:50:36.852 [INFO][4313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:50:36.998336 containerd[1527]: 2025-10-13 05:50:36.852 [INFO][4313] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-5-82d9fc1916' Oct 13 05:50:36.998336 containerd[1527]: 2025-10-13 05:50:36.870 [INFO][4313] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.998336 containerd[1527]: 2025-10-13 05:50:36.883 [INFO][4313] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.998336 containerd[1527]: 2025-10-13 05:50:36.897 [INFO][4313] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.998336 containerd[1527]: 2025-10-13 05:50:36.904 [INFO][4313] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.998336 containerd[1527]: 2025-10-13 05:50:36.910 [INFO][4313] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.998740 containerd[1527]: 2025-10-13 05:50:36.910 [INFO][4313] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.998740 containerd[1527]: 2025-10-13 05:50:36.914 [INFO][4313] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d Oct 13 05:50:36.998740 containerd[1527]: 2025-10-13 05:50:36.927 [INFO][4313] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" 
host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.998740 containerd[1527]: 2025-10-13 05:50:36.942 [INFO][4313] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.69/26] block=192.168.50.64/26 handle="k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.998740 containerd[1527]: 2025-10-13 05:50:36.942 [INFO][4313] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.69/26] handle="k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:36.998740 containerd[1527]: 2025-10-13 05:50:36.945 [INFO][4313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:50:36.998740 containerd[1527]: 2025-10-13 05:50:36.946 [INFO][4313] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.69/26] IPv6=[] ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" HandleID="k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" Oct 13 05:50:36.999887 containerd[1527]: 2025-10-13 05:50:36.957 [INFO][4301] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-srwzw" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0", GenerateName:"calico-apiserver-f9868759c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f96336ff-8765-4a8c-9987-b1ec94233d7e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 6, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9868759c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"", Pod:"calico-apiserver-f9868759c-srwzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie28215c6f81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:37.000086 containerd[1527]: 2025-10-13 05:50:36.958 [INFO][4301] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.69/32] ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-srwzw" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" Oct 13 05:50:37.000086 containerd[1527]: 2025-10-13 05:50:36.958 [INFO][4301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie28215c6f81 ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-srwzw" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" Oct 13 05:50:37.000086 containerd[1527]: 2025-10-13 05:50:36.970 [INFO][4301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-srwzw" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" Oct 13 05:50:37.000212 containerd[1527]: 2025-10-13 05:50:36.970 [INFO][4301] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-srwzw" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0", GenerateName:"calico-apiserver-f9868759c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f96336ff-8765-4a8c-9987-b1ec94233d7e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9868759c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d", Pod:"calico-apiserver-f9868759c-srwzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie28215c6f81", MAC:"3a:e1:b7:8b:7a:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:37.000307 containerd[1527]: 2025-10-13 05:50:36.994 [INFO][4301] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-srwzw" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" Oct 13 05:50:37.056583 containerd[1527]: time="2025-10-13T05:50:37.056471936Z" level=info msg="connecting to shim 5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" address="unix:///run/containerd/s/813a55f6d5f995e41d940b888c13c635324b5bec6d12e4664f3a9d3f9c6d8415" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:37.121612 systemd[1]: Started cri-containerd-5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d.scope - libcontainer container 5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d. Oct 13 05:50:37.355121 systemd-networkd[1439]: cali104a7505993: Gained IPv6LL Oct 13 05:50:37.386742 containerd[1527]: time="2025-10-13T05:50:37.383146845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9868759c-srwzw,Uid:f96336ff-8765-4a8c-9987-b1ec94233d7e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d\"" Oct 13 05:50:37.449376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2691994835.mount: Deactivated successfully. 
Oct 13 05:50:37.465838 containerd[1527]: time="2025-10-13T05:50:37.465765706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:37.469292 containerd[1527]: time="2025-10-13T05:50:37.468957401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:50:37.469687 containerd[1527]: time="2025-10-13T05:50:37.469642770Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:37.473326 containerd[1527]: time="2025-10-13T05:50:37.473233770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:37.474630 containerd[1527]: time="2025-10-13T05:50:37.474473470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.426849807s" Oct 13 05:50:37.474630 containerd[1527]: time="2025-10-13T05:50:37.474515175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:50:37.478190 containerd[1527]: time="2025-10-13T05:50:37.478149971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:50:37.482732 containerd[1527]: time="2025-10-13T05:50:37.482666707Z" level=info msg="CreateContainer within sandbox 
\"b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:50:37.491625 containerd[1527]: time="2025-10-13T05:50:37.491483089Z" level=info msg="Container 12c5f8423d583cdd330424889a29b1e08dcda3ca2af32c1bd44ee0975dbac03d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:37.500734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075142113.mount: Deactivated successfully. Oct 13 05:50:37.507172 containerd[1527]: time="2025-10-13T05:50:37.507009304Z" level=info msg="CreateContainer within sandbox \"b315862b471ab217f86da821cb2e3a8670a7826352dcc0ca71ba1136eaa6094a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"12c5f8423d583cdd330424889a29b1e08dcda3ca2af32c1bd44ee0975dbac03d\"" Oct 13 05:50:37.508276 containerd[1527]: time="2025-10-13T05:50:37.508060226Z" level=info msg="StartContainer for \"12c5f8423d583cdd330424889a29b1e08dcda3ca2af32c1bd44ee0975dbac03d\"" Oct 13 05:50:37.511421 containerd[1527]: time="2025-10-13T05:50:37.511350064Z" level=info msg="connecting to shim 12c5f8423d583cdd330424889a29b1e08dcda3ca2af32c1bd44ee0975dbac03d" address="unix:///run/containerd/s/3a68361a5d555986d32152854c03efb8d49bb7d87495bf17210ffd4f19e4ee4e" protocol=ttrpc version=3 Oct 13 05:50:37.556297 systemd[1]: Started cri-containerd-12c5f8423d583cdd330424889a29b1e08dcda3ca2af32c1bd44ee0975dbac03d.scope - libcontainer container 12c5f8423d583cdd330424889a29b1e08dcda3ca2af32c1bd44ee0975dbac03d. 
Oct 13 05:50:37.690031 containerd[1527]: time="2025-10-13T05:50:37.688716564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rp8ql,Uid:231efcce-c47b-4a1a-8f52-94bd62eab694,Namespace:calico-system,Attempt:0,}" Oct 13 05:50:37.692771 containerd[1527]: time="2025-10-13T05:50:37.691913290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dp622,Uid:95c84384-08f9-4081-baf0-5af6d2526999,Namespace:kube-system,Attempt:0,}" Oct 13 05:50:37.692897 kubelet[2706]: E1013 05:50:37.690216 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:37.774635 containerd[1527]: time="2025-10-13T05:50:37.773549287Z" level=info msg="StartContainer for \"12c5f8423d583cdd330424889a29b1e08dcda3ca2af32c1bd44ee0975dbac03d\" returns successfully" Oct 13 05:50:38.068655 systemd-networkd[1439]: cali55f11bef220: Link UP Oct 13 05:50:38.070174 systemd-networkd[1439]: cali55f11bef220: Gained carrier Oct 13 05:50:38.111336 containerd[1527]: 2025-10-13 05:50:37.794 [INFO][4415] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:50:38.111336 containerd[1527]: 2025-10-13 05:50:37.828 [INFO][4415] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0 csi-node-driver- calico-system 231efcce-c47b-4a1a-8f52-94bd62eab694 738 0 2025-10-13 05:50:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:f8549cf5c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.1.0-5-82d9fc1916 csi-node-driver-rp8ql eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] 
cali55f11bef220 [] [] }} ContainerID="19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" Namespace="calico-system" Pod="csi-node-driver-rp8ql" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-" Oct 13 05:50:38.111336 containerd[1527]: 2025-10-13 05:50:37.828 [INFO][4415] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" Namespace="calico-system" Pod="csi-node-driver-rp8ql" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0" Oct 13 05:50:38.111336 containerd[1527]: 2025-10-13 05:50:37.967 [INFO][4448] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" HandleID="k8s-pod-network.19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" Workload="ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0" Oct 13 05:50:38.111766 containerd[1527]: 2025-10-13 05:50:37.967 [INFO][4448] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" HandleID="k8s-pod-network.19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" Workload="ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00060a280), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-5-82d9fc1916", "pod":"csi-node-driver-rp8ql", "timestamp":"2025-10-13 05:50:37.96686762 +0000 UTC"}, Hostname:"ci-4459.1.0-5-82d9fc1916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:50:38.111766 containerd[1527]: 2025-10-13 05:50:37.967 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 05:50:38.111766 containerd[1527]: 2025-10-13 05:50:37.967 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:50:38.111766 containerd[1527]: 2025-10-13 05:50:37.967 [INFO][4448] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-5-82d9fc1916' Oct 13 05:50:38.111766 containerd[1527]: 2025-10-13 05:50:37.990 [INFO][4448] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.111766 containerd[1527]: 2025-10-13 05:50:38.001 [INFO][4448] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.111766 containerd[1527]: 2025-10-13 05:50:38.009 [INFO][4448] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.111766 containerd[1527]: 2025-10-13 05:50:38.014 [INFO][4448] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.111766 containerd[1527]: 2025-10-13 05:50:38.019 [INFO][4448] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.113183 containerd[1527]: 2025-10-13 05:50:38.020 [INFO][4448] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.113183 containerd[1527]: 2025-10-13 05:50:38.025 [INFO][4448] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2 Oct 13 05:50:38.113183 containerd[1527]: 2025-10-13 05:50:38.038 [INFO][4448] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" 
host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.113183 containerd[1527]: 2025-10-13 05:50:38.056 [INFO][4448] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.70/26] block=192.168.50.64/26 handle="k8s-pod-network.19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.113183 containerd[1527]: 2025-10-13 05:50:38.056 [INFO][4448] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.70/26] handle="k8s-pod-network.19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.113183 containerd[1527]: 2025-10-13 05:50:38.056 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:50:38.113183 containerd[1527]: 2025-10-13 05:50:38.056 [INFO][4448] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.70/26] IPv6=[] ContainerID="19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" HandleID="k8s-pod-network.19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" Workload="ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0" Oct 13 05:50:38.114004 containerd[1527]: 2025-10-13 05:50:38.063 [INFO][4415] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" Namespace="calico-system" Pod="csi-node-driver-rp8ql" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"231efcce-c47b-4a1a-8f52-94bd62eab694", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"", Pod:"csi-node-driver-rp8ql", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali55f11bef220", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:38.114190 containerd[1527]: 2025-10-13 05:50:38.063 [INFO][4415] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.70/32] ContainerID="19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" Namespace="calico-system" Pod="csi-node-driver-rp8ql" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0" Oct 13 05:50:38.114190 containerd[1527]: 2025-10-13 05:50:38.064 [INFO][4415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55f11bef220 ContainerID="19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" Namespace="calico-system" Pod="csi-node-driver-rp8ql" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0" Oct 13 05:50:38.114190 containerd[1527]: 2025-10-13 05:50:38.071 [INFO][4415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" Namespace="calico-system" 
Pod="csi-node-driver-rp8ql" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0" Oct 13 05:50:38.114437 containerd[1527]: 2025-10-13 05:50:38.077 [INFO][4415] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" Namespace="calico-system" Pod="csi-node-driver-rp8ql" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"231efcce-c47b-4a1a-8f52-94bd62eab694", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2", Pod:"csi-node-driver-rp8ql", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali55f11bef220", MAC:"d6:f7:e5:ca:c1:b1", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:38.115257 containerd[1527]: 2025-10-13 05:50:38.104 [INFO][4415] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" Namespace="calico-system" Pod="csi-node-driver-rp8ql" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-csi--node--driver--rp8ql-eth0" Oct 13 05:50:38.161749 containerd[1527]: time="2025-10-13T05:50:38.161630795Z" level=info msg="connecting to shim 19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2" address="unix:///run/containerd/s/ab60e2fea7f65d3b6b89719d2f986c1c72ec4d1a87de811713906c04ec6e6590" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:38.186095 systemd-networkd[1439]: calie28215c6f81: Gained IPv6LL Oct 13 05:50:38.206473 systemd[1]: Started cri-containerd-19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2.scope - libcontainer container 19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2. 
Oct 13 05:50:38.253145 systemd-networkd[1439]: cali7debb9fb4de: Link UP Oct 13 05:50:38.258760 systemd-networkd[1439]: cali7debb9fb4de: Gained carrier Oct 13 05:50:38.283995 kubelet[2706]: I1013 05:50:38.283715 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7c69f4cfbf-rz76g" podStartSLOduration=1.9650683930000001 podStartE2EDuration="7.283690684s" podCreationTimestamp="2025-10-13 05:50:31 +0000 UTC" firstStartedPulling="2025-10-13 05:50:32.158606129 +0000 UTC m=+43.664316865" lastFinishedPulling="2025-10-13 05:50:37.477228412 +0000 UTC m=+48.982939156" observedRunningTime="2025-10-13 05:50:38.27795593 +0000 UTC m=+49.783666678" watchObservedRunningTime="2025-10-13 05:50:38.283690684 +0000 UTC m=+49.789401429" Oct 13 05:50:38.296072 kubelet[2706]: I1013 05:50:38.296016 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:50:38.296594 kubelet[2706]: E1013 05:50:38.296569 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:38.316016 containerd[1527]: 2025-10-13 05:50:37.827 [INFO][4420] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:50:38.316016 containerd[1527]: 2025-10-13 05:50:37.875 [INFO][4420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0 coredns-66bc5c9577- kube-system 95c84384-08f9-4081-baf0-5af6d2526999 846 0 2025-10-13 05:49:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-5-82d9fc1916 coredns-66bc5c9577-dp622 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7debb9fb4de [{dns UDP 53 0 } {dns-tcp TCP 
53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" Namespace="kube-system" Pod="coredns-66bc5c9577-dp622" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-" Oct 13 05:50:38.316016 containerd[1527]: 2025-10-13 05:50:37.876 [INFO][4420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" Namespace="kube-system" Pod="coredns-66bc5c9577-dp622" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0" Oct 13 05:50:38.316016 containerd[1527]: 2025-10-13 05:50:38.013 [INFO][4453] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" HandleID="k8s-pod-network.a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" Workload="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0" Oct 13 05:50:38.317233 containerd[1527]: 2025-10-13 05:50:38.015 [INFO][4453] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" HandleID="k8s-pod-network.a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" Workload="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033d740), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-5-82d9fc1916", "pod":"coredns-66bc5c9577-dp622", "timestamp":"2025-10-13 05:50:38.013444035 +0000 UTC"}, Hostname:"ci-4459.1.0-5-82d9fc1916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:50:38.317233 containerd[1527]: 2025-10-13 05:50:38.015 [INFO][4453] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:50:38.317233 containerd[1527]: 2025-10-13 05:50:38.056 [INFO][4453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:50:38.317233 containerd[1527]: 2025-10-13 05:50:38.056 [INFO][4453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-5-82d9fc1916' Oct 13 05:50:38.317233 containerd[1527]: 2025-10-13 05:50:38.090 [INFO][4453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.317233 containerd[1527]: 2025-10-13 05:50:38.118 [INFO][4453] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.317233 containerd[1527]: 2025-10-13 05:50:38.138 [INFO][4453] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.317233 containerd[1527]: 2025-10-13 05:50:38.159 [INFO][4453] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.317233 containerd[1527]: 2025-10-13 05:50:38.164 [INFO][4453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.318451 containerd[1527]: 2025-10-13 05:50:38.164 [INFO][4453] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.318451 containerd[1527]: 2025-10-13 05:50:38.167 [INFO][4453] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d Oct 13 05:50:38.318451 containerd[1527]: 2025-10-13 05:50:38.174 [INFO][4453] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 
handle="k8s-pod-network.a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.318451 containerd[1527]: 2025-10-13 05:50:38.215 [INFO][4453] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.71/26] block=192.168.50.64/26 handle="k8s-pod-network.a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.318451 containerd[1527]: 2025-10-13 05:50:38.215 [INFO][4453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.71/26] handle="k8s-pod-network.a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:38.318451 containerd[1527]: 2025-10-13 05:50:38.215 [INFO][4453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:50:38.318451 containerd[1527]: 2025-10-13 05:50:38.215 [INFO][4453] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.71/26] IPv6=[] ContainerID="a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" HandleID="k8s-pod-network.a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" Workload="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0" Oct 13 05:50:38.318659 containerd[1527]: 2025-10-13 05:50:38.234 [INFO][4420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" Namespace="kube-system" Pod="coredns-66bc5c9577-dp622" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"95c84384-08f9-4081-baf0-5af6d2526999", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 49, 55, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"", Pod:"coredns-66bc5c9577-dp622", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7debb9fb4de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:38.318659 containerd[1527]: 2025-10-13 05:50:38.234 [INFO][4420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.71/32] ContainerID="a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" Namespace="kube-system" Pod="coredns-66bc5c9577-dp622" 
WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0" Oct 13 05:50:38.318659 containerd[1527]: 2025-10-13 05:50:38.234 [INFO][4420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7debb9fb4de ContainerID="a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" Namespace="kube-system" Pod="coredns-66bc5c9577-dp622" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0" Oct 13 05:50:38.318659 containerd[1527]: 2025-10-13 05:50:38.267 [INFO][4420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" Namespace="kube-system" Pod="coredns-66bc5c9577-dp622" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0" Oct 13 05:50:38.318659 containerd[1527]: 2025-10-13 05:50:38.274 [INFO][4420] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" Namespace="kube-system" Pod="coredns-66bc5c9577-dp622" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"95c84384-08f9-4081-baf0-5af6d2526999", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 49, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d", Pod:"coredns-66bc5c9577-dp622", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7debb9fb4de", MAC:"86:ac:ee:bf:8d:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:38.320252 containerd[1527]: 2025-10-13 05:50:38.305 [INFO][4420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" Namespace="kube-system" Pod="coredns-66bc5c9577-dp622" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--dp622-eth0" Oct 13 05:50:38.339035 containerd[1527]: time="2025-10-13T05:50:38.337870597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rp8ql,Uid:231efcce-c47b-4a1a-8f52-94bd62eab694,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2\"" Oct 13 05:50:38.379574 containerd[1527]: time="2025-10-13T05:50:38.379328423Z" level=info msg="connecting to shim a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d" address="unix:///run/containerd/s/4fe1f30dadabcc97cedf50f4745bf051dc741b8b5c47390103b32ff3db5de373" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:38.452269 systemd[1]: Started cri-containerd-a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d.scope - libcontainer container a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d. Oct 13 05:50:38.557573 containerd[1527]: time="2025-10-13T05:50:38.557512430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dp622,Uid:95c84384-08f9-4081-baf0-5af6d2526999,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d\"" Oct 13 05:50:38.560112 kubelet[2706]: E1013 05:50:38.560073 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:38.568289 containerd[1527]: time="2025-10-13T05:50:38.568150364Z" level=info msg="CreateContainer within sandbox \"a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:50:38.609142 containerd[1527]: time="2025-10-13T05:50:38.608828109Z" level=info msg="Container 1e8123c70b7122e9230853167413a896561bf3ec49dfac7eb85073a33234ffd1: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:38.616552 containerd[1527]: time="2025-10-13T05:50:38.616485082Z" level=info msg="CreateContainer within sandbox \"a2fed1d713aa07cc546fd1c4fb574e4bf6d8a82ad590cdb15fc550cda37cb76d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e8123c70b7122e9230853167413a896561bf3ec49dfac7eb85073a33234ffd1\"" Oct 13 
05:50:38.618853 containerd[1527]: time="2025-10-13T05:50:38.618770952Z" level=info msg="StartContainer for \"1e8123c70b7122e9230853167413a896561bf3ec49dfac7eb85073a33234ffd1\"" Oct 13 05:50:38.622108 containerd[1527]: time="2025-10-13T05:50:38.621727371Z" level=info msg="connecting to shim 1e8123c70b7122e9230853167413a896561bf3ec49dfac7eb85073a33234ffd1" address="unix:///run/containerd/s/4fe1f30dadabcc97cedf50f4745bf051dc741b8b5c47390103b32ff3db5de373" protocol=ttrpc version=3 Oct 13 05:50:38.658222 systemd[1]: Started cri-containerd-1e8123c70b7122e9230853167413a896561bf3ec49dfac7eb85073a33234ffd1.scope - libcontainer container 1e8123c70b7122e9230853167413a896561bf3ec49dfac7eb85073a33234ffd1. Oct 13 05:50:38.683904 kubelet[2706]: E1013 05:50:38.683825 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:38.687940 containerd[1527]: time="2025-10-13T05:50:38.687477643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7lpb5,Uid:761616b9-6d45-4378-96fc-c3c6dd8d530b,Namespace:kube-system,Attempt:0,}" Oct 13 05:50:38.688655 containerd[1527]: time="2025-10-13T05:50:38.688599257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9868759c-jzd69,Uid:625d1f05-914e-4b92-8eda-0f0088321193,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:50:38.839794 containerd[1527]: time="2025-10-13T05:50:38.839709036Z" level=info msg="StartContainer for \"1e8123c70b7122e9230853167413a896561bf3ec49dfac7eb85073a33234ffd1\" returns successfully" Oct 13 05:50:39.263870 kubelet[2706]: E1013 05:50:39.263827 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:39.266115 kubelet[2706]: E1013 05:50:39.266079 2706 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:39.273172 systemd-networkd[1439]: cali55f11bef220: Gained IPv6LL Oct 13 05:50:39.377778 systemd-networkd[1439]: cali7013cbd75bb: Link UP Oct 13 05:50:39.380473 systemd-networkd[1439]: cali7013cbd75bb: Gained carrier Oct 13 05:50:39.430809 kubelet[2706]: I1013 05:50:39.429226 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dp622" podStartSLOduration=44.429200132 podStartE2EDuration="44.429200132s" podCreationTimestamp="2025-10-13 05:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:50:39.341322423 +0000 UTC m=+50.847033172" watchObservedRunningTime="2025-10-13 05:50:39.429200132 +0000 UTC m=+50.934910880" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:38.895 [INFO][4598] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:38.917 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0 coredns-66bc5c9577- kube-system 761616b9-6d45-4378-96fc-c3c6dd8d530b 849 0 2025-10-13 05:49:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-5-82d9fc1916 coredns-66bc5c9577-7lpb5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7013cbd75bb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" Namespace="kube-system" 
Pod="coredns-66bc5c9577-7lpb5" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:38.917 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" Namespace="kube-system" Pod="coredns-66bc5c9577-7lpb5" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.109 [INFO][4639] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" HandleID="k8s-pod-network.3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" Workload="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.109 [INFO][4639] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" HandleID="k8s-pod-network.3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" Workload="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec7d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-5-82d9fc1916", "pod":"coredns-66bc5c9577-7lpb5", "timestamp":"2025-10-13 05:50:39.10913157 +0000 UTC"}, Hostname:"ci-4459.1.0-5-82d9fc1916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.109 [INFO][4639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.109 [INFO][4639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.112 [INFO][4639] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-5-82d9fc1916' Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.141 [INFO][4639] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.181 [INFO][4639] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.211 [INFO][4639] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.228 [INFO][4639] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.261 [INFO][4639] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.261 [INFO][4639] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.289 [INFO][4639] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737 Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.321 [INFO][4639] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.348 [INFO][4639] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.50.72/26] block=192.168.50.64/26 handle="k8s-pod-network.3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.349 [INFO][4639] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.72/26] handle="k8s-pod-network.3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.349 [INFO][4639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:50:39.443804 containerd[1527]: 2025-10-13 05:50:39.349 [INFO][4639] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.72/26] IPv6=[] ContainerID="3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" HandleID="k8s-pod-network.3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" Workload="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0" Oct 13 05:50:39.447805 containerd[1527]: 2025-10-13 05:50:39.354 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" Namespace="kube-system" Pod="coredns-66bc5c9577-7lpb5" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"761616b9-6d45-4378-96fc-c3c6dd8d530b", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 49, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"", Pod:"coredns-66bc5c9577-7lpb5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7013cbd75bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:39.447805 containerd[1527]: 2025-10-13 05:50:39.354 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.72/32] ContainerID="3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" Namespace="kube-system" Pod="coredns-66bc5c9577-7lpb5" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0" Oct 13 05:50:39.447805 containerd[1527]: 2025-10-13 05:50:39.354 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7013cbd75bb 
ContainerID="3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" Namespace="kube-system" Pod="coredns-66bc5c9577-7lpb5" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0" Oct 13 05:50:39.447805 containerd[1527]: 2025-10-13 05:50:39.398 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" Namespace="kube-system" Pod="coredns-66bc5c9577-7lpb5" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0" Oct 13 05:50:39.447805 containerd[1527]: 2025-10-13 05:50:39.399 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" Namespace="kube-system" Pod="coredns-66bc5c9577-7lpb5" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"761616b9-6d45-4378-96fc-c3c6dd8d530b", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 49, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737", 
Pod:"coredns-66bc5c9577-7lpb5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7013cbd75bb", MAC:"a6:2b:8c:07:bb:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:39.448753 containerd[1527]: 2025-10-13 05:50:39.423 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" Namespace="kube-system" Pod="coredns-66bc5c9577-7lpb5" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-coredns--66bc5c9577--7lpb5-eth0" Oct 13 05:50:39.526744 systemd-networkd[1439]: cali8cb0c0b34dc: Link UP Oct 13 05:50:39.528734 systemd-networkd[1439]: cali8cb0c0b34dc: Gained carrier Oct 13 05:50:39.533077 systemd-networkd[1439]: cali7debb9fb4de: Gained IPv6LL Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:38.882 [INFO][4595] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:38.920 [INFO][4595] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0 calico-apiserver-f9868759c- calico-apiserver 625d1f05-914e-4b92-8eda-0f0088321193 854 0 2025-10-13 05:50:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f9868759c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-5-82d9fc1916 calico-apiserver-f9868759c-jzd69 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8cb0c0b34dc [] [] }} ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-jzd69" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-" Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:38.920 [INFO][4595] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-jzd69" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.189 [INFO][4638] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" HandleID="k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.189 [INFO][4638] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" HandleID="k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" 
Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-5-82d9fc1916", "pod":"calico-apiserver-f9868759c-jzd69", "timestamp":"2025-10-13 05:50:39.189533591 +0000 UTC"}, Hostname:"ci-4459.1.0-5-82d9fc1916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.189 [INFO][4638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.349 [INFO][4638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.350 [INFO][4638] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-5-82d9fc1916' Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.382 [INFO][4638] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.407 [INFO][4638] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.438 [INFO][4638] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.447 [INFO][4638] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.456 [INFO][4638] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 
13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.457 [INFO][4638] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.465 [INFO][4638] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83 Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.476 [INFO][4638] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.493 [INFO][4638] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.73/26] block=192.168.50.64/26 handle="k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.493 [INFO][4638] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.73/26] handle="k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.493 [INFO][4638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:50:39.594507 containerd[1527]: 2025-10-13 05:50:39.494 [INFO][4638] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.73/26] IPv6=[] ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" HandleID="k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" Oct 13 05:50:39.601705 containerd[1527]: 2025-10-13 05:50:39.517 [INFO][4595] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-jzd69" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0", GenerateName:"calico-apiserver-f9868759c-", Namespace:"calico-apiserver", SelfLink:"", UID:"625d1f05-914e-4b92-8eda-0f0088321193", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9868759c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"", Pod:"calico-apiserver-f9868759c-jzd69", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.50.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cb0c0b34dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:39.601705 containerd[1527]: 2025-10-13 05:50:39.520 [INFO][4595] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.73/32] ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-jzd69" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" Oct 13 05:50:39.601705 containerd[1527]: 2025-10-13 05:50:39.520 [INFO][4595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cb0c0b34dc ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-jzd69" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" Oct 13 05:50:39.601705 containerd[1527]: 2025-10-13 05:50:39.531 [INFO][4595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-jzd69" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" Oct 13 05:50:39.601705 containerd[1527]: 2025-10-13 05:50:39.536 [INFO][4595] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-jzd69" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0", GenerateName:"calico-apiserver-f9868759c-", Namespace:"calico-apiserver", SelfLink:"", UID:"625d1f05-914e-4b92-8eda-0f0088321193", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9868759c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83", Pod:"calico-apiserver-f9868759c-jzd69", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cb0c0b34dc", MAC:"ca:cc:7d:34:26:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:39.601705 containerd[1527]: 2025-10-13 05:50:39.566 [INFO][4595] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Namespace="calico-apiserver" Pod="calico-apiserver-f9868759c-jzd69" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" Oct 13 05:50:39.656666 containerd[1527]: time="2025-10-13T05:50:39.656583139Z" level=info 
msg="connecting to shim 3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737" address="unix:///run/containerd/s/013bca3f059d45df95a2d9f9ef1503067c1087b15c3dfa39d4619849fb52011c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:39.713858 containerd[1527]: time="2025-10-13T05:50:39.713794933Z" level=info msg="connecting to shim 63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" address="unix:///run/containerd/s/fe046e015ea094ecb8ba7b95444def53f2c5ed0c6fa85150a58338a9281c681c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:39.787746 systemd[1]: Started cri-containerd-3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737.scope - libcontainer container 3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737. Oct 13 05:50:39.837838 systemd[1]: Started cri-containerd-63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83.scope - libcontainer container 63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83. Oct 13 05:50:39.975798 containerd[1527]: time="2025-10-13T05:50:39.975715057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7lpb5,Uid:761616b9-6d45-4378-96fc-c3c6dd8d530b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737\"" Oct 13 05:50:40.022160 kubelet[2706]: E1013 05:50:40.021872 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:40.080103 containerd[1527]: time="2025-10-13T05:50:40.077344497Z" level=info msg="CreateContainer within sandbox \"3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:50:40.130988 containerd[1527]: time="2025-10-13T05:50:40.129176948Z" level=info msg="Container e6b36700fa0262a5abe30780cfe3600957c4a46627a5108b490b034db6893c34: CDI 
devices from CRI Config.CDIDevices: []" Oct 13 05:50:40.159446 containerd[1527]: time="2025-10-13T05:50:40.159382528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9868759c-jzd69,Uid:625d1f05-914e-4b92-8eda-0f0088321193,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\"" Oct 13 05:50:40.184063 containerd[1527]: time="2025-10-13T05:50:40.184010366Z" level=info msg="CreateContainer within sandbox \"3a154a78af3038d47c445e2dfb0579cfe4d4f9a99f5540935ba6550fa6a07737\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6b36700fa0262a5abe30780cfe3600957c4a46627a5108b490b034db6893c34\"" Oct 13 05:50:40.186474 containerd[1527]: time="2025-10-13T05:50:40.186298578Z" level=info msg="StartContainer for \"e6b36700fa0262a5abe30780cfe3600957c4a46627a5108b490b034db6893c34\"" Oct 13 05:50:40.188558 containerd[1527]: time="2025-10-13T05:50:40.187774390Z" level=info msg="connecting to shim e6b36700fa0262a5abe30780cfe3600957c4a46627a5108b490b034db6893c34" address="unix:///run/containerd/s/013bca3f059d45df95a2d9f9ef1503067c1087b15c3dfa39d4619849fb52011c" protocol=ttrpc version=3 Oct 13 05:50:40.281347 systemd[1]: Started cri-containerd-e6b36700fa0262a5abe30780cfe3600957c4a46627a5108b490b034db6893c34.scope - libcontainer container e6b36700fa0262a5abe30780cfe3600957c4a46627a5108b490b034db6893c34. 
Oct 13 05:50:40.298388 kubelet[2706]: E1013 05:50:40.293620 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:40.439002 containerd[1527]: time="2025-10-13T05:50:40.438656625Z" level=info msg="StartContainer for \"e6b36700fa0262a5abe30780cfe3600957c4a46627a5108b490b034db6893c34\" returns successfully" Oct 13 05:50:41.129199 systemd-networkd[1439]: cali7013cbd75bb: Gained IPv6LL Oct 13 05:50:41.310156 kubelet[2706]: E1013 05:50:41.310043 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:41.312141 kubelet[2706]: E1013 05:50:41.312110 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:41.364819 kubelet[2706]: I1013 05:50:41.364453 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7lpb5" podStartSLOduration=46.36443554 podStartE2EDuration="46.36443554s" podCreationTimestamp="2025-10-13 05:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:50:41.335350716 +0000 UTC m=+52.841061467" watchObservedRunningTime="2025-10-13 05:50:41.36443554 +0000 UTC m=+52.870146289" Oct 13 05:50:41.385339 systemd-networkd[1439]: cali8cb0c0b34dc: Gained IPv6LL Oct 13 05:50:41.533302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679304768.mount: Deactivated successfully. 
Oct 13 05:50:41.798994 systemd-networkd[1439]: vxlan.calico: Link UP Oct 13 05:50:41.799004 systemd-networkd[1439]: vxlan.calico: Gained carrier Oct 13 05:50:42.313919 kubelet[2706]: E1013 05:50:42.313867 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:42.592451 containerd[1527]: time="2025-10-13T05:50:42.592377743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:42.595116 containerd[1527]: time="2025-10-13T05:50:42.595051780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:50:42.602559 containerd[1527]: time="2025-10-13T05:50:42.601850876Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:42.605175 containerd[1527]: time="2025-10-13T05:50:42.605113660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:42.606037 containerd[1527]: time="2025-10-13T05:50:42.605963586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.127738949s" Oct 13 05:50:42.606261 containerd[1527]: time="2025-10-13T05:50:42.606234096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference 
\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:50:42.608374 containerd[1527]: time="2025-10-13T05:50:42.608325977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:50:42.615323 containerd[1527]: time="2025-10-13T05:50:42.615220557Z" level=info msg="CreateContainer within sandbox \"778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:50:42.625149 containerd[1527]: time="2025-10-13T05:50:42.624221251Z" level=info msg="Container 9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:42.646641 containerd[1527]: time="2025-10-13T05:50:42.646549693Z" level=info msg="CreateContainer within sandbox \"778ce3e5a8dbc33e2f0ff29c32ab0bccc71489916facf1ef1e1ed1b5567152c7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa\"" Oct 13 05:50:42.648166 containerd[1527]: time="2025-10-13T05:50:42.648124164Z" level=info msg="StartContainer for \"9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa\"" Oct 13 05:50:42.651152 containerd[1527]: time="2025-10-13T05:50:42.651092695Z" level=info msg="connecting to shim 9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa" address="unix:///run/containerd/s/63cab8375bfc65a266903f8d247317ccabfafd011eaa1eddecf40e0676e4a740" protocol=ttrpc version=3 Oct 13 05:50:42.694283 systemd[1]: Started cri-containerd-9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa.scope - libcontainer container 9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa. 
Oct 13 05:50:42.775914 containerd[1527]: time="2025-10-13T05:50:42.775772003Z" level=info msg="StartContainer for \"9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa\" returns successfully" Oct 13 05:50:43.319603 kubelet[2706]: E1013 05:50:43.319480 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:50:43.334202 kubelet[2706]: I1013 05:50:43.333727 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-854f97d977-8rn64" podStartSLOduration=24.988760157 podStartE2EDuration="32.333706753s" podCreationTimestamp="2025-10-13 05:50:11 +0000 UTC" firstStartedPulling="2025-10-13 05:50:35.26277766 +0000 UTC m=+46.768488390" lastFinishedPulling="2025-10-13 05:50:42.607724243 +0000 UTC m=+54.113434986" observedRunningTime="2025-10-13 05:50:43.332542127 +0000 UTC m=+54.838252875" watchObservedRunningTime="2025-10-13 05:50:43.333706753 +0000 UTC m=+54.839417501" Oct 13 05:50:43.497245 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL Oct 13 05:50:43.613892 containerd[1527]: time="2025-10-13T05:50:43.613817138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa\" id:\"1ebd1c00fd8a8d30aed690bb69ddd245a0560523d26317c918d70ffc896a6c1b\" pid:4991 exit_status:1 exited_at:{seconds:1760334643 nanos:590495240}" Oct 13 05:50:44.458719 containerd[1527]: time="2025-10-13T05:50:44.458574683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa\" id:\"da2125f694d9af454bd0882dae07f55f3013eaa6a6912acd588bfcab4db91848\" pid:5017 exit_status:1 exited_at:{seconds:1760334644 nanos:456934969}" Oct 13 05:50:45.838082 containerd[1527]: time="2025-10-13T05:50:45.837732455Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:45.839742 containerd[1527]: time="2025-10-13T05:50:45.839347398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:50:45.841436 containerd[1527]: time="2025-10-13T05:50:45.841380665Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:45.843856 containerd[1527]: time="2025-10-13T05:50:45.843764398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:45.845501 containerd[1527]: time="2025-10-13T05:50:45.845468102Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.237101721s" Oct 13 05:50:45.845730 containerd[1527]: time="2025-10-13T05:50:45.845626394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:50:45.847623 containerd[1527]: time="2025-10-13T05:50:45.847174310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:50:45.908738 containerd[1527]: time="2025-10-13T05:50:45.908693941Z" level=info msg="CreateContainer within sandbox \"4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:50:45.917783 containerd[1527]: time="2025-10-13T05:50:45.917673952Z" level=info msg="Container 4e0716120953b75976b3dbfe96e5aed5366f84ba3b2f45aba74e00f686d61fd4: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:45.929500 containerd[1527]: time="2025-10-13T05:50:45.929413214Z" level=info msg="CreateContainer within sandbox \"4c05b133c6d2cdecb004f7692706bc86215abe29d4b5595f617c126a018185bf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4e0716120953b75976b3dbfe96e5aed5366f84ba3b2f45aba74e00f686d61fd4\"" Oct 13 05:50:45.930487 containerd[1527]: time="2025-10-13T05:50:45.930332653Z" level=info msg="StartContainer for \"4e0716120953b75976b3dbfe96e5aed5366f84ba3b2f45aba74e00f686d61fd4\"" Oct 13 05:50:45.935019 containerd[1527]: time="2025-10-13T05:50:45.934962730Z" level=info msg="connecting to shim 4e0716120953b75976b3dbfe96e5aed5366f84ba3b2f45aba74e00f686d61fd4" address="unix:///run/containerd/s/2507fc2d5ec6b9ea2a401bb6c82bc17c64cd7741a5ff4bb136793638797260e3" protocol=ttrpc version=3 Oct 13 05:50:46.010556 systemd[1]: Started cri-containerd-4e0716120953b75976b3dbfe96e5aed5366f84ba3b2f45aba74e00f686d61fd4.scope - libcontainer container 4e0716120953b75976b3dbfe96e5aed5366f84ba3b2f45aba74e00f686d61fd4. 
Oct 13 05:50:46.091254 containerd[1527]: time="2025-10-13T05:50:46.091094177Z" level=info msg="StartContainer for \"4e0716120953b75976b3dbfe96e5aed5366f84ba3b2f45aba74e00f686d61fd4\" returns successfully" Oct 13 05:50:46.382251 kubelet[2706]: I1013 05:50:46.382036 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84f6bbddc4-4gjtm" podStartSLOduration=23.872574206 podStartE2EDuration="34.382016964s" podCreationTimestamp="2025-10-13 05:50:12 +0000 UTC" firstStartedPulling="2025-10-13 05:50:35.337367866 +0000 UTC m=+46.843078593" lastFinishedPulling="2025-10-13 05:50:45.846810623 +0000 UTC m=+57.352521351" observedRunningTime="2025-10-13 05:50:46.380912975 +0000 UTC m=+57.886623724" watchObservedRunningTime="2025-10-13 05:50:46.382016964 +0000 UTC m=+57.887727727" Oct 13 05:50:46.421705 containerd[1527]: time="2025-10-13T05:50:46.421564277Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e0716120953b75976b3dbfe96e5aed5366f84ba3b2f45aba74e00f686d61fd4\" id:\"8f1cbdc97113553e6b3f8ac699636f2e52e65b0adfac2426f394c49fc18ba378\" pid:5088 exited_at:{seconds:1760334646 nanos:420158212}" Oct 13 05:50:48.882715 systemd[1]: Started sshd@7-137.184.180.203:22-139.178.89.65:42628.service - OpenSSH per-connection server daemon (139.178.89.65:42628). Oct 13 05:50:49.170756 sshd[5111]: Accepted publickey for core from 139.178.89.65 port 42628 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:50:49.175624 sshd-session[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:49.186137 systemd-logind[1497]: New session 8 of user core. Oct 13 05:50:49.191429 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 13 05:50:49.963853 sshd[5117]: Connection closed by 139.178.89.65 port 42628 Oct 13 05:50:49.964262 sshd-session[5111]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:49.983128 systemd[1]: sshd@7-137.184.180.203:22-139.178.89.65:42628.service: Deactivated successfully. Oct 13 05:50:49.988756 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:50:49.991575 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:50:49.999119 systemd-logind[1497]: Removed session 8. Oct 13 05:50:50.416476 containerd[1527]: time="2025-10-13T05:50:50.416378331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:50.417900 containerd[1527]: time="2025-10-13T05:50:50.417174598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:50:50.418748 containerd[1527]: time="2025-10-13T05:50:50.418606870Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:50.421787 containerd[1527]: time="2025-10-13T05:50:50.421303349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:50.422198 containerd[1527]: time="2025-10-13T05:50:50.422162602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.574963013s" Oct 13 05:50:50.422198 containerd[1527]: 
time="2025-10-13T05:50:50.422199022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:50:50.425173 containerd[1527]: time="2025-10-13T05:50:50.424622877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:50:50.432120 containerd[1527]: time="2025-10-13T05:50:50.431465103Z" level=info msg="CreateContainer within sandbox \"ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:50:50.444709 containerd[1527]: time="2025-10-13T05:50:50.444352574Z" level=info msg="Container a1dee33129300fc8ddbf2004fbbf15e2546cbaf95d281c339b4644e4b5d56576: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:50.467057 containerd[1527]: time="2025-10-13T05:50:50.466944066Z" level=info msg="CreateContainer within sandbox \"ecbd909603b1f06cd6c137dd4bfe292c5f9ee6f81906eeaf4ddb8a0886ef4fed\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a1dee33129300fc8ddbf2004fbbf15e2546cbaf95d281c339b4644e4b5d56576\"" Oct 13 05:50:50.468253 containerd[1527]: time="2025-10-13T05:50:50.468204414Z" level=info msg="StartContainer for \"a1dee33129300fc8ddbf2004fbbf15e2546cbaf95d281c339b4644e4b5d56576\"" Oct 13 05:50:50.469912 containerd[1527]: time="2025-10-13T05:50:50.469870212Z" level=info msg="connecting to shim a1dee33129300fc8ddbf2004fbbf15e2546cbaf95d281c339b4644e4b5d56576" address="unix:///run/containerd/s/1b30dfff9da1e9b0b72b4e74d21aee016833ab355ddbb6a1f02174e3f823100b" protocol=ttrpc version=3 Oct 13 05:50:50.508277 systemd[1]: Started cri-containerd-a1dee33129300fc8ddbf2004fbbf15e2546cbaf95d281c339b4644e4b5d56576.scope - libcontainer container a1dee33129300fc8ddbf2004fbbf15e2546cbaf95d281c339b4644e4b5d56576. 
Oct 13 05:50:50.575338 containerd[1527]: time="2025-10-13T05:50:50.575286046Z" level=info msg="StartContainer for \"a1dee33129300fc8ddbf2004fbbf15e2546cbaf95d281c339b4644e4b5d56576\" returns successfully" Oct 13 05:50:50.825581 containerd[1527]: time="2025-10-13T05:50:50.825409233Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:50.826359 containerd[1527]: time="2025-10-13T05:50:50.826264793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:50:50.828498 containerd[1527]: time="2025-10-13T05:50:50.828378223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 403.70755ms" Oct 13 05:50:50.828498 containerd[1527]: time="2025-10-13T05:50:50.828417485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:50:50.829900 containerd[1527]: time="2025-10-13T05:50:50.829874719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:50:50.835559 containerd[1527]: time="2025-10-13T05:50:50.835502206Z" level=info msg="CreateContainer within sandbox \"5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:50:50.842876 containerd[1527]: time="2025-10-13T05:50:50.842151387Z" level=info msg="Container 71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:50.871614 containerd[1527]: 
time="2025-10-13T05:50:50.871558760Z" level=info msg="CreateContainer within sandbox \"5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db\"" Oct 13 05:50:50.888890 containerd[1527]: time="2025-10-13T05:50:50.888830438Z" level=info msg="StartContainer for \"71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db\"" Oct 13 05:50:50.893146 containerd[1527]: time="2025-10-13T05:50:50.893102124Z" level=info msg="connecting to shim 71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db" address="unix:///run/containerd/s/813a55f6d5f995e41d940b888c13c635324b5bec6d12e4664f3a9d3f9c6d8415" protocol=ttrpc version=3 Oct 13 05:50:50.931245 systemd[1]: Started cri-containerd-71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db.scope - libcontainer container 71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db. 
Oct 13 05:50:51.025373 containerd[1527]: time="2025-10-13T05:50:51.025330025Z" level=info msg="StartContainer for \"71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db\" returns successfully" Oct 13 05:50:51.395420 kubelet[2706]: I1013 05:50:51.395218 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cd79d768-qqxmr" podStartSLOduration=30.446650159 podStartE2EDuration="44.395099093s" podCreationTimestamp="2025-10-13 05:50:07 +0000 UTC" firstStartedPulling="2025-10-13 05:50:36.475914878 +0000 UTC m=+47.981625606" lastFinishedPulling="2025-10-13 05:50:50.424363793 +0000 UTC m=+61.930074540" observedRunningTime="2025-10-13 05:50:51.393395041 +0000 UTC m=+62.899105790" watchObservedRunningTime="2025-10-13 05:50:51.395099093 +0000 UTC m=+62.900809844" Oct 13 05:50:51.429043 kubelet[2706]: I1013 05:50:51.428946 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f9868759c-srwzw" podStartSLOduration=31.994871022 podStartE2EDuration="45.428925956s" podCreationTimestamp="2025-10-13 05:50:06 +0000 UTC" firstStartedPulling="2025-10-13 05:50:37.395390296 +0000 UTC m=+48.901101024" lastFinishedPulling="2025-10-13 05:50:50.829445218 +0000 UTC m=+62.335155958" observedRunningTime="2025-10-13 05:50:51.427393122 +0000 UTC m=+62.933103872" watchObservedRunningTime="2025-10-13 05:50:51.428925956 +0000 UTC m=+62.934636704" Oct 13 05:50:52.401573 kubelet[2706]: I1013 05:50:52.401515 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:50:52.402886 kubelet[2706]: I1013 05:50:52.402274 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:50:52.877342 containerd[1527]: time="2025-10-13T05:50:52.877262608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:52.878768 containerd[1527]: 
time="2025-10-13T05:50:52.878725197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:50:52.880023 containerd[1527]: time="2025-10-13T05:50:52.879947019Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:52.885756 containerd[1527]: time="2025-10-13T05:50:52.885676395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:52.887037 containerd[1527]: time="2025-10-13T05:50:52.886490765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.056584322s" Oct 13 05:50:52.887037 containerd[1527]: time="2025-10-13T05:50:52.886533888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:50:52.888616 containerd[1527]: time="2025-10-13T05:50:52.888433546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:50:52.895865 containerd[1527]: time="2025-10-13T05:50:52.895242217Z" level=info msg="CreateContainer within sandbox \"19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:50:52.939342 containerd[1527]: time="2025-10-13T05:50:52.939292883Z" level=info msg="Container 218fed1db2c5f35775c5552c7388e861a6c7cdf2155512b9c5757a8b1ec691e5: CDI devices from CRI Config.CDIDevices: []" Oct 13 
05:50:52.945963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535386681.mount: Deactivated successfully. Oct 13 05:50:52.986942 containerd[1527]: time="2025-10-13T05:50:52.986739076Z" level=info msg="CreateContainer within sandbox \"19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"218fed1db2c5f35775c5552c7388e861a6c7cdf2155512b9c5757a8b1ec691e5\"" Oct 13 05:50:52.988603 containerd[1527]: time="2025-10-13T05:50:52.988407871Z" level=info msg="StartContainer for \"218fed1db2c5f35775c5552c7388e861a6c7cdf2155512b9c5757a8b1ec691e5\"" Oct 13 05:50:52.994092 containerd[1527]: time="2025-10-13T05:50:52.994009041Z" level=info msg="connecting to shim 218fed1db2c5f35775c5552c7388e861a6c7cdf2155512b9c5757a8b1ec691e5" address="unix:///run/containerd/s/ab60e2fea7f65d3b6b89719d2f986c1c72ec4d1a87de811713906c04ec6e6590" protocol=ttrpc version=3 Oct 13 05:50:53.026232 systemd[1]: Started cri-containerd-218fed1db2c5f35775c5552c7388e861a6c7cdf2155512b9c5757a8b1ec691e5.scope - libcontainer container 218fed1db2c5f35775c5552c7388e861a6c7cdf2155512b9c5757a8b1ec691e5. 
Oct 13 05:50:53.082072 containerd[1527]: time="2025-10-13T05:50:53.081636463Z" level=info msg="StartContainer for \"218fed1db2c5f35775c5552c7388e861a6c7cdf2155512b9c5757a8b1ec691e5\" returns successfully" Oct 13 05:50:53.260354 containerd[1527]: time="2025-10-13T05:50:53.259441045Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:53.260480 containerd[1527]: time="2025-10-13T05:50:53.260411662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:50:53.262512 containerd[1527]: time="2025-10-13T05:50:53.262475113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 374.002973ms" Oct 13 05:50:53.262689 containerd[1527]: time="2025-10-13T05:50:53.262671680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:50:53.265071 containerd[1527]: time="2025-10-13T05:50:53.264632075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:50:53.268788 containerd[1527]: time="2025-10-13T05:50:53.268754761Z" level=info msg="CreateContainer within sandbox \"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:50:53.276996 containerd[1527]: time="2025-10-13T05:50:53.275170307Z" level=info msg="Container 2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:53.288491 
containerd[1527]: time="2025-10-13T05:50:53.288413996Z" level=info msg="CreateContainer within sandbox \"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\"" Oct 13 05:50:53.289729 containerd[1527]: time="2025-10-13T05:50:53.289687356Z" level=info msg="StartContainer for \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\"" Oct 13 05:50:53.292003 containerd[1527]: time="2025-10-13T05:50:53.291931410Z" level=info msg="connecting to shim 2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722" address="unix:///run/containerd/s/fe046e015ea094ecb8ba7b95444def53f2c5ed0c6fa85150a58338a9281c681c" protocol=ttrpc version=3 Oct 13 05:50:53.321269 systemd[1]: Started cri-containerd-2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722.scope - libcontainer container 2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722. 
Oct 13 05:50:53.432246 containerd[1527]: time="2025-10-13T05:50:53.432160520Z" level=info msg="StartContainer for \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\" returns successfully" Oct 13 05:50:54.452653 kubelet[2706]: I1013 05:50:54.452584 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f9868759c-jzd69" podStartSLOduration=35.385884219 podStartE2EDuration="48.452567814s" podCreationTimestamp="2025-10-13 05:50:06 +0000 UTC" firstStartedPulling="2025-10-13 05:50:40.197143952 +0000 UTC m=+51.702854680" lastFinishedPulling="2025-10-13 05:50:53.263827547 +0000 UTC m=+64.769538275" observedRunningTime="2025-10-13 05:50:54.452196947 +0000 UTC m=+65.957907695" watchObservedRunningTime="2025-10-13 05:50:54.452567814 +0000 UTC m=+65.958278563" Oct 13 05:50:54.996825 systemd[1]: Started sshd@8-137.184.180.203:22-139.178.89.65:45186.service - OpenSSH per-connection server daemon (139.178.89.65:45186). Oct 13 05:50:55.235015 sshd[5289]: Accepted publickey for core from 139.178.89.65 port 45186 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:50:55.242596 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:55.256728 systemd-logind[1497]: New session 9 of user core. Oct 13 05:50:55.263252 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 05:50:56.139390 sshd[5295]: Connection closed by 139.178.89.65 port 45186 Oct 13 05:50:56.140489 sshd-session[5289]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:56.153113 systemd[1]: sshd@8-137.184.180.203:22-139.178.89.65:45186.service: Deactivated successfully. Oct 13 05:50:56.164378 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:50:56.167867 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:50:56.173469 systemd-logind[1497]: Removed session 9. 
Oct 13 05:50:56.370109 containerd[1527]: time="2025-10-13T05:50:56.370044914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:56.373701 containerd[1527]: time="2025-10-13T05:50:56.372073245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:50:56.373701 containerd[1527]: time="2025-10-13T05:50:56.372227626Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:56.375787 containerd[1527]: time="2025-10-13T05:50:56.375158969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:50:56.376869 containerd[1527]: time="2025-10-13T05:50:56.376832947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.112163922s" Oct 13 05:50:56.377096 containerd[1527]: time="2025-10-13T05:50:56.377036145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:50:56.439615 containerd[1527]: time="2025-10-13T05:50:56.439485170Z" level=info msg="CreateContainer within sandbox \"19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:50:56.459228 containerd[1527]: time="2025-10-13T05:50:56.458113794Z" level=info msg="Container 042aa35ea9a5bdc4d93c0a14603a8e01ab7941d31761ecdbc8bfd051207f1341: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:56.485032 containerd[1527]: time="2025-10-13T05:50:56.484938676Z" level=info msg="CreateContainer within sandbox \"19cfdf36d43779c82ce7ed74c46464c24050161287e6514bbefe50a4e88386d2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"042aa35ea9a5bdc4d93c0a14603a8e01ab7941d31761ecdbc8bfd051207f1341\"" Oct 13 05:50:56.488831 containerd[1527]: time="2025-10-13T05:50:56.488772001Z" level=info msg="StartContainer for \"042aa35ea9a5bdc4d93c0a14603a8e01ab7941d31761ecdbc8bfd051207f1341\"" Oct 13 05:50:56.503290 containerd[1527]: time="2025-10-13T05:50:56.503199374Z" level=info msg="connecting to shim 042aa35ea9a5bdc4d93c0a14603a8e01ab7941d31761ecdbc8bfd051207f1341" address="unix:///run/containerd/s/ab60e2fea7f65d3b6b89719d2f986c1c72ec4d1a87de811713906c04ec6e6590" protocol=ttrpc version=3 Oct 13 05:50:56.534462 containerd[1527]: time="2025-10-13T05:50:56.534395542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa\" id:\"fe8e70de4159655c2a8f46a88ef00baec7d95df61f99a82a03a754c21e561253\" pid:5323 exited_at:{seconds:1760334656 nanos:504723075}" Oct 13 05:50:56.556808 systemd[1]: Started cri-containerd-042aa35ea9a5bdc4d93c0a14603a8e01ab7941d31761ecdbc8bfd051207f1341.scope - libcontainer container 042aa35ea9a5bdc4d93c0a14603a8e01ab7941d31761ecdbc8bfd051207f1341. 
Oct 13 05:50:56.650595 containerd[1527]: time="2025-10-13T05:50:56.647810231Z" level=info msg="StartContainer for \"042aa35ea9a5bdc4d93c0a14603a8e01ab7941d31761ecdbc8bfd051207f1341\" returns successfully" Oct 13 05:50:56.923702 kubelet[2706]: I1013 05:50:56.899330 2706 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:50:56.927529 kubelet[2706]: I1013 05:50:56.927408 2706 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:50:57.495308 kubelet[2706]: I1013 05:50:57.495116 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rp8ql" podStartSLOduration=27.451470291 podStartE2EDuration="45.488572265s" podCreationTimestamp="2025-10-13 05:50:12 +0000 UTC" firstStartedPulling="2025-10-13 05:50:38.344172352 +0000 UTC m=+49.849883093" lastFinishedPulling="2025-10-13 05:50:56.381274334 +0000 UTC m=+67.886985067" observedRunningTime="2025-10-13 05:50:57.484784149 +0000 UTC m=+68.990494899" watchObservedRunningTime="2025-10-13 05:50:57.488572265 +0000 UTC m=+68.994283015" Oct 13 05:50:57.676016 kubelet[2706]: I1013 05:50:57.675642 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:50:57.779235 kubelet[2706]: I1013 05:50:57.779059 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:50:57.940415 containerd[1527]: time="2025-10-13T05:50:57.940100121Z" level=info msg="StopContainer for \"71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db\" with timeout 30 (s)" Oct 13 05:50:57.949842 containerd[1527]: time="2025-10-13T05:50:57.949352457Z" level=info msg="Stop container \"71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db\" with signal terminated" Oct 13 05:50:58.005809 systemd[1]: Created slice 
kubepods-besteffort-poded2ebcf0_c2ce_4dc8_b03d_3f26a1758709.slice - libcontainer container kubepods-besteffort-poded2ebcf0_c2ce_4dc8_b03d_3f26a1758709.slice. Oct 13 05:50:58.060373 systemd[1]: cri-containerd-71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db.scope: Deactivated successfully. Oct 13 05:50:58.072865 containerd[1527]: time="2025-10-13T05:50:58.072790504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db\" id:\"71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db\" pid:5184 exit_status:1 exited_at:{seconds:1760334658 nanos:71779419}" Oct 13 05:50:58.084638 containerd[1527]: time="2025-10-13T05:50:58.084555004Z" level=info msg="received exit event container_id:\"71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db\" id:\"71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db\" pid:5184 exit_status:1 exited_at:{seconds:1760334658 nanos:71779419}" Oct 13 05:50:58.113701 kubelet[2706]: I1013 05:50:58.113628 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ed2ebcf0-c2ce-4dc8-b03d-3f26a1758709-calico-apiserver-certs\") pod \"calico-apiserver-6cd79d768-ws9wf\" (UID: \"ed2ebcf0-c2ce-4dc8-b03d-3f26a1758709\") " pod="calico-apiserver/calico-apiserver-6cd79d768-ws9wf" Oct 13 05:50:58.114728 kubelet[2706]: I1013 05:50:58.114212 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585mb\" (UniqueName: \"kubernetes.io/projected/ed2ebcf0-c2ce-4dc8-b03d-3f26a1758709-kube-api-access-585mb\") pod \"calico-apiserver-6cd79d768-ws9wf\" (UID: \"ed2ebcf0-c2ce-4dc8-b03d-3f26a1758709\") " pod="calico-apiserver/calico-apiserver-6cd79d768-ws9wf" Oct 13 05:50:58.133898 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db-rootfs.mount: Deactivated successfully. Oct 13 05:50:58.168727 containerd[1527]: time="2025-10-13T05:50:58.168662494Z" level=info msg="StopContainer for \"71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db\" returns successfully" Oct 13 05:50:58.177474 containerd[1527]: time="2025-10-13T05:50:58.177413027Z" level=info msg="StopPodSandbox for \"5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d\"" Oct 13 05:50:58.199063 containerd[1527]: time="2025-10-13T05:50:58.198996699Z" level=info msg="Container to stop \"71541431f31344bdaadc68dbe90945e17e2f3a50d0a20578ead3b17a5d56e7db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:50:58.227286 systemd[1]: cri-containerd-5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d.scope: Deactivated successfully. Oct 13 05:50:58.233661 containerd[1527]: time="2025-10-13T05:50:58.233546744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d\" id:\"5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d\" pid:4358 exit_status:137 exited_at:{seconds:1760334658 nanos:231509014}" Oct 13 05:50:58.302506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d-rootfs.mount: Deactivated successfully. Oct 13 05:50:58.338415 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d-shm.mount: Deactivated successfully. 
Oct 13 05:50:58.348246 containerd[1527]: time="2025-10-13T05:50:58.348199766Z" level=info msg="shim disconnected" id=5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d namespace=k8s.io Oct 13 05:50:58.348701 containerd[1527]: time="2025-10-13T05:50:58.348450754Z" level=warning msg="cleaning up after shim disconnected" id=5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d namespace=k8s.io Oct 13 05:50:58.348701 containerd[1527]: time="2025-10-13T05:50:58.348466915Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 05:50:58.366842 containerd[1527]: time="2025-10-13T05:50:58.366228252Z" level=info msg="received exit event sandbox_id:\"5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d\" exit_status:137 exited_at:{seconds:1760334658 nanos:231509014}" Oct 13 05:50:58.367490 containerd[1527]: time="2025-10-13T05:50:58.367181808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd79d768-ws9wf,Uid:ed2ebcf0-c2ce-4dc8-b03d-3f26a1758709,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:50:58.515196 kubelet[2706]: I1013 05:50:58.513115 2706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Oct 13 05:50:58.804319 systemd-networkd[1439]: calie28215c6f81: Link DOWN Oct 13 05:50:58.804331 systemd-networkd[1439]: calie28215c6f81: Lost carrier Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:58.764 [INFO][5441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:58.765 [INFO][5441] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" iface="eth0" netns="/var/run/netns/cni-51e7ff0d-e717-bed2-8ae9-0de0523a3871" Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:58.767 [INFO][5441] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" iface="eth0" netns="/var/run/netns/cni-51e7ff0d-e717-bed2-8ae9-0de0523a3871" Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:58.829 [INFO][5441] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" after=64.057184ms iface="eth0" netns="/var/run/netns/cni-51e7ff0d-e717-bed2-8ae9-0de0523a3871" Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:58.829 [INFO][5441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:58.829 [INFO][5441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:59.040 [INFO][5467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" HandleID="k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:59.044 [INFO][5467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:59.045 [INFO][5467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:59.109 [INFO][5467] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" HandleID="k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:59.109 [INFO][5467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" HandleID="k8s-pod-network.5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--srwzw-eth0" Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:59.111 [INFO][5467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:50:59.123647 containerd[1527]: 2025-10-13 05:50:59.118 [INFO][5441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d" Oct 13 05:50:59.135395 containerd[1527]: time="2025-10-13T05:50:59.135207962Z" level=info msg="TearDown network for sandbox \"5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d\" successfully" Oct 13 05:50:59.135617 containerd[1527]: time="2025-10-13T05:50:59.135587298Z" level=info msg="StopPodSandbox for \"5f11a6eefa14942132047a6eddacc68ddba259811a9b85e7a43328fd78d0f78d\" returns successfully" Oct 13 05:50:59.138689 systemd[1]: run-netns-cni\x2d51e7ff0d\x2de717\x2dbed2\x2d8ae9\x2d0de0523a3871.mount: Deactivated successfully. 
Oct 13 05:50:59.197914 systemd-networkd[1439]: calieccd75b5741: Link UP Oct 13 05:50:59.200358 systemd-networkd[1439]: calieccd75b5741: Gained carrier Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:58.734 [INFO][5447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0 calico-apiserver-6cd79d768- calico-apiserver ed2ebcf0-c2ce-4dc8-b03d-3f26a1758709 1218 0 2025-10-13 05:50:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd79d768 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-5-82d9fc1916 calico-apiserver-6cd79d768-ws9wf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieccd75b5741 [] [] }} ContainerID="655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-ws9wf" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-" Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:58.738 [INFO][5447] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-ws9wf" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0" Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.050 [INFO][5464] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" HandleID="k8s-pod-network.655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0" Oct 13 05:50:59.227005 
containerd[1527]: 2025-10-13 05:50:59.052 [INFO][5464] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" HandleID="k8s-pod-network.655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031c2b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-5-82d9fc1916", "pod":"calico-apiserver-6cd79d768-ws9wf", "timestamp":"2025-10-13 05:50:59.050860152 +0000 UTC"}, Hostname:"ci-4459.1.0-5-82d9fc1916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.052 [INFO][5464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.111 [INFO][5464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.111 [INFO][5464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-5-82d9fc1916' Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.124 [INFO][5464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.141 [INFO][5464] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.150 [INFO][5464] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.157 [INFO][5464] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.164 [INFO][5464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.164 [INFO][5464] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.167 [INFO][5464] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.173 [INFO][5464] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.184 [INFO][5464] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.50.74/26] block=192.168.50.64/26 handle="k8s-pod-network.655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.184 [INFO][5464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.74/26] handle="k8s-pod-network.655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" host="ci-4459.1.0-5-82d9fc1916" Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.185 [INFO][5464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:50:59.227005 containerd[1527]: 2025-10-13 05:50:59.185 [INFO][5464] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.74/26] IPv6=[] ContainerID="655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" HandleID="k8s-pod-network.655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0" Oct 13 05:50:59.232684 containerd[1527]: 2025-10-13 05:50:59.191 [INFO][5447] cni-plugin/k8s.go 418: Populated endpoint ContainerID="655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-ws9wf" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0", GenerateName:"calico-apiserver-6cd79d768-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed2ebcf0-c2ce-4dc8-b03d-3f26a1758709", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6cd79d768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"", Pod:"calico-apiserver-6cd79d768-ws9wf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieccd75b5741", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:59.232684 containerd[1527]: 2025-10-13 05:50:59.191 [INFO][5447] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.74/32] ContainerID="655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-ws9wf" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0" Oct 13 05:50:59.232684 containerd[1527]: 2025-10-13 05:50:59.192 [INFO][5447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieccd75b5741 ContainerID="655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-ws9wf" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0" Oct 13 05:50:59.232684 containerd[1527]: 2025-10-13 05:50:59.198 [INFO][5447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-ws9wf" 
WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0" Oct 13 05:50:59.232684 containerd[1527]: 2025-10-13 05:50:59.198 [INFO][5447] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-ws9wf" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0", GenerateName:"calico-apiserver-6cd79d768-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed2ebcf0-c2ce-4dc8-b03d-3f26a1758709", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 50, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd79d768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-5-82d9fc1916", ContainerID:"655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b", Pod:"calico-apiserver-6cd79d768-ws9wf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieccd75b5741", MAC:"02:ac:54:14:62:43", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:50:59.232684 containerd[1527]: 2025-10-13 05:50:59.213 [INFO][5447] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd79d768-ws9wf" WorkloadEndpoint="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--6cd79d768--ws9wf-eth0" Oct 13 05:50:59.278292 containerd[1527]: time="2025-10-13T05:50:59.278047776Z" level=info msg="connecting to shim 655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b" address="unix:///run/containerd/s/b027de935d6443c9ea99820b662d81a71c22558bc042add0fc0fff8a8a03f39b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:50:59.334344 systemd[1]: Started cri-containerd-655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b.scope - libcontainer container 655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b. Oct 13 05:50:59.356028 kubelet[2706]: I1013 05:50:59.354903 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f96336ff-8765-4a8c-9987-b1ec94233d7e-calico-apiserver-certs\") pod \"f96336ff-8765-4a8c-9987-b1ec94233d7e\" (UID: \"f96336ff-8765-4a8c-9987-b1ec94233d7e\") " Oct 13 05:50:59.359009 kubelet[2706]: I1013 05:50:59.358605 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc4dc\" (UniqueName: \"kubernetes.io/projected/f96336ff-8765-4a8c-9987-b1ec94233d7e-kube-api-access-lc4dc\") pod \"f96336ff-8765-4a8c-9987-b1ec94233d7e\" (UID: \"f96336ff-8765-4a8c-9987-b1ec94233d7e\") " Oct 13 05:50:59.373646 systemd[1]: var-lib-kubelet-pods-f96336ff\x2d8765\x2d4a8c\x2d9987\x2db1ec94233d7e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlc4dc.mount: Deactivated successfully. 
Oct 13 05:50:59.378096 kubelet[2706]: I1013 05:50:59.373066 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f96336ff-8765-4a8c-9987-b1ec94233d7e-kube-api-access-lc4dc" (OuterVolumeSpecName: "kube-api-access-lc4dc") pod "f96336ff-8765-4a8c-9987-b1ec94233d7e" (UID: "f96336ff-8765-4a8c-9987-b1ec94233d7e"). InnerVolumeSpecName "kube-api-access-lc4dc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:50:59.380857 kubelet[2706]: I1013 05:50:59.380805 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f96336ff-8765-4a8c-9987-b1ec94233d7e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "f96336ff-8765-4a8c-9987-b1ec94233d7e" (UID: "f96336ff-8765-4a8c-9987-b1ec94233d7e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:50:59.381877 systemd[1]: var-lib-kubelet-pods-f96336ff\x2d8765\x2d4a8c\x2d9987\x2db1ec94233d7e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Oct 13 05:50:59.423454 containerd[1527]: time="2025-10-13T05:50:59.423386174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd79d768-ws9wf,Uid:ed2ebcf0-c2ce-4dc8-b03d-3f26a1758709,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b\"" Oct 13 05:50:59.435351 containerd[1527]: time="2025-10-13T05:50:59.435298704Z" level=info msg="CreateContainer within sandbox \"655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:50:59.450112 containerd[1527]: time="2025-10-13T05:50:59.450064949Z" level=info msg="Container 5b707ed27174555a826f71f5e0eaeab7bb1ff465e8b7cdd079b1c5ee2be97906: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:50:59.459111 kubelet[2706]: I1013 05:50:59.459044 2706 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f96336ff-8765-4a8c-9987-b1ec94233d7e-calico-apiserver-certs\") on node \"ci-4459.1.0-5-82d9fc1916\" DevicePath \"\"" Oct 13 05:50:59.459111 kubelet[2706]: I1013 05:50:59.459079 2706 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc4dc\" (UniqueName: \"kubernetes.io/projected/f96336ff-8765-4a8c-9987-b1ec94233d7e-kube-api-access-lc4dc\") on node \"ci-4459.1.0-5-82d9fc1916\" DevicePath \"\"" Oct 13 05:50:59.462044 containerd[1527]: time="2025-10-13T05:50:59.461241356Z" level=info msg="CreateContainer within sandbox \"655adb96ecd919ca202aacc48afea09584088d919933bfd4af29f7c5a44f4d8b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b707ed27174555a826f71f5e0eaeab7bb1ff465e8b7cdd079b1c5ee2be97906\"" Oct 13 05:50:59.463259 containerd[1527]: time="2025-10-13T05:50:59.463221189Z" level=info msg="StartContainer for \"5b707ed27174555a826f71f5e0eaeab7bb1ff465e8b7cdd079b1c5ee2be97906\"" Oct 13 05:50:59.465360 containerd[1527]: 
time="2025-10-13T05:50:59.465321874Z" level=info msg="connecting to shim 5b707ed27174555a826f71f5e0eaeab7bb1ff465e8b7cdd079b1c5ee2be97906" address="unix:///run/containerd/s/b027de935d6443c9ea99820b662d81a71c22558bc042add0fc0fff8a8a03f39b" protocol=ttrpc version=3 Oct 13 05:50:59.510539 systemd[1]: Started cri-containerd-5b707ed27174555a826f71f5e0eaeab7bb1ff465e8b7cdd079b1c5ee2be97906.scope - libcontainer container 5b707ed27174555a826f71f5e0eaeab7bb1ff465e8b7cdd079b1c5ee2be97906. Oct 13 05:50:59.585666 systemd[1]: Removed slice kubepods-besteffort-podf96336ff_8765_4a8c_9987_b1ec94233d7e.slice - libcontainer container kubepods-besteffort-podf96336ff_8765_4a8c_9987_b1ec94233d7e.slice. Oct 13 05:50:59.644690 containerd[1527]: time="2025-10-13T05:50:59.644162424Z" level=info msg="StartContainer for \"5b707ed27174555a826f71f5e0eaeab7bb1ff465e8b7cdd079b1c5ee2be97906\" returns successfully" Oct 13 05:50:59.680460 kubelet[2706]: E1013 05:50:59.680326 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:51:00.592509 kubelet[2706]: I1013 05:51:00.590709 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cd79d768-ws9wf" podStartSLOduration=3.5842397200000002 podStartE2EDuration="3.58423972s" podCreationTimestamp="2025-10-13 05:50:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:51:00.584104869 +0000 UTC m=+72.089815617" watchObservedRunningTime="2025-10-13 05:51:00.58423972 +0000 UTC m=+72.089950469" Oct 13 05:51:00.650831 systemd-networkd[1439]: calieccd75b5741: Gained IPv6LL Oct 13 05:51:00.686615 kubelet[2706]: I1013 05:51:00.686550 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f96336ff-8765-4a8c-9987-b1ec94233d7e" 
path="/var/lib/kubelet/pods/f96336ff-8765-4a8c-9987-b1ec94233d7e/volumes" Oct 13 05:51:01.164534 systemd[1]: Started sshd@9-137.184.180.203:22-139.178.89.65:45188.service - OpenSSH per-connection server daemon (139.178.89.65:45188). Oct 13 05:51:01.352729 sshd[5585]: Accepted publickey for core from 139.178.89.65 port 45188 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:51:01.356247 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:01.365030 systemd-logind[1497]: New session 10 of user core. Oct 13 05:51:01.370410 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:51:01.408851 containerd[1527]: time="2025-10-13T05:51:01.408791823Z" level=info msg="StopContainer for \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\" with timeout 30 (s)" Oct 13 05:51:01.411349 containerd[1527]: time="2025-10-13T05:51:01.411186027Z" level=info msg="Stop container \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\" with signal terminated" Oct 13 05:51:01.637821 systemd[1]: cri-containerd-2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722.scope: Deactivated successfully. 
Oct 13 05:51:01.652009 containerd[1527]: time="2025-10-13T05:51:01.651753198Z" level=info msg="received exit event container_id:\"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\" id:\"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\" pid:5263 exit_status:1 exited_at:{seconds:1760334661 nanos:650802307}" Oct 13 05:51:01.653861 containerd[1527]: time="2025-10-13T05:51:01.651784633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\" id:\"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\" pid:5263 exit_status:1 exited_at:{seconds:1760334661 nanos:650802307}" Oct 13 05:51:01.779394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722-rootfs.mount: Deactivated successfully. Oct 13 05:51:01.814992 containerd[1527]: time="2025-10-13T05:51:01.813797562Z" level=info msg="StopContainer for \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\" returns successfully" Oct 13 05:51:01.818575 containerd[1527]: time="2025-10-13T05:51:01.817937770Z" level=info msg="StopPodSandbox for \"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\"" Oct 13 05:51:01.818575 containerd[1527]: time="2025-10-13T05:51:01.818520467Z" level=info msg="Container to stop \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:51:01.838015 systemd[1]: cri-containerd-63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83.scope: Deactivated successfully. 
Oct 13 05:51:01.849644 containerd[1527]: time="2025-10-13T05:51:01.849592433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\" id:\"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\" pid:4759 exit_status:137 exited_at:{seconds:1760334661 nanos:849209202}" Oct 13 05:51:01.923239 containerd[1527]: time="2025-10-13T05:51:01.922058881Z" level=info msg="shim disconnected" id=63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83 namespace=k8s.io Oct 13 05:51:01.923239 containerd[1527]: time="2025-10-13T05:51:01.922100686Z" level=warning msg="cleaning up after shim disconnected" id=63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83 namespace=k8s.io Oct 13 05:51:01.923239 containerd[1527]: time="2025-10-13T05:51:01.922108839Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 05:51:01.926448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83-rootfs.mount: Deactivated successfully. 
Oct 13 05:51:01.976330 containerd[1527]: time="2025-10-13T05:51:01.976219554Z" level=info msg="received exit event sandbox_id:\"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\" exit_status:137 exited_at:{seconds:1760334661 nanos:849209202}" Oct 13 05:51:01.977032 containerd[1527]: time="2025-10-13T05:51:01.976913822Z" level=error msg="Failed to handle event container_id:\"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\" id:\"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\" pid:4759 exit_status:137 exited_at:{seconds:1760334661 nanos:849209202} for 63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Oct 13 05:51:01.990428 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83-shm.mount: Deactivated successfully. Oct 13 05:51:02.145108 systemd-networkd[1439]: cali8cb0c0b34dc: Link DOWN Oct 13 05:51:02.145729 systemd-networkd[1439]: cali8cb0c0b34dc: Lost carrier Oct 13 05:51:02.288207 sshd[5589]: Connection closed by 139.178.89.65 port 45188 Oct 13 05:51:02.286880 sshd-session[5585]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:02.311660 systemd[1]: sshd@9-137.184.180.203:22-139.178.89.65:45188.service: Deactivated successfully. Oct 13 05:51:02.316432 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:51:02.322475 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:51:02.336909 systemd[1]: Started sshd@10-137.184.180.203:22-139.178.89.65:53356.service - OpenSSH per-connection server daemon (139.178.89.65:53356). Oct 13 05:51:02.343859 systemd-logind[1497]: Removed session 10. 
Oct 13 05:51:02.511123 sshd[5705]: Accepted publickey for core from 139.178.89.65 port 53356 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.107 [INFO][5667] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.118 [INFO][5667] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" iface="eth0" netns="/var/run/netns/cni-7551f5fc-9bb7-ae55-dfeb-633584793e1d" Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.136 [INFO][5667] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" iface="eth0" netns="/var/run/netns/cni-7551f5fc-9bb7-ae55-dfeb-633584793e1d" Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.156 [INFO][5667] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" after=31.901736ms iface="eth0" netns="/var/run/netns/cni-7551f5fc-9bb7-ae55-dfeb-633584793e1d" Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.159 [INFO][5667] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.159 [INFO][5667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.364 [INFO][5679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" HandleID="k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.365 [INFO][5679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.365 [INFO][5679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.488 [INFO][5679] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" HandleID="k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.490 [INFO][5679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" HandleID="k8s-pod-network.63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Workload="ci--4459.1.0--5--82d9fc1916-k8s-calico--apiserver--f9868759c--jzd69-eth0" Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.495 [INFO][5679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:51:02.517767 containerd[1527]: 2025-10-13 05:51:02.508 [INFO][5667] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83" Oct 13 05:51:02.523580 containerd[1527]: time="2025-10-13T05:51:02.519157389Z" level=info msg="TearDown network for sandbox \"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\" successfully" Oct 13 05:51:02.523580 containerd[1527]: time="2025-10-13T05:51:02.519329853Z" level=info msg="StopPodSandbox for \"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\" returns successfully" Oct 13 05:51:02.522665 sshd-session[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:02.533256 systemd[1]: run-netns-cni\x2d7551f5fc\x2d9bb7\x2dae55\x2ddfeb\x2d633584793e1d.mount: Deactivated successfully. Oct 13 05:51:02.555501 systemd-logind[1497]: New session 11 of user core. Oct 13 05:51:02.562879 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 13 05:51:02.602314 kubelet[2706]: I1013 05:51:02.601794 2706 scope.go:117] "RemoveContainer" containerID="2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722" Oct 13 05:51:02.607915 kubelet[2706]: I1013 05:51:02.607824 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/625d1f05-914e-4b92-8eda-0f0088321193-calico-apiserver-certs\") pod \"625d1f05-914e-4b92-8eda-0f0088321193\" (UID: \"625d1f05-914e-4b92-8eda-0f0088321193\") " Oct 13 05:51:02.607915 kubelet[2706]: I1013 05:51:02.607912 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5sdm\" (UniqueName: \"kubernetes.io/projected/625d1f05-914e-4b92-8eda-0f0088321193-kube-api-access-p5sdm\") pod \"625d1f05-914e-4b92-8eda-0f0088321193\" (UID: \"625d1f05-914e-4b92-8eda-0f0088321193\") " Oct 13 05:51:02.626574 containerd[1527]: time="2025-10-13T05:51:02.626407405Z" level=info msg="RemoveContainer for \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\"" Oct 13 05:51:02.649106 kubelet[2706]: I1013 05:51:02.647642 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/625d1f05-914e-4b92-8eda-0f0088321193-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "625d1f05-914e-4b92-8eda-0f0088321193" (UID: "625d1f05-914e-4b92-8eda-0f0088321193"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:51:02.652520 systemd[1]: var-lib-kubelet-pods-625d1f05\x2d914e\x2d4b92\x2d8eda\x2d0f0088321193-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Oct 13 05:51:02.668628 kubelet[2706]: I1013 05:51:02.668503 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/625d1f05-914e-4b92-8eda-0f0088321193-kube-api-access-p5sdm" (OuterVolumeSpecName: "kube-api-access-p5sdm") pod "625d1f05-914e-4b92-8eda-0f0088321193" (UID: "625d1f05-914e-4b92-8eda-0f0088321193"). InnerVolumeSpecName "kube-api-access-p5sdm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:51:02.689162 containerd[1527]: time="2025-10-13T05:51:02.688799911Z" level=info msg="RemoveContainer for \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\" returns successfully" Oct 13 05:51:02.692733 kubelet[2706]: I1013 05:51:02.691270 2706 scope.go:117] "RemoveContainer" containerID="2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722" Oct 13 05:51:02.702552 containerd[1527]: time="2025-10-13T05:51:02.694362244Z" level=error msg="ContainerStatus for \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\": not found" Oct 13 05:51:02.710824 kubelet[2706]: I1013 05:51:02.710714 2706 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/625d1f05-914e-4b92-8eda-0f0088321193-calico-apiserver-certs\") on node \"ci-4459.1.0-5-82d9fc1916\" DevicePath \"\"" Oct 13 05:51:02.710824 kubelet[2706]: I1013 05:51:02.710778 2706 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p5sdm\" (UniqueName: \"kubernetes.io/projected/625d1f05-914e-4b92-8eda-0f0088321193-kube-api-access-p5sdm\") on node \"ci-4459.1.0-5-82d9fc1916\" DevicePath \"\"" Oct 13 05:51:02.776796 systemd[1]: var-lib-kubelet-pods-625d1f05\x2d914e\x2d4b92\x2d8eda\x2d0f0088321193-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp5sdm.mount: Deactivated 
successfully. Oct 13 05:51:02.790381 kubelet[2706]: E1013 05:51:02.786424 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\": not found" containerID="2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722" Oct 13 05:51:02.801360 kubelet[2706]: I1013 05:51:02.789311 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722"} err="failed to get container status \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\": rpc error: code = NotFound desc = an error occurred when try to find container \"2277cbf462aa1863ca9b4ca8e4f98ed8ed3d0aa80cdcc1ff662f4b882568a722\": not found" Oct 13 05:51:02.801855 kubelet[2706]: E1013 05:51:02.801229 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:51:02.803034 systemd[1]: Removed slice kubepods-besteffort-pod625d1f05_914e_4b92_8eda_0f0088321193.slice - libcontainer container kubepods-besteffort-pod625d1f05_914e_4b92_8eda_0f0088321193.slice. Oct 13 05:51:02.803183 systemd[1]: kubepods-besteffort-pod625d1f05_914e_4b92_8eda_0f0088321193.slice: Consumed 1.024s CPU time, 57.2M memory peak, 705K read from disk. 
Oct 13 05:51:02.988785 containerd[1527]: time="2025-10-13T05:51:02.988385484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77e21e581b51f2088b8a348c155ed30defcaea59224842646326434ac1ae6a95\" id:\"873129e01d03fc5ed80d088318b0640307c3441ff8fbe9356e4d05271a5ac9ee\" pid:5707 exited_at:{seconds:1760334662 nanos:982294421}" Oct 13 05:51:03.175167 sshd[5730]: Connection closed by 139.178.89.65 port 53356 Oct 13 05:51:03.178001 sshd-session[5705]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:03.194168 systemd[1]: sshd@10-137.184.180.203:22-139.178.89.65:53356.service: Deactivated successfully. Oct 13 05:51:03.202246 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:51:03.207745 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:51:03.218632 systemd[1]: Started sshd@11-137.184.180.203:22-139.178.89.65:53370.service - OpenSSH per-connection server daemon (139.178.89.65:53370). Oct 13 05:51:03.221956 systemd-logind[1497]: Removed session 11. Oct 13 05:51:03.347195 sshd[5743]: Accepted publickey for core from 139.178.89.65 port 53370 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:51:03.350259 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:03.358489 systemd-logind[1497]: New session 12 of user core. Oct 13 05:51:03.365367 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:51:03.567505 sshd[5748]: Connection closed by 139.178.89.65 port 53370 Oct 13 05:51:03.569371 sshd-session[5743]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:03.579818 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:51:03.580754 systemd[1]: sshd@11-137.184.180.203:22-139.178.89.65:53370.service: Deactivated successfully. Oct 13 05:51:03.585482 systemd[1]: session-12.scope: Deactivated successfully. 
Oct 13 05:51:03.589077 systemd-logind[1497]: Removed session 12. Oct 13 05:51:03.853593 containerd[1527]: time="2025-10-13T05:51:03.853394033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\" id:\"63af6a4c2b2951b01e094bab4cf9e107689310209cf99078d9b384bc38829e83\" pid:4759 exit_status:137 exited_at:{seconds:1760334661 nanos:849209202}" Oct 13 05:51:04.695384 kubelet[2706]: I1013 05:51:04.695093 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="625d1f05-914e-4b92-8eda-0f0088321193" path="/var/lib/kubelet/pods/625d1f05-914e-4b92-8eda-0f0088321193/volumes" Oct 13 05:51:08.583498 systemd[1]: Started sshd@12-137.184.180.203:22-139.178.89.65:53378.service - OpenSSH per-connection server daemon (139.178.89.65:53378). Oct 13 05:51:08.738718 sshd[5766]: Accepted publickey for core from 139.178.89.65 port 53378 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:51:08.741533 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:08.751351 systemd-logind[1497]: New session 13 of user core. Oct 13 05:51:08.756376 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:51:09.001233 sshd[5769]: Connection closed by 139.178.89.65 port 53378 Oct 13 05:51:09.002647 sshd-session[5766]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:09.009037 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:51:09.009388 systemd[1]: sshd@12-137.184.180.203:22-139.178.89.65:53378.service: Deactivated successfully. Oct 13 05:51:09.015510 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:51:09.021580 systemd-logind[1497]: Removed session 13. Oct 13 05:51:14.018473 systemd[1]: Started sshd@13-137.184.180.203:22-139.178.89.65:37330.service - OpenSSH per-connection server daemon (139.178.89.65:37330). 
Oct 13 05:51:14.181610 sshd[5783]: Accepted publickey for core from 139.178.89.65 port 37330 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:51:14.185804 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:14.195965 systemd-logind[1497]: New session 14 of user core. Oct 13 05:51:14.202240 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 05:51:14.863006 sshd[5787]: Connection closed by 139.178.89.65 port 37330 Oct 13 05:51:14.861940 sshd-session[5783]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:14.867408 systemd[1]: sshd@13-137.184.180.203:22-139.178.89.65:37330.service: Deactivated successfully. Oct 13 05:51:14.872240 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:51:14.875291 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:51:14.877852 systemd-logind[1497]: Removed session 14. Oct 13 05:51:14.880721 containerd[1527]: time="2025-10-13T05:51:14.880677929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa\" id:\"6a1a94dc437ac5c3db69300aa83f46b8c0f7bf29cc3e980118571cf665ca907b\" pid:5807 exited_at:{seconds:1760334674 nanos:880189795}" Oct 13 05:51:16.459017 containerd[1527]: time="2025-10-13T05:51:16.458938415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e0716120953b75976b3dbfe96e5aed5366f84ba3b2f45aba74e00f686d61fd4\" id:\"76373def2955e986bda06754b1c161f4b0e5a8a65ddc7f9490b230f430f97f4f\" pid:5836 exited_at:{seconds:1760334676 nanos:458150389}" Oct 13 05:51:19.879914 systemd[1]: Started sshd@14-137.184.180.203:22-139.178.89.65:37332.service - OpenSSH per-connection server daemon (139.178.89.65:37332). 
Oct 13 05:51:20.008445 sshd[5847]: Accepted publickey for core from 139.178.89.65 port 37332 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:51:20.011202 sshd-session[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:20.020045 systemd-logind[1497]: New session 15 of user core. Oct 13 05:51:20.024194 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:51:20.381121 sshd[5850]: Connection closed by 139.178.89.65 port 37332 Oct 13 05:51:20.382205 sshd-session[5847]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:20.389833 systemd[1]: sshd@14-137.184.180.203:22-139.178.89.65:37332.service: Deactivated successfully. Oct 13 05:51:20.392687 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:51:20.394846 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:51:20.398000 systemd-logind[1497]: Removed session 15. Oct 13 05:51:20.742006 containerd[1527]: time="2025-10-13T05:51:20.741808457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e0716120953b75976b3dbfe96e5aed5366f84ba3b2f45aba74e00f686d61fd4\" id:\"4fee844fddeacc27aee0f69c5f2be921553e3238a10058fae294267286467827\" pid:5873 exited_at:{seconds:1760334680 nanos:741294722}" Oct 13 05:51:24.716510 kubelet[2706]: E1013 05:51:24.713640 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:51:25.407909 systemd[1]: Started sshd@15-137.184.180.203:22-139.178.89.65:45968.service - OpenSSH per-connection server daemon (139.178.89.65:45968). 
Oct 13 05:51:25.627151 sshd[5892]: Accepted publickey for core from 139.178.89.65 port 45968 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:51:25.628577 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:25.640097 systemd-logind[1497]: New session 16 of user core. Oct 13 05:51:25.647240 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:51:26.387303 sshd[5895]: Connection closed by 139.178.89.65 port 45968 Oct 13 05:51:26.393244 sshd-session[5892]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:26.413902 systemd[1]: sshd@15-137.184.180.203:22-139.178.89.65:45968.service: Deactivated successfully. Oct 13 05:51:26.419525 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:51:26.422197 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:51:26.432111 systemd[1]: Started sshd@16-137.184.180.203:22-139.178.89.65:45980.service - OpenSSH per-connection server daemon (139.178.89.65:45980). Oct 13 05:51:26.434061 systemd-logind[1497]: Removed session 16. Oct 13 05:51:26.544328 sshd[5908]: Accepted publickey for core from 139.178.89.65 port 45980 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:51:26.546831 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:26.560240 systemd-logind[1497]: New session 17 of user core. Oct 13 05:51:26.566329 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 13 05:51:26.680202 kubelet[2706]: E1013 05:51:26.679996 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 13 05:51:26.988175 sshd[5913]: Connection closed by 139.178.89.65 port 45980 Oct 13 05:51:26.988033 sshd-session[5908]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:27.007938 systemd[1]: sshd@16-137.184.180.203:22-139.178.89.65:45980.service: Deactivated successfully. Oct 13 05:51:27.014649 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:51:27.019299 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:51:27.028339 systemd[1]: Started sshd@17-137.184.180.203:22-139.178.89.65:45992.service - OpenSSH per-connection server daemon (139.178.89.65:45992). Oct 13 05:51:27.030098 systemd-logind[1497]: Removed session 17. Oct 13 05:51:27.130107 sshd[5923]: Accepted publickey for core from 139.178.89.65 port 45992 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM Oct 13 05:51:27.134477 sshd-session[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:51:27.144128 systemd-logind[1497]: New session 18 of user core. Oct 13 05:51:27.149308 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:51:28.255770 sshd[5926]: Connection closed by 139.178.89.65 port 45992 Oct 13 05:51:28.258582 sshd-session[5923]: pam_unix(sshd:session): session closed for user core Oct 13 05:51:28.272084 systemd[1]: sshd@17-137.184.180.203:22-139.178.89.65:45992.service: Deactivated successfully. Oct 13 05:51:28.276907 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:51:28.279043 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. 
Oct 13 05:51:28.287262 systemd[1]: Started sshd@18-137.184.180.203:22-139.178.89.65:45996.service - OpenSSH per-connection server daemon (139.178.89.65:45996).
Oct 13 05:51:28.292861 systemd-logind[1497]: Removed session 18.
Oct 13 05:51:28.400408 sshd[5941]: Accepted publickey for core from 139.178.89.65 port 45996 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM
Oct 13 05:51:28.404458 sshd-session[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:28.414735 systemd-logind[1497]: New session 19 of user core.
Oct 13 05:51:28.419172 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 13 05:51:28.987839 sshd[5944]: Connection closed by 139.178.89.65 port 45996
Oct 13 05:51:28.988595 sshd-session[5941]: pam_unix(sshd:session): session closed for user core
Oct 13 05:51:29.004373 systemd[1]: sshd@18-137.184.180.203:22-139.178.89.65:45996.service: Deactivated successfully.
Oct 13 05:51:29.011930 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 05:51:29.015091 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit.
Oct 13 05:51:29.022655 systemd[1]: Started sshd@19-137.184.180.203:22-139.178.89.65:46002.service - OpenSSH per-connection server daemon (139.178.89.65:46002).
Oct 13 05:51:29.032235 systemd-logind[1497]: Removed session 19.
Oct 13 05:51:29.105710 sshd[5955]: Accepted publickey for core from 139.178.89.65 port 46002 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM
Oct 13 05:51:29.110807 sshd-session[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:29.121020 systemd-logind[1497]: New session 20 of user core.
Oct 13 05:51:29.123521 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 05:51:29.318454 sshd[5958]: Connection closed by 139.178.89.65 port 46002
Oct 13 05:51:29.319734 sshd-session[5955]: pam_unix(sshd:session): session closed for user core
Oct 13 05:51:29.326638 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit.
Oct 13 05:51:29.327446 systemd[1]: sshd@19-137.184.180.203:22-139.178.89.65:46002.service: Deactivated successfully.
Oct 13 05:51:29.331884 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 05:51:29.336318 systemd-logind[1497]: Removed session 20.
Oct 13 05:51:32.785149 containerd[1527]: time="2025-10-13T05:51:32.785024380Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77e21e581b51f2088b8a348c155ed30defcaea59224842646326434ac1ae6a95\" id:\"cc2edb79d068c177b35c3ae4e6dcbaf38426200295d840d89279ad635696d2c7\" pid:5984 exited_at:{seconds:1760334692 nanos:783211269}"
Oct 13 05:51:34.337990 systemd[1]: Started sshd@20-137.184.180.203:22-139.178.89.65:40148.service - OpenSSH per-connection server daemon (139.178.89.65:40148).
Oct 13 05:51:34.492543 sshd[5998]: Accepted publickey for core from 139.178.89.65 port 40148 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM
Oct 13 05:51:34.495762 sshd-session[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:34.503799 systemd-logind[1497]: New session 21 of user core.
Oct 13 05:51:34.512238 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 05:51:35.057837 sshd[6001]: Connection closed by 139.178.89.65 port 40148
Oct 13 05:51:35.057698 sshd-session[5998]: pam_unix(sshd:session): session closed for user core
Oct 13 05:51:35.068248 systemd[1]: sshd@20-137.184.180.203:22-139.178.89.65:40148.service: Deactivated successfully.
Oct 13 05:51:35.083686 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 05:51:35.088711 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit.
Oct 13 05:51:35.091379 systemd-logind[1497]: Removed session 21.
Oct 13 05:51:40.074418 systemd[1]: Started sshd@21-137.184.180.203:22-139.178.89.65:40154.service - OpenSSH per-connection server daemon (139.178.89.65:40154).
Oct 13 05:51:40.148222 sshd[6014]: Accepted publickey for core from 139.178.89.65 port 40154 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM
Oct 13 05:51:40.151952 sshd-session[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:40.160656 systemd-logind[1497]: New session 22 of user core.
Oct 13 05:51:40.170355 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 05:51:40.373825 sshd[6017]: Connection closed by 139.178.89.65 port 40154
Oct 13 05:51:40.374273 sshd-session[6014]: pam_unix(sshd:session): session closed for user core
Oct 13 05:51:40.381757 systemd[1]: sshd@21-137.184.180.203:22-139.178.89.65:40154.service: Deactivated successfully.
Oct 13 05:51:40.387887 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:51:40.390132 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:51:40.397532 systemd-logind[1497]: Removed session 22.
Oct 13 05:51:42.150350 systemd[1]: Started sshd@22-137.184.180.203:22-8.137.104.94:40874.service - OpenSSH per-connection server daemon (8.137.104.94:40874).
Oct 13 05:51:44.727020 containerd[1527]: time="2025-10-13T05:51:44.726512559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e16b02efe718ed8eb1ed06238a68e910b0d526e1cf0b78741e52011feea26aa\" id:\"843de03b7923f2549e1feba01a9c21ab4f2b34390205333ac057f73e9e9e3a58\" pid:6049 exited_at:{seconds:1760334704 nanos:726011396}"
Oct 13 05:51:45.390057 systemd[1]: Started sshd@23-137.184.180.203:22-139.178.89.65:33162.service - OpenSSH per-connection server daemon (139.178.89.65:33162).
Oct 13 05:51:45.508219 sshd[6060]: Accepted publickey for core from 139.178.89.65 port 33162 ssh2: RSA SHA256:ute8EHbIxInN5ULe6pB25aggDfqFBUDOgvC7nToDGNM
Oct 13 05:51:45.510281 sshd-session[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:45.521325 systemd-logind[1497]: New session 23 of user core.
Oct 13 05:51:45.525161 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 13 05:51:45.995761 sshd[6063]: Connection closed by 139.178.89.65 port 33162
Oct 13 05:51:45.996744 sshd-session[6060]: pam_unix(sshd:session): session closed for user core
Oct 13 05:51:46.005542 systemd[1]: sshd@23-137.184.180.203:22-139.178.89.65:33162.service: Deactivated successfully.
Oct 13 05:51:46.011508 systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 05:51:46.016076 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit.
Oct 13 05:51:46.017103 systemd-logind[1497]: Removed session 23.
Oct 13 05:51:46.422512 containerd[1527]: time="2025-10-13T05:51:46.422467929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e0716120953b75976b3dbfe96e5aed5366f84ba3b2f45aba74e00f686d61fd4\" id:\"a3ff6760488d8995cac282f1607f6f3a03afcc9b134dabe05bf64dda392aadab\" pid:6087 exited_at:{seconds:1760334706 nanos:421582203}"