Aug 13 07:06:19.911885 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:06:19.911914 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:06:19.911931 kernel: BIOS-provided physical RAM map:
Aug 13 07:06:19.911937 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 07:06:19.911944 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 07:06:19.911950 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 07:06:19.911959 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Aug 13 07:06:19.911970 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Aug 13 07:06:19.911983 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 07:06:19.911996 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 07:06:19.912006 kernel: NX (Execute Disable) protection: active
Aug 13 07:06:19.912015 kernel: APIC: Static calls initialized
Aug 13 07:06:19.912029 kernel: SMBIOS 2.8 present.
Aug 13 07:06:19.912041 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Aug 13 07:06:19.912051 kernel: Hypervisor detected: KVM
Aug 13 07:06:19.912063 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:06:19.912075 kernel: kvm-clock: using sched offset of 3044437345 cycles
Aug 13 07:06:19.912083 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:06:19.912091 kernel: tsc: Detected 2494.134 MHz processor
Aug 13 07:06:19.912100 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:06:19.912111 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:06:19.912119 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Aug 13 07:06:19.912126 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 07:06:19.914187 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:06:19.914205 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:06:19.914214 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Aug 13 07:06:19.914223 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:06:19.914231 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:06:19.914239 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:06:19.914248 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 07:06:19.914255 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:06:19.914266 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:06:19.914274 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:06:19.914285 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:06:19.914298 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Aug 13 07:06:19.914306 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Aug 13 07:06:19.914314 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 07:06:19.914322 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Aug 13 07:06:19.914330 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Aug 13 07:06:19.914338 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Aug 13 07:06:19.914353 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Aug 13 07:06:19.914361 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:06:19.914370 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:06:19.914378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 07:06:19.914386 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 13 07:06:19.914400 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Aug 13 07:06:19.914422 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Aug 13 07:06:19.914439 kernel: Zone ranges:
Aug 13 07:06:19.914451 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:06:19.914459 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Aug 13 07:06:19.914467 kernel: Normal empty
Aug 13 07:06:19.914475 kernel: Movable zone start for each node
Aug 13 07:06:19.914484 kernel: Early memory node ranges
Aug 13 07:06:19.914492 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 07:06:19.914500 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Aug 13 07:06:19.914508 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Aug 13 07:06:19.914520 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:06:19.914528 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 07:06:19.914540 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Aug 13 07:06:19.914548 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 07:06:19.914557 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:06:19.914565 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:06:19.914573 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 07:06:19.914582 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:06:19.914590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:06:19.914601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:06:19.914610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:06:19.914618 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:06:19.914626 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 07:06:19.914634 kernel: TSC deadline timer available
Aug 13 07:06:19.914643 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 07:06:19.914651 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:06:19.914660 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Aug 13 07:06:19.914670 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:06:19.914679 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:06:19.914690 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 07:06:19.914699 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:06:19.914707 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 07:06:19.914716 kernel: pcpu-alloc: [0] 0 1
Aug 13 07:06:19.914727 kernel: kvm-guest: PV spinlocks disabled, no host support
Aug 13 07:06:19.914743 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:06:19.914756 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:06:19.914773 kernel: random: crng init done
Aug 13 07:06:19.914787 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:06:19.914797 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:06:19.914805 kernel: Fallback order for Node 0: 0
Aug 13 07:06:19.914813 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Aug 13 07:06:19.914822 kernel: Policy zone: DMA32
Aug 13 07:06:19.914830 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:06:19.914840 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 125148K reserved, 0K cma-reserved)
Aug 13 07:06:19.914854 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 07:06:19.914873 kernel: Kernel/User page tables isolation: enabled
Aug 13 07:06:19.914887 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:06:19.914899 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:06:19.914910 kernel: Dynamic Preempt: voluntary
Aug 13 07:06:19.914921 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:06:19.914935 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:06:19.914946 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 07:06:19.914958 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:06:19.914977 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:06:19.914995 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:06:19.915006 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:06:19.915025 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 07:06:19.915037 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 07:06:19.915050 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:06:19.915065 kernel: Console: colour VGA+ 80x25
Aug 13 07:06:19.915073 kernel: printk: console [tty0] enabled
Aug 13 07:06:19.915081 kernel: printk: console [ttyS0] enabled
Aug 13 07:06:19.915090 kernel: ACPI: Core revision 20230628
Aug 13 07:06:19.915098 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 07:06:19.915111 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:06:19.915119 kernel: x2apic enabled
Aug 13 07:06:19.915139 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:06:19.915147 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 07:06:19.915156 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Aug 13 07:06:19.915170 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
Aug 13 07:06:19.915178 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Aug 13 07:06:19.915187 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Aug 13 07:06:19.915208 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:06:19.915216 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:06:19.915225 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:06:19.915237 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 07:06:19.915248 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:06:19.915257 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:06:19.915266 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 07:06:19.915274 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:06:19.915283 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:06:19.915299 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:06:19.915310 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:06:19.915319 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:06:19.915327 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:06:19.915336 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 07:06:19.915345 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:06:19.915354 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:06:19.915363 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:06:19.915375 kernel: landlock: Up and running.
Aug 13 07:06:19.915388 kernel: SELinux: Initializing.
Aug 13 07:06:19.915397 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 07:06:19.915406 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 07:06:19.915415 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Aug 13 07:06:19.915423 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:06:19.915432 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:06:19.915441 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:06:19.915455 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Aug 13 07:06:19.915466 kernel: signal: max sigframe size: 1776
Aug 13 07:06:19.915474 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:06:19.915483 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:06:19.915497 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:06:19.915506 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:06:19.915515 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:06:19.915524 kernel: .... node #0, CPUs: #1
Aug 13 07:06:19.915532 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:06:19.915543 kernel: smpboot: Max logical packages: 1
Aug 13 07:06:19.915555 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
Aug 13 07:06:19.915565 kernel: devtmpfs: initialized
Aug 13 07:06:19.915574 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:06:19.915583 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:06:19.915594 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 07:06:19.915603 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:06:19.915612 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:06:19.915621 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:06:19.915629 kernel: audit: type=2000 audit(1755068778.473:1): state=initialized audit_enabled=0 res=1
Aug 13 07:06:19.915641 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:06:19.915650 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:06:19.915658 kernel: cpuidle: using governor menu
Aug 13 07:06:19.915667 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:06:19.915676 kernel: dca service started, version 1.12.1
Aug 13 07:06:19.915685 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:06:19.915693 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:06:19.915702 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:06:19.915712 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:06:19.915723 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:06:19.915737 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:06:19.915746 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:06:19.915755 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:06:19.915764 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:06:19.915957 kernel: ACPI: Interpreter enabled
Aug 13 07:06:19.915966 kernel: ACPI: PM: (supports S0 S5)
Aug 13 07:06:19.915975 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:06:19.915983 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:06:19.916011 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:06:19.916024 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 13 07:06:19.916039 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:06:19.919266 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:06:19.919487 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 13 07:06:19.919607 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 13 07:06:19.921198 kernel: acpiphp: Slot [3] registered
Aug 13 07:06:19.921233 kernel: acpiphp: Slot [4] registered
Aug 13 07:06:19.921249 kernel: acpiphp: Slot [5] registered
Aug 13 07:06:19.921258 kernel: acpiphp: Slot [6] registered
Aug 13 07:06:19.921267 kernel: acpiphp: Slot [7] registered
Aug 13 07:06:19.921276 kernel: acpiphp: Slot [8] registered
Aug 13 07:06:19.921285 kernel: acpiphp: Slot [9] registered
Aug 13 07:06:19.921294 kernel: acpiphp: Slot [10] registered
Aug 13 07:06:19.921303 kernel: acpiphp: Slot [11] registered
Aug 13 07:06:19.921312 kernel: acpiphp: Slot [12] registered
Aug 13 07:06:19.921324 kernel: acpiphp: Slot [13] registered
Aug 13 07:06:19.921333 kernel: acpiphp: Slot [14] registered
Aug 13 07:06:19.921341 kernel: acpiphp: Slot [15] registered
Aug 13 07:06:19.921350 kernel: acpiphp: Slot [16] registered
Aug 13 07:06:19.921359 kernel: acpiphp: Slot [17] registered
Aug 13 07:06:19.921368 kernel: acpiphp: Slot [18] registered
Aug 13 07:06:19.921376 kernel: acpiphp: Slot [19] registered
Aug 13 07:06:19.921386 kernel: acpiphp: Slot [20] registered
Aug 13 07:06:19.921400 kernel: acpiphp: Slot [21] registered
Aug 13 07:06:19.921412 kernel: acpiphp: Slot [22] registered
Aug 13 07:06:19.921429 kernel: acpiphp: Slot [23] registered
Aug 13 07:06:19.921441 kernel: acpiphp: Slot [24] registered
Aug 13 07:06:19.921454 kernel: acpiphp: Slot [25] registered
Aug 13 07:06:19.921468 kernel: acpiphp: Slot [26] registered
Aug 13 07:06:19.921482 kernel: acpiphp: Slot [27] registered
Aug 13 07:06:19.921493 kernel: acpiphp: Slot [28] registered
Aug 13 07:06:19.921502 kernel: acpiphp: Slot [29] registered
Aug 13 07:06:19.921511 kernel: acpiphp: Slot [30] registered
Aug 13 07:06:19.921519 kernel: acpiphp: Slot [31] registered
Aug 13 07:06:19.921532 kernel: PCI host bridge to bus 0000:00
Aug 13 07:06:19.921708 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:06:19.921803 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:06:19.921890 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:06:19.921976 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 13 07:06:19.922061 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Aug 13 07:06:19.924283 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:06:19.924505 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 07:06:19.924635 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 13 07:06:19.924748 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Aug 13 07:06:19.924851 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Aug 13 07:06:19.924954 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Aug 13 07:06:19.925051 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Aug 13 07:06:19.925174 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Aug 13 07:06:19.926376 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Aug 13 07:06:19.926536 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Aug 13 07:06:19.926644 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Aug 13 07:06:19.926810 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 13 07:06:19.926941 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Aug 13 07:06:19.927038 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Aug 13 07:06:19.928269 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Aug 13 07:06:19.928390 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Aug 13 07:06:19.928545 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Aug 13 07:06:19.928651 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Aug 13 07:06:19.928753 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Aug 13 07:06:19.928849 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:06:19.928994 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:06:19.929110 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Aug 13 07:06:19.930317 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Aug 13 07:06:19.930527 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Aug 13 07:06:19.930678 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:06:19.930800 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Aug 13 07:06:19.930899 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Aug 13 07:06:19.931003 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Aug 13 07:06:19.931112 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Aug 13 07:06:19.932332 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Aug 13 07:06:19.932483 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Aug 13 07:06:19.932630 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Aug 13 07:06:19.932798 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:06:19.932954 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Aug 13 07:06:19.933113 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Aug 13 07:06:19.934369 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Aug 13 07:06:19.934565 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:06:19.934690 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Aug 13 07:06:19.934854 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Aug 13 07:06:19.934995 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Aug 13 07:06:19.935110 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Aug 13 07:06:19.938179 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Aug 13 07:06:19.938302 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Aug 13 07:06:19.938315 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:06:19.938325 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:06:19.938334 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:06:19.938343 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:06:19.938353 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 07:06:19.938370 kernel: iommu: Default domain type: Translated
Aug 13 07:06:19.938380 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:06:19.938389 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:06:19.938398 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:06:19.938419 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 07:06:19.938433 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Aug 13 07:06:19.938597 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug 13 07:06:19.938712 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug 13 07:06:19.938809 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:06:19.938829 kernel: vgaarb: loaded
Aug 13 07:06:19.938838 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 07:06:19.938847 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 07:06:19.938857 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:06:19.938866 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:06:19.938876 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:06:19.938885 kernel: pnp: PnP ACPI init
Aug 13 07:06:19.938900 kernel: pnp: PnP ACPI: found 4 devices
Aug 13 07:06:19.938914 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:06:19.938932 kernel: NET: Registered PF_INET protocol family
Aug 13 07:06:19.938944 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:06:19.938956 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 13 07:06:19.938969 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:06:19.938982 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:06:19.938995 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 13 07:06:19.939008 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 13 07:06:19.939020 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:06:19.939034 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:06:19.939059 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:06:19.939078 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:06:19.939260 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:06:19.939352 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:06:19.939439 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:06:19.939525 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 13 07:06:19.939610 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Aug 13 07:06:19.939756 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug 13 07:06:19.939869 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 07:06:19.939883 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 13 07:06:19.939982 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 31165 usecs
Aug 13 07:06:19.939994 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:06:19.940003 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:06:19.940013 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Aug 13 07:06:19.940028 kernel: Initialise system trusted keyrings
Aug 13 07:06:19.940041 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 13 07:06:19.940061 kernel: Key type asymmetric registered
Aug 13 07:06:19.940074 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:06:19.940089 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:06:19.940103 kernel: io scheduler mq-deadline registered
Aug 13 07:06:19.940118 kernel: io scheduler kyber registered
Aug 13 07:06:19.941445 kernel: io scheduler bfq registered
Aug 13 07:06:19.941468 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:06:19.941483 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Aug 13 07:06:19.941497 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 07:06:19.941520 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 07:06:19.941532 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:06:19.941545 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:06:19.941558 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:06:19.941570 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:06:19.941582 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:06:19.941839 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 07:06:19.941866 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:06:19.942015 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 07:06:19.942239 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T07:06:19 UTC (1755068779)
Aug 13 07:06:19.942341 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Aug 13 07:06:19.942353 kernel: intel_pstate: CPU model not supported
Aug 13 07:06:19.942362 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:06:19.942371 kernel: Segment Routing with IPv6
Aug 13 07:06:19.942381 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:06:19.942392 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:06:19.942404 kernel: Key type dns_resolver registered
Aug 13 07:06:19.942441 kernel: IPI shorthand broadcast: enabled
Aug 13 07:06:19.942455 kernel: sched_clock: Marking stable (857005382, 97904168)->(1066869445, -111959895)
Aug 13 07:06:19.942464 kernel: registered taskstats version 1
Aug 13 07:06:19.942473 kernel: Loading compiled-in X.509 certificates
Aug 13 07:06:19.942482 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:06:19.942491 kernel: Key type .fscrypt registered
Aug 13 07:06:19.942500 kernel: Key type fscrypt-provisioning registered
Aug 13 07:06:19.942509 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:06:19.942518 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:06:19.942530 kernel: ima: No architecture policies found
Aug 13 07:06:19.942539 kernel: clk: Disabling unused clocks
Aug 13 07:06:19.942547 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:06:19.942556 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:06:19.942565 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:06:19.942595 kernel: Run /init as init process
Aug 13 07:06:19.942611 kernel: with arguments:
Aug 13 07:06:19.942626 kernel: /init
Aug 13 07:06:19.942638 kernel: with environment:
Aug 13 07:06:19.942655 kernel: HOME=/
Aug 13 07:06:19.942668 kernel: TERM=linux
Aug 13 07:06:19.942682 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:06:19.942701 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:06:19.942716 systemd[1]: Detected virtualization kvm.
Aug 13 07:06:19.942725 systemd[1]: Detected architecture x86-64.
Aug 13 07:06:19.942735 systemd[1]: Running in initrd.
Aug 13 07:06:19.942747 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:06:19.942756 systemd[1]: Hostname set to .
Aug 13 07:06:19.942766 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:06:19.942776 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:06:19.942785 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:06:19.942795 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:06:19.942806 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:06:19.942816 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:06:19.942828 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:06:19.942838 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:06:19.942849 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:06:19.942859 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:06:19.942869 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:06:19.942878 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:06:19.942888 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:06:19.942903 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:06:19.942912 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:06:19.942922 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:06:19.942936 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:06:19.942945 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:06:19.942955 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:06:19.942968 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:06:19.942983 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:06:19.942997 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:06:19.943010 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:06:19.943024 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:06:19.943038 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:06:19.943048 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:06:19.943057 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:06:19.943071 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:06:19.943083 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:06:19.943093 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:06:19.943103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:06:19.943113 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:06:19.944218 systemd-journald[183]: Collecting audit messages is disabled.
Aug 13 07:06:19.944275 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:06:19.944287 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:06:19.944298 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:06:19.944312 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:06:19.944326 systemd-journald[183]: Journal started
Aug 13 07:06:19.944352 systemd-journald[183]: Runtime Journal (/run/log/journal/e8916ef90d8c4c26b750e1641377fb76) is 4.9M, max 39.3M, 34.4M free.
Aug 13 07:06:19.946780 systemd-modules-load[184]: Inserted module 'overlay'
Aug 13 07:06:19.982251 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:06:19.982306 kernel: Bridge firewalling registered
Aug 13 07:06:19.982326 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:06:19.975690 systemd-modules-load[184]: Inserted module 'br_netfilter'
Aug 13 07:06:19.983780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:06:19.987959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:06:19.995396 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:06:19.997306 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:06:20.001308 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:06:20.005414 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:06:20.021253 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:06:20.033307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:06:20.036575 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:06:20.041420 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:06:20.042314 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:06:20.051430 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:06:20.071171 dracut-cmdline[218]: dracut-dracut-053
Aug 13 07:06:20.075152 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:06:20.093433 systemd-resolved[221]: Positive Trust Anchors:
Aug 13 07:06:20.094090 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:06:20.094650 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:06:20.099717 systemd-resolved[221]: Defaulting to hostname 'linux'.
Aug 13 07:06:20.101343 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:06:20.101959 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:06:20.175172 kernel: SCSI subsystem initialized
Aug 13 07:06:20.185172 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:06:20.197169 kernel: iscsi: registered transport (tcp)
Aug 13 07:06:20.220773 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:06:20.220840 kernel: QLogic iSCSI HBA Driver
Aug 13 07:06:20.276031 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:06:20.281383 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:06:20.310407 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:06:20.310500 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:06:20.311590 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:06:20.360210 kernel: raid6: avx2x4 gen() 15632 MB/s
Aug 13 07:06:20.377204 kernel: raid6: avx2x2 gen() 16911 MB/s
Aug 13 07:06:20.394536 kernel: raid6: avx2x1 gen() 12528 MB/s
Aug 13 07:06:20.394629 kernel: raid6: using algorithm avx2x2 gen() 16911 MB/s
Aug 13 07:06:20.412289 kernel: raid6: .... xor() 18919 MB/s, rmw enabled
Aug 13 07:06:20.412386 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:06:20.437181 kernel: xor: automatically using best checksumming function avx
Aug 13 07:06:20.610160 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:06:20.624389 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:06:20.634412 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:06:20.662637 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Aug 13 07:06:20.668307 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:06:20.678372 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:06:20.696270 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Aug 13 07:06:20.730353 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:06:20.742569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:06:20.815552 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:06:20.825454 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:06:20.850343 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:06:20.855376 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:06:20.856442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:06:20.858527 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:06:20.864390 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:06:20.895975 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:06:20.905194 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Aug 13 07:06:20.908679 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Aug 13 07:06:20.915352 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:06:20.915428 kernel: GPT:9289727 != 125829119
Aug 13 07:06:20.915441 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:06:20.915475 kernel: GPT:9289727 != 125829119
Aug 13 07:06:20.916663 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:06:20.916737 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:06:20.948441 kernel: scsi host0: Virtio SCSI HBA
Aug 13 07:06:20.964043 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:06:20.980155 kernel: libata version 3.00 loaded.
Aug 13 07:06:20.980216 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Aug 13 07:06:20.987662 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 13 07:06:20.999161 kernel: scsi host1: ata_piix
Aug 13 07:06:21.001339 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Aug 13 07:06:21.004065 kernel: scsi host2: ata_piix
Aug 13 07:06:21.004465 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Aug 13 07:06:21.004482 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Aug 13 07:06:21.009174 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:06:21.010179 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:06:21.010303 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:06:21.011108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:06:21.012478 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:06:21.018339 kernel: ACPI: bus type USB registered
Aug 13 07:06:21.018373 kernel: usbcore: registered new interface driver usbfs
Aug 13 07:06:21.018388 kernel: usbcore: registered new interface driver hub
Aug 13 07:06:21.018401 kernel: usbcore: registered new device driver usb
Aug 13 07:06:21.017054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:06:21.017305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:06:21.017677 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:06:21.024440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:06:21.076028 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:06:21.087415 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:06:21.107339 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:06:21.197188 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461)
Aug 13 07:06:21.202235 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (452)
Aug 13 07:06:21.201980 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:06:21.211082 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:06:21.218887 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:06:21.223168 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:06:21.224310 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:06:21.233164 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Aug 13 07:06:21.233419 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Aug 13 07:06:21.233558 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Aug 13 07:06:21.233679 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Aug 13 07:06:21.233817 kernel: hub 1-0:1.0: USB hub found
Aug 13 07:06:21.233968 kernel: hub 1-0:1.0: 2 ports detected
Aug 13 07:06:21.231404 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:06:21.241414 disk-uuid[547]: Primary Header is updated.
Aug 13 07:06:21.241414 disk-uuid[547]: Secondary Entries is updated.
Aug 13 07:06:21.241414 disk-uuid[547]: Secondary Header is updated.
Aug 13 07:06:21.255187 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:06:21.261360 kernel: GPT:disk_guids don't match.
Aug 13 07:06:21.261431 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:06:21.261457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:06:22.268167 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:06:22.268483 disk-uuid[548]: The operation has completed successfully.
Aug 13 07:06:22.313961 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:06:22.314085 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:06:22.325409 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:06:22.341939 sh[561]: Success
Aug 13 07:06:22.357416 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 07:06:22.428077 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:06:22.436301 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:06:22.440250 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:06:22.458603 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:06:22.458691 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:06:22.458714 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:06:22.460143 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:06:22.462157 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:06:22.471258 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:06:22.472520 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:06:22.478377 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:06:22.480529 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:06:22.495740 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:06:22.495808 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:06:22.497219 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:06:22.502165 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:06:22.513658 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:06:22.516194 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:06:22.523734 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:06:22.529422 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:06:22.650091 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:06:22.661124 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:06:22.684532 systemd-networkd[749]: lo: Link UP
Aug 13 07:06:22.684542 systemd-networkd[749]: lo: Gained carrier
Aug 13 07:06:22.688673 systemd-networkd[749]: Enumeration completed
Aug 13 07:06:22.689120 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 13 07:06:22.689124 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Aug 13 07:06:22.692502 ignition[647]: Ignition 2.19.0
Aug 13 07:06:22.690028 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:06:22.692512 ignition[647]: Stage: fetch-offline
Aug 13 07:06:22.690035 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:06:22.692566 ignition[647]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:06:22.690428 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:06:22.692581 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:06:22.691946 systemd-networkd[749]: eth0: Link UP
Aug 13 07:06:22.692760 ignition[647]: parsed url from cmdline: ""
Aug 13 07:06:22.691951 systemd-networkd[749]: eth0: Gained carrier
Aug 13 07:06:22.692766 ignition[647]: no config URL provided
Aug 13 07:06:22.691961 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 13 07:06:22.692775 ignition[647]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:06:22.694337 systemd[1]: Reached target network.target - Network.
Aug 13 07:06:22.692787 ignition[647]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:06:22.697454 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:06:22.692795 ignition[647]: failed to fetch config: resource requires networking
Aug 13 07:06:22.699618 systemd-networkd[749]: eth1: Link UP
Aug 13 07:06:22.693099 ignition[647]: Ignition finished successfully
Aug 13 07:06:22.699623 systemd-networkd[749]: eth1: Gained carrier
Aug 13 07:06:22.699643 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:06:22.708585 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 07:06:22.713219 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.18/20 acquired from 169.254.169.253
Aug 13 07:06:22.717274 systemd-networkd[749]: eth0: DHCPv4 address 64.227.105.235/20, gateway 64.227.96.1 acquired from 169.254.169.253
Aug 13 07:06:22.734947 ignition[754]: Ignition 2.19.0
Aug 13 07:06:22.734958 ignition[754]: Stage: fetch
Aug 13 07:06:22.735167 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:06:22.735179 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:06:22.735307 ignition[754]: parsed url from cmdline: ""
Aug 13 07:06:22.735311 ignition[754]: no config URL provided
Aug 13 07:06:22.735317 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:06:22.735326 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:06:22.735346 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Aug 13 07:06:22.752319 ignition[754]: GET result: OK
Aug 13 07:06:22.753032 ignition[754]: parsing config with SHA512: c03cf8b6ae0338c8866909cfe9f2699ea582c1bc534337453a554571e3e4af013f7c93ae476f76f55976e2ffd6ff902b77bba657b30b433e08c72d9aa88ceffd
Aug 13 07:06:22.760626 unknown[754]: fetched base config from "system"
Aug 13 07:06:22.760642 unknown[754]: fetched base config from "system"
Aug 13 07:06:22.761454 ignition[754]: fetch: fetch complete
Aug 13 07:06:22.760652 unknown[754]: fetched user config from "digitalocean"
Aug 13 07:06:22.761463 ignition[754]: fetch: fetch passed
Aug 13 07:06:22.761574 ignition[754]: Ignition finished successfully
Aug 13 07:06:22.763719 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 07:06:22.770425 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:06:22.794055 ignition[761]: Ignition 2.19.0
Aug 13 07:06:22.794074 ignition[761]: Stage: kargs
Aug 13 07:06:22.794416 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:06:22.794448 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:06:22.796065 ignition[761]: kargs: kargs passed
Aug 13 07:06:22.796182 ignition[761]: Ignition finished successfully
Aug 13 07:06:22.798291 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:06:22.802406 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:06:22.821852 ignition[767]: Ignition 2.19.0
Aug 13 07:06:22.821870 ignition[767]: Stage: disks
Aug 13 07:06:22.822122 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:06:22.822161 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:06:22.828460 ignition[767]: disks: disks passed
Aug 13 07:06:22.829082 ignition[767]: Ignition finished successfully
Aug 13 07:06:22.831185 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:06:22.832021 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:06:22.832687 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:06:22.833605 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:06:22.834646 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:06:22.835451 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:06:22.853506 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:06:22.872034 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:06:22.875097 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:06:22.881301 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:06:22.991155 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:06:22.991995 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:06:22.992880 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:06:23.001360 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:06:23.004598 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:06:23.011348 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Aug 13 07:06:23.014181 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (783)
Aug 13 07:06:23.016399 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 07:06:23.017501 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:06:23.024278 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:06:23.024328 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:06:23.024343 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:06:23.018859 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:06:23.027491 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:06:23.041647 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:06:23.044392 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:06:23.048949 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:06:23.111263 coreos-metadata[785]: Aug 13 07:06:23.110 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:06:23.124107 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:06:23.125274 coreos-metadata[786]: Aug 13 07:06:23.123 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:06:23.127046 coreos-metadata[785]: Aug 13 07:06:23.125 INFO Fetch successful
Aug 13 07:06:23.131602 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:06:23.133326 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Aug 13 07:06:23.133464 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Aug 13 07:06:23.137712 coreos-metadata[786]: Aug 13 07:06:23.137 INFO Fetch successful
Aug 13 07:06:23.142247 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:06:23.143475 coreos-metadata[786]: Aug 13 07:06:23.142 INFO wrote hostname ci-4081.3.5-5-1812e6c6f4 to /sysroot/etc/hostname
Aug 13 07:06:23.145604 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:06:23.151847 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:06:23.260627 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:06:23.271349 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:06:23.273355 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:06:23.285164 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:06:23.313364 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:06:23.325272 ignition[904]: INFO : Ignition 2.19.0
Aug 13 07:06:23.325272 ignition[904]: INFO : Stage: mount
Aug 13 07:06:23.326737 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:06:23.326737 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:06:23.328058 ignition[904]: INFO : mount: mount passed
Aug 13 07:06:23.328058 ignition[904]: INFO : Ignition finished successfully
Aug 13 07:06:23.328299 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:06:23.332312 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:06:23.457084 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:06:23.464434 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:06:23.477155 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (916)
Aug 13 07:06:23.480580 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:06:23.480652 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:06:23.480666 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:06:23.485431 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:06:23.487399 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:06:23.520167 ignition[933]: INFO : Ignition 2.19.0 Aug 13 07:06:23.520167 ignition[933]: INFO : Stage: files Aug 13 07:06:23.520167 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:06:23.520167 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:06:23.522825 ignition[933]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:06:23.523615 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:06:23.523615 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:06:23.527978 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:06:23.528557 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:06:23.528557 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:06:23.528488 unknown[933]: wrote ssh authorized keys file for user: core Aug 13 07:06:23.530528 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 07:06:23.530528 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 13 07:06:23.679508 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 07:06:23.771978 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 07:06:23.771978 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:06:23.771978 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:06:23.771978 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:06:23.771978 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:06:23.771978 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:06:23.771978 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:06:23.771978 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:06:23.771978 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:06:23.779797 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:06:23.779797 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:06:23.779797 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:06:23.779797 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:06:23.779797 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:06:23.779797 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 07:06:23.792360 systemd-networkd[749]: eth0: Gained IPv6LL Aug 13 07:06:24.119399 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 07:06:24.176653 systemd-networkd[749]: eth1: Gained IPv6LL Aug 13 07:06:25.522189 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:06:25.522189 ignition[933]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 07:06:25.524024 ignition[933]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:06:25.524024 ignition[933]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:06:25.524024 ignition[933]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 07:06:25.524024 ignition[933]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:06:25.524024 ignition[933]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:06:25.528425 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:06:25.528425 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:06:25.528425 ignition[933]: INFO : files: files passed Aug 13 07:06:25.528425 ignition[933]: INFO : Ignition finished successfully Aug 13 07:06:25.526400 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:06:25.544743 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:06:25.547614 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:06:25.549067 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:06:25.549238 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:06:25.577181 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:06:25.577181 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:06:25.581176 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:06:25.584935 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:06:25.585865 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:06:25.590496 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:06:25.651298 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Aug 13 07:06:25.652220 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:06:25.654907 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:06:25.655583 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:06:25.656810 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:06:25.662496 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:06:25.694095 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:06:25.700488 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:06:25.731221 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:06:25.732909 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:06:25.733701 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:06:25.734967 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:06:25.735198 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:06:25.736382 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:06:25.737097 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:06:25.737946 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:06:25.739179 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:06:25.739990 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:06:25.741068 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:06:25.742045 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:06:25.743042 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:06:25.743854 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:06:25.744751 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:06:25.745584 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:06:25.745855 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:06:25.747231 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:06:25.748218 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:06:25.749073 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 07:06:25.749257 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:06:25.749909 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:06:25.750121 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:06:25.751809 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:06:25.752098 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:06:25.752955 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:06:25.753228 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:06:25.754622 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 07:06:25.754879 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Aug 13 07:06:25.770754 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:06:25.775570 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:06:25.776752 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:06:25.777013 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:06:25.779619 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:06:25.779827 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:06:25.790600 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:06:25.794693 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:06:25.802703 ignition[986]: INFO : Ignition 2.19.0 Aug 13 07:06:25.802703 ignition[986]: INFO : Stage: umount Aug 13 07:06:25.802703 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:06:25.802703 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:06:25.807998 ignition[986]: INFO : umount: umount passed Aug 13 07:06:25.807998 ignition[986]: INFO : Ignition finished successfully Aug 13 07:06:25.810047 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:06:25.810208 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:06:25.812543 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:06:25.812749 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:06:25.817236 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:06:25.817374 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:06:25.825270 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 07:06:25.825966 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 07:06:25.827955 systemd[1]: Stopped target network.target - Network. Aug 13 07:06:25.828339 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:06:25.828455 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:06:25.829055 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:06:25.831254 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:06:25.836269 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:06:25.836753 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:06:25.837073 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:06:25.837579 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 07:06:25.837660 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:06:25.838730 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:06:25.838795 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:06:25.839609 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:06:25.839691 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:06:25.840667 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:06:25.840738 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:06:25.841726 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:06:25.842801 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Aug 13 07:06:25.845166 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 07:06:25.845808 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:06:25.845906 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:06:25.846230 systemd-networkd[749]: eth1: DHCPv6 lease lost Aug 13 07:06:25.848349 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:06:25.848555 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:06:25.849974 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:06:25.850235 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:06:25.850330 systemd-networkd[749]: eth0: DHCPv6 lease lost Aug 13 07:06:25.854289 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:06:25.854424 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:06:25.857364 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:06:25.857413 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:06:25.862485 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:06:25.864443 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:06:25.864559 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:06:25.865519 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:06:25.865614 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:06:25.866916 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:06:25.867008 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:06:25.867566 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:06:25.867631 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:06:25.870346 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:06:25.883811 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:06:25.884036 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:06:25.891789 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:06:25.891991 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:06:25.893879 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:06:25.893991 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:06:25.895232 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:06:25.895300 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:06:25.896056 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:06:25.896164 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:06:25.897525 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:06:25.897609 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:06:25.898959 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:06:25.899038 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:06:25.904412 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Aug 13 07:06:25.904846 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:06:25.904922 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:06:25.905346 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 07:06:25.905395 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:06:25.905835 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:06:25.905882 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:06:25.908064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:06:25.908979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:06:25.919861 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:06:25.920062 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:06:25.921375 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:06:25.924485 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:06:25.940656 systemd[1]: Switching root. Aug 13 07:06:25.973724 systemd-journald[183]: Journal stopped Aug 13 07:06:27.190154 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Aug 13 07:06:27.190261 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:06:27.190290 kernel: SELinux: policy capability open_perms=1 Aug 13 07:06:27.190321 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:06:27.190345 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:06:27.190362 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:06:27.190392 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:06:27.190415 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:06:27.190433 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:06:27.190450 kernel: audit: type=1403 audit(1755068786.145:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:06:27.190485 systemd[1]: Successfully loaded SELinux policy in 43.716ms. Aug 13 07:06:27.190521 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.257ms. Aug 13 07:06:27.190546 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:06:27.190565 systemd[1]: Detected virtualization kvm. Aug 13 07:06:27.190585 systemd[1]: Detected architecture x86-64. Aug 13 07:06:27.190603 systemd[1]: Detected first boot. Aug 13 07:06:27.190620 systemd[1]: Hostname set to <ci-4081.3.5-5-1812e6c6f4>. Aug 13 07:06:27.190639 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:06:27.190658 zram_generator::config[1029]: No configuration found. Aug 13 07:06:27.190677 systemd[1]: Populated /etc with preset unit settings. Aug 13 07:06:27.190701 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 07:06:27.190720 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 07:06:27.190737 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
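At this point the system has switched out of the initrd: journald is restarted, the SELinux policy is loaded (Flatcar ships it but, by default, does not enforce it), and systemd 255 performs first-boot setup. A sketch for confirming the same state after boot, assuming a shell on the host:

```sh
cat /sys/fs/selinux/enforce   # 0 = permissive (Flatcar's default), 1 = enforcing
systemd-detect-virt           # prints "kvm", matching "Detected virtualization kvm"
systemctl is-system-running   # overall manager state once startup settles
```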
Aug 13 07:06:27.190758 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 07:06:27.190776 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:06:27.190795 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:06:27.190814 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:06:27.190833 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:06:27.190850 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:06:27.190875 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:06:27.190901 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 07:06:27.190927 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:06:27.190951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:06:27.190990 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 07:06:27.191008 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:06:27.191026 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 07:06:27.191046 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:06:27.191065 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 07:06:27.191095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:06:27.191113 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 07:06:27.191160 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 07:06:27.191182 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 07:06:27.191203 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:06:27.191226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:06:27.191246 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:06:27.191273 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:06:27.191293 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:06:27.191313 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:06:27.191331 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:06:27.191355 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:06:27.191375 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:06:27.191393 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:06:27.191412 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 07:06:27.191436 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 07:06:27.191470 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 07:06:27.191489 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:06:27.191509 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 07:06:27.191536 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 07:06:27.191556 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:06:27.191575 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:06:27.191601 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:06:27.191626 systemd[1]: Reached target machines.target - Containers. Aug 13 07:06:27.191645 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:06:27.191665 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:06:27.191690 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:06:27.191709 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:06:27.191728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:06:27.191747 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:06:27.191766 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:06:27.191786 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 07:06:27.191811 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:06:27.191829 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:06:27.191853 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 07:06:27.191871 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 07:06:27.191892 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 07:06:27.191927 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 07:06:27.191948 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:06:27.191968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:06:27.191990 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 07:06:27.192017 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:06:27.192037 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:06:27.192056 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 07:06:27.192074 systemd[1]: Stopped verity-setup.service. Aug 13 07:06:27.192092 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:06:27.192115 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:06:27.194631 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 07:06:27.194688 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 07:06:27.194722 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:06:27.194742 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 07:06:27.194755 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Aug 13 07:06:27.194768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:06:27.194785 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:06:27.194797 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:06:27.194810 kernel: loop: module loaded Aug 13 07:06:27.194824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:06:27.194836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:06:27.194849 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:06:27.194867 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:06:27.194884 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:06:27.194897 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:06:27.194909 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:06:27.194922 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:06:27.194942 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:06:27.194961 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:06:27.194982 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:06:27.194996 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:06:27.195049 systemd-journald[1095]: Collecting audit messages is disabled. Aug 13 07:06:27.195079 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:06:27.195118 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 07:06:27.195174 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:06:27.195202 systemd-journald[1095]: Journal started Aug 13 07:06:27.195241 systemd-journald[1095]: Runtime Journal (/run/log/journal/e8916ef90d8c4c26b750e1641377fb76) is 4.9M, max 39.3M, 34.4M free. Aug 13 07:06:26.881295 systemd[1]: Queued start job for default target multi-user.target. Aug 13 07:06:26.903250 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 07:06:26.903806 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 07:06:27.199202 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:06:27.201155 kernel: fuse: init (API version 7.39) Aug 13 07:06:27.204978 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:06:27.205192 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:06:27.241255 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:06:27.241715 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:06:27.241765 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:06:27.246359 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 07:06:27.254325 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:06:27.260153 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Aug 13 07:06:27.262360 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:06:27.265327 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 07:06:27.270393 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:06:27.270904 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:06:27.275475 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:06:27.279944 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:06:27.283023 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:06:27.288095 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:06:27.304225 kernel: ACPI: bus type drm_connector registered Aug 13 07:06:27.305805 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:06:27.307730 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:06:27.333999 systemd-journald[1095]: Time spent on flushing to /var/log/journal/e8916ef90d8c4c26b750e1641377fb76 is 21.510ms for 988 entries. Aug 13 07:06:27.333999 systemd-journald[1095]: System Journal (/var/log/journal/e8916ef90d8c4c26b750e1641377fb76) is 8.0M, max 195.6M, 187.6M free. Aug 13 07:06:27.369582 systemd-journald[1095]: Received client request to flush runtime journal. Aug 13 07:06:27.355676 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:06:27.371818 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:06:27.372642 kernel: loop0: detected capacity change from 0 to 140768 Aug 13 07:06:27.383060 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:06:27.385513 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:06:27.394567 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:06:27.397008 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:06:27.402154 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:06:27.406265 systemd-tmpfiles[1110]: ACLs are not supported, ignoring. Aug 13 07:06:27.406285 systemd-tmpfiles[1110]: ACLs are not supported, ignoring. Aug 13 07:06:27.426944 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:06:27.431743 kernel: loop1: detected capacity change from 0 to 142488 Aug 13 07:06:27.443365 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:06:27.444575 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:06:27.446627 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 07:06:27.472183 kernel: loop2: detected capacity change from 0 to 229808 Aug 13 07:06:27.472826 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:06:27.490305 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:06:27.509282 udevadm[1170]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 07:06:27.516583 kernel: loop3: detected capacity change from 0 to 8 Aug 13 07:06:27.540095 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:06:27.558160 kernel: loop4: detected capacity change from 0 to 140768 Aug 13 07:06:27.555817 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:06:27.601163 kernel: loop5: detected capacity change from 0 to 142488 Aug 13 07:06:27.607477 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Aug 13 07:06:27.609189 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Aug 13 07:06:27.625899 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:06:27.640221 kernel: loop6: detected capacity change from 0 to 229808 Aug 13 07:06:27.672162 kernel: loop7: detected capacity change from 0 to 8 Aug 13 07:06:27.672970 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Aug 13 07:06:27.673574 (sd-merge)[1176]: Merged extensions into '/usr'. Aug 13 07:06:27.681836 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:06:27.681984 systemd[1]: Reloading... Aug 13 07:06:27.841161 zram_generator::config[1208]: No configuration found. Aug 13 07:06:27.987454 ldconfig[1138]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:06:28.087256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:06:28.158380 systemd[1]: Reloading finished in 475 ms. Aug 13 07:06:28.189338 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:06:28.194232 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:06:28.203508 systemd[1]: Starting ensure-sysext.service... Aug 13 07:06:28.213874 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:06:28.230660 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:06:28.230682 systemd[1]: Reloading... Aug 13 07:06:28.246946 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:06:28.248062 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:06:28.250198 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:06:28.250662 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Aug 13 07:06:28.250850 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Aug 13 07:06:28.256580 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:06:28.256743 systemd-tmpfiles[1249]: Skipping /boot Aug 13 07:06:28.281336 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:06:28.281352 systemd-tmpfiles[1249]: Skipping /boot Aug 13 07:06:28.346181 zram_generator::config[1276]: No configuration found. 
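The (sd-merge) step above is systemd-sysext overlaying the listed extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-digitalocean) onto /usr, which is why systemd immediately reloads its unit set. A sketch for inspecting that merge on the running host:

```sh
systemd-sysext status   # which hierarchies (e.g. /usr) carry merged extensions
ls -l /etc/extensions   # kubernetes.raw symlink written by the Ignition files stage
systemd-sysext refresh  # re-merge after adding or removing a .raw image
```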
Aug 13 07:06:28.498864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:06:28.552614 systemd[1]: Reloading finished in 321 ms. Aug 13 07:06:28.573797 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:06:28.579849 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:06:28.591441 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:06:28.596371 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:06:28.608380 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:06:28.616813 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:06:28.623194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:06:28.626449 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:06:28.633738 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:06:28.633946 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:06:28.640576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:06:28.644521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:06:28.654998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:06:28.655668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:06:28.655840 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:06:28.657986 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:06:28.660972 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:06:28.661242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:06:28.669443 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:06:28.670514 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:06:28.674121 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:06:28.676492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:06:28.685581 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:06:28.687446 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:06:28.687649 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 07:06:28.692214 systemd[1]: Finished ensure-sysext.service. Aug 13 07:06:28.706818 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 07:06:28.708672 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:06:28.711312 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:06:28.714841 systemd-udevd[1333]: Using default interface naming scheme 'v255'. Aug 13 07:06:28.731846 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:06:28.740102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:06:28.740394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:06:28.744643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:06:28.744831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:06:28.746701 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:06:28.749416 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:06:28.751414 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:06:28.752274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:06:28.756744 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:06:28.767509 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:06:28.769451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:06:28.771591 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:06:28.771846 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:06:28.785352 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:06:28.795608 augenrules[1363]: No rules Aug 13 07:06:28.800225 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:06:28.807281 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:06:28.830691 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:06:28.933859 systemd-resolved[1331]: Positive Trust Anchors: Aug 13 07:06:28.934234 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:06:28.934374 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:06:28.942821 systemd-resolved[1331]: Using system hostname 'ci-4081.3.5-5-1812e6c6f4'. Aug 13 07:06:28.945092 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Aug 13 07:06:28.946652 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:06:28.985243 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 07:06:28.999256 systemd-networkd[1361]: lo: Link UP Aug 13 07:06:28.999681 systemd-networkd[1361]: lo: Gained carrier Aug 13 07:06:29.001623 systemd-networkd[1361]: Enumeration completed Aug 13 07:06:29.001972 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:06:29.002646 systemd[1]: Reached target network.target - Network. Aug 13 07:06:29.013426 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:06:29.017944 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 07:06:29.018570 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:06:29.030166 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1360) Aug 13 07:06:29.039366 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Aug 13 07:06:29.039766 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:06:29.039951 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:06:29.042382 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:06:29.044325 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:06:29.048426 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:06:29.055081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:06:29.055169 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:06:29.055188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:06:29.076159 kernel: ISO 9660 Extensions: RRIP_1991A Aug 13 07:06:29.078815 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Aug 13 07:06:29.082885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:06:29.083097 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:06:29.086051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:06:29.086426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:06:29.093372 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:06:29.096073 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:06:29.096352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:06:29.097537 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Aug 13 07:06:29.125165 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 07:06:29.130227 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 13 07:06:29.131149 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:06:29.139643 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:06:29.147741 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:06:29.169789 systemd-networkd[1361]: eth1: Configuring with /run/systemd/network/10-92:83:b8:87:d6:03.network. Aug 13 07:06:29.172877 systemd-networkd[1361]: eth1: Link UP Aug 13 07:06:29.173155 systemd-networkd[1361]: eth1: Gained carrier Aug 13 07:06:29.180621 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Aug 13 07:06:29.188519 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:06:29.192826 systemd-networkd[1361]: eth0: Configuring with /run/systemd/network/10-f2:d9:50:7b:d8:1e.network. Aug 13 07:06:29.196846 systemd-networkd[1361]: eth0: Link UP Aug 13 07:06:29.197433 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 07:06:29.196862 systemd-networkd[1361]: eth0: Gained carrier Aug 13 07:06:29.251773 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:06:29.259575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:06:29.286154 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 13 07:06:29.288170 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Aug 13 07:06:29.293165 kernel: Console: switching to colour dummy device 80x25 Aug 13 07:06:29.296182 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 13 07:06:29.296265 kernel: [drm] features: -context_init Aug 13 07:06:29.303913 kernel: [drm] number of scanouts: 1 Aug 13 07:06:29.305168 kernel: [drm] number of cap sets: 0 Aug 13 07:06:29.316227 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Aug 13 07:06:29.320507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:06:29.320726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:06:29.329429 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Aug 13 07:06:29.329541 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 07:06:29.333560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:06:29.339161 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 13 07:06:29.367479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:06:29.367702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:06:29.424151 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:06:29.496183 kernel: EDAC MC: Ver: 3.0.0 Aug 13 07:06:29.514348 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:06:29.522814 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:06:29.529539 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:06:29.557205 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
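Above, networkd matches each NIC by MAC address against a generated /run/systemd/network/10-<mac>.network file and brings eth0/eth1 up. The generated files themselves are not shown in the log; the following is an illustrative unit of the same shape (on DigitalOcean the real files typically carry addressing from the metadata service, so DHCP=yes here is a placeholder):

```sh
cat <<'EOF' > /run/systemd/network/10-f2:d9:50:7b:d8:1e.network
[Match]
MACAddress=f2:d9:50:7b:d8:1e

[Network]
DHCP=yes
EOF

networkctl status eth0   # confirm carrier and addresses once the match applies
```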
Aug 13 07:06:29.591505 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:06:29.592454 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:06:29.592625 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:06:29.592895 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 07:06:29.593002 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:06:29.593521 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:06:29.594464 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:06:29.594758 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:06:29.594871 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:06:29.594925 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:06:29.595025 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:06:29.597580 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:06:29.599797 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:06:29.607064 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:06:29.611263 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:06:29.614820 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:06:29.616802 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:06:29.617964 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:06:29.618781 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:06:29.618812 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:06:29.625359 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:06:29.631295 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 07:06:29.637346 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:06:29.640432 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:06:29.647330 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:06:29.657410 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:06:29.660817 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:06:29.665230 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:06:29.667328 jq[1440]: false Aug 13 07:06:29.671984 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:06:29.682655 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:06:29.690441 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:06:29.709391 systemd[1]: Starting systemd-logind.service - User Login Management... 
Aug 13 07:06:29.711760 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:06:29.713740 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:06:29.716425 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:06:29.729380 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:06:29.737756 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:06:29.751518 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:06:29.751983 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:06:29.753408 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:06:29.754224 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:06:29.793682 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:06:29.794397 dbus-daemon[1439]: [system] SELinux support is enabled Aug 13 07:06:29.801669 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:06:29.813606 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:06:29.820383 jq[1451]: true Aug 13 07:06:29.813653 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:06:29.815720 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:06:29.815800 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 13 07:06:29.815822 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:06:29.869748 extend-filesystems[1441]: Found loop4 Aug 13 07:06:29.869748 extend-filesystems[1441]: Found loop5 Aug 13 07:06:29.869748 extend-filesystems[1441]: Found loop6 Aug 13 07:06:29.869748 extend-filesystems[1441]: Found loop7 Aug 13 07:06:29.869748 extend-filesystems[1441]: Found vda Aug 13 07:06:29.869748 extend-filesystems[1441]: Found vda1 Aug 13 07:06:29.869748 extend-filesystems[1441]: Found vda2 Aug 13 07:06:29.869748 extend-filesystems[1441]: Found vda3 Aug 13 07:06:29.869748 extend-filesystems[1441]: Found usr Aug 13 07:06:29.869748 extend-filesystems[1441]: Found vda4 Aug 13 07:06:29.869748 extend-filesystems[1441]: Found vda6 Aug 13 07:06:29.869748 extend-filesystems[1441]: Found vda7 Aug 13 07:06:29.869748 extend-filesystems[1441]: Found vda9 Aug 13 07:06:29.869748 extend-filesystems[1441]: Checking size of /dev/vda9 Aug 13 07:06:29.969928 extend-filesystems[1441]: Resized partition /dev/vda9 Aug 13 07:06:29.911697 systemd[1]: Started update-engine.service - Update Engine. 
Aug 13 07:06:29.972679 coreos-metadata[1438]: Aug 13 07:06:29.911 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:06:29.972679 coreos-metadata[1438]: Aug 13 07:06:29.944 INFO Fetch successful Aug 13 07:06:29.989482 update_engine[1449]: I20250813 07:06:29.888278 1449 main.cc:92] Flatcar Update Engine starting Aug 13 07:06:29.989482 update_engine[1449]: I20250813 07:06:29.913831 1449 update_check_scheduler.cc:74] Next update check in 3m39s Aug 13 07:06:29.989906 jq[1464]: true Aug 13 07:06:29.990027 tar[1463]: linux-amd64/LICENSE Aug 13 07:06:29.990027 tar[1463]: linux-amd64/helm Aug 13 07:06:29.922980 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:06:29.992884 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:06:30.007913 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 13 07:06:29.923278 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:06:29.941001 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:06:30.051521 systemd-logind[1448]: New seat seat0. Aug 13 07:06:30.054121 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:06:30.057681 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:06:30.058018 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:06:30.131463 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1372) Aug 13 07:06:30.138256 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 07:06:30.139353 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:06:30.177558 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 07:06:30.195075 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:06:30.195075 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 07:06:30.195075 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 07:06:30.222335 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Aug 13 07:06:30.222335 extend-filesystems[1441]: Found vdb Aug 13 07:06:30.205323 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:06:30.231104 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:06:30.205591 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:06:30.215094 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:06:30.249605 systemd[1]: Starting sshkeys.service... Aug 13 07:06:30.271123 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 07:06:30.284783 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
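extend-filesystems.service here grows the root filesystem on /dev/vda9 online, from 553472 to 15121403 4k blocks. The equivalent manual procedure is roughly the following sketch (growpart from cloud-utils is an assumption here; the Flatcar unit drives the resize itself):

    # grow partition 9 to fill the disk, then resize ext4 while mounted
    growpart /dev/vda 9
    resize2fs /dev/vda9   # same tool (resize2fs 1.47.1) the unit invoked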
Aug 13 07:06:30.434700 containerd[1456]: time="2025-08-13T07:06:30.434591745Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:06:30.450803 coreos-metadata[1510]: Aug 13 07:06:30.449 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:06:30.471337 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:06:30.474376 coreos-metadata[1510]: Aug 13 07:06:30.472 INFO Fetch successful Aug 13 07:06:30.474663 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:06:30.494863 unknown[1510]: wrote ssh authorized keys file for user: core Aug 13 07:06:30.529464 containerd[1456]: time="2025-08-13T07:06:30.529200903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.532170 update-ssh-keys[1524]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:06:30.535014 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 07:06:30.541229 systemd[1]: Finished sshkeys.service. Aug 13 07:06:30.551159 containerd[1456]: time="2025-08-13T07:06:30.549904707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:06:30.551159 containerd[1456]: time="2025-08-13T07:06:30.549987931Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:06:30.551159 containerd[1456]: time="2025-08-13T07:06:30.550016133Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:06:30.551159 containerd[1456]: time="2025-08-13T07:06:30.550409222Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:06:30.551159 containerd[1456]: time="2025-08-13T07:06:30.550461262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.551159 containerd[1456]: time="2025-08-13T07:06:30.550664176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:06:30.551159 containerd[1456]: time="2025-08-13T07:06:30.550689122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.551159 containerd[1456]: time="2025-08-13T07:06:30.551058118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:06:30.551159 containerd[1456]: time="2025-08-13T07:06:30.551085647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.551694 containerd[1456]: time="2025-08-13T07:06:30.551107828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:06:30.551694 containerd[1456]: time="2025-08-13T07:06:30.551560283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.551842 containerd[1456]: time="2025-08-13T07:06:30.551819897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.553870 containerd[1456]: time="2025-08-13T07:06:30.553348600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:06:30.553870 containerd[1456]: time="2025-08-13T07:06:30.553635085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:06:30.553870 containerd[1456]: time="2025-08-13T07:06:30.553666172Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:06:30.553870 containerd[1456]: time="2025-08-13T07:06:30.553835229Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:06:30.554260 containerd[1456]: time="2025-08-13T07:06:30.554229452Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:06:30.558392 containerd[1456]: time="2025-08-13T07:06:30.558338958Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:06:30.559844 containerd[1456]: time="2025-08-13T07:06:30.558613395Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:06:30.559844 containerd[1456]: time="2025-08-13T07:06:30.558662625Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:06:30.559844 containerd[1456]: time="2025-08-13T07:06:30.559231945Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:06:30.559844 containerd[1456]: time="2025-08-13T07:06:30.559270109Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:06:30.559844 containerd[1456]: time="2025-08-13T07:06:30.559502641Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:06:30.560098 containerd[1456]: time="2025-08-13T07:06:30.559986584Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:06:30.560229 containerd[1456]: time="2025-08-13T07:06:30.560201486Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:06:30.560275 containerd[1456]: time="2025-08-13T07:06:30.560235774Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:06:30.560275 containerd[1456]: time="2025-08-13T07:06:30.560257348Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:06:30.560336 containerd[1456]: time="2025-08-13T07:06:30.560288424Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Aug 13 07:06:30.560336 containerd[1456]: time="2025-08-13T07:06:30.560303855Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.560336 containerd[1456]: time="2025-08-13T07:06:30.560317691Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.560336 containerd[1456]: time="2025-08-13T07:06:30.560332221Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.560462 containerd[1456]: time="2025-08-13T07:06:30.560345683Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.560462 containerd[1456]: time="2025-08-13T07:06:30.560359060Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.560462 containerd[1456]: time="2025-08-13T07:06:30.560371732Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.560462 containerd[1456]: time="2025-08-13T07:06:30.560389035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:06:30.560462 containerd[1456]: time="2025-08-13T07:06:30.560414881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.560462 containerd[1456]: time="2025-08-13T07:06:30.560432227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.560462 containerd[1456]: time="2025-08-13T07:06:30.560448047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.560671 containerd[1456]: time="2025-08-13T07:06:30.560465437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.560671 containerd[1456]: time="2025-08-13T07:06:30.560483595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.560671 containerd[1456]: time="2025-08-13T07:06:30.560514639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.560671 containerd[1456]: time="2025-08-13T07:06:30.560528422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.560671 containerd[1456]: time="2025-08-13T07:06:30.560543444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.560671 containerd[1456]: time="2025-08-13T07:06:30.560556326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.560671 containerd[1456]: time="2025-08-13T07:06:30.560571784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.560671 containerd[1456]: time="2025-08-13T07:06:30.560620597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.560671 containerd[1456]: time="2025-08-13T07:06:30.560637939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Aug 13 07:06:30.560671 containerd[1456]: time="2025-08-13T07:06:30.560651841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560680073Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560736209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560759287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560775884Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560875155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560907032Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560922259Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560936854Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560948509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560963087Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560975205Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:06:30.561030 containerd[1456]: time="2025-08-13T07:06:30.560987402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:06:30.562665 containerd[1456]: time="2025-08-13T07:06:30.561338189Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:06:30.562665 containerd[1456]: time="2025-08-13T07:06:30.561437254Z" level=info msg="Connect containerd service" Aug 13 07:06:30.562665 containerd[1456]: time="2025-08-13T07:06:30.561517242Z" level=info msg="using legacy CRI server" Aug 13 07:06:30.562665 containerd[1456]: time="2025-08-13T07:06:30.561531377Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:06:30.562665 containerd[1456]: time="2025-08-13T07:06:30.561669330Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:06:30.568181 containerd[1456]: time="2025-08-13T07:06:30.567976218Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:06:30.569882 
containerd[1456]: time="2025-08-13T07:06:30.569609520Z" level=info msg="Start subscribing containerd event" Aug 13 07:06:30.570321 containerd[1456]: time="2025-08-13T07:06:30.570112708Z" level=info msg="Start recovering state" Aug 13 07:06:30.571000 containerd[1456]: time="2025-08-13T07:06:30.570882880Z" level=info msg="Start event monitor" Aug 13 07:06:30.571224 containerd[1456]: time="2025-08-13T07:06:30.571164051Z" level=info msg="Start snapshots syncer" Aug 13 07:06:30.571224 containerd[1456]: time="2025-08-13T07:06:30.571188591Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:06:30.571224 containerd[1456]: time="2025-08-13T07:06:30.571199457Z" level=info msg="Start streaming server" Aug 13 07:06:30.571642 containerd[1456]: time="2025-08-13T07:06:30.571498380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:06:30.571894 containerd[1456]: time="2025-08-13T07:06:30.571774949Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:06:30.572619 containerd[1456]: time="2025-08-13T07:06:30.572073452Z" level=info msg="containerd successfully booted in 0.146995s" Aug 13 07:06:30.572399 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:06:30.586282 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:06:30.601659 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:06:30.627220 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:06:30.627765 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:06:30.643225 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:06:30.682748 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:06:30.694402 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:06:30.709660 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:06:30.713110 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:06:30.896788 tar[1463]: linux-amd64/README.md Aug 13 07:06:30.910213 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:06:31.024471 systemd-networkd[1361]: eth1: Gained IPv6LL Aug 13 07:06:31.027256 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:06:31.030342 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:06:31.038530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:31.044517 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:06:31.081604 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:06:31.152397 systemd-networkd[1361]: eth0: Gained IPv6LL Aug 13 07:06:32.237944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:32.239414 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:06:32.241758 systemd[1]: Startup finished in 995ms (kernel) + 6.442s (initrd) + 6.138s (userspace) = 13.576s. 
Aug 13 07:06:32.253250 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:06:32.963091 kubelet[1561]: E0813 07:06:32.962977 1561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:06:32.964914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:06:32.965117 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:06:32.965525 systemd[1]: kubelet.service: Consumed 1.377s CPU time. Aug 13 07:06:33.398445 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:06:33.400026 systemd[1]: Started sshd@0-64.227.105.235:22-139.178.89.65:40338.service - OpenSSH per-connection server daemon (139.178.89.65:40338). Aug 13 07:06:33.487069 sshd[1573]: Accepted publickey for core from 139.178.89.65 port 40338 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:33.490387 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:33.502382 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:06:33.508614 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:06:33.513228 systemd-logind[1448]: New session 1 of user core. Aug 13 07:06:33.530375 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:06:33.537595 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:06:33.549121 (systemd)[1577]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:06:33.676605 systemd[1577]: Queued start job for default target default.target. Aug 13 07:06:33.687089 systemd[1577]: Created slice app.slice - User Application Slice. Aug 13 07:06:33.687151 systemd[1577]: Reached target paths.target - Paths. Aug 13 07:06:33.687175 systemd[1577]: Reached target timers.target - Timers. Aug 13 07:06:33.688950 systemd[1577]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:06:33.704423 systemd[1577]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:06:33.704572 systemd[1577]: Reached target sockets.target - Sockets. Aug 13 07:06:33.704588 systemd[1577]: Reached target basic.target - Basic System. Aug 13 07:06:33.704640 systemd[1577]: Reached target default.target - Main User Target. Aug 13 07:06:33.704683 systemd[1577]: Startup finished in 146ms. Aug 13 07:06:33.704871 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:06:33.714485 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:06:33.788195 systemd[1]: Started sshd@1-64.227.105.235:22-139.178.89.65:40340.service - OpenSSH per-connection server daemon (139.178.89.65:40340). Aug 13 07:06:33.832927 sshd[1588]: Accepted publickey for core from 139.178.89.65 port 40340 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:33.835743 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:33.841180 systemd-logind[1448]: New session 2 of user core. Aug 13 07:06:33.853471 systemd[1]: Started session-2.scope - Session 2 of User core. 
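The kubelet exit above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, neither of which has run yet, so the unit fails and systemd retries later. For illustration only, a hand-written minimal KubeletConfiguration would look like this sketch (the file kubeadm actually generates is far larger):

    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # matches the CgroupDriver seen later in this log
    EOF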
Aug 13 07:06:33.918574 sshd[1588]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:33.930108 systemd[1]: sshd@1-64.227.105.235:22-139.178.89.65:40340.service: Deactivated successfully. Aug 13 07:06:33.932954 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:06:33.935356 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:06:33.940542 systemd[1]: Started sshd@2-64.227.105.235:22-139.178.89.65:40348.service - OpenSSH per-connection server daemon (139.178.89.65:40348). Aug 13 07:06:33.942295 systemd-logind[1448]: Removed session 2. Aug 13 07:06:33.995469 sshd[1595]: Accepted publickey for core from 139.178.89.65 port 40348 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:33.997347 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:34.004985 systemd-logind[1448]: New session 3 of user core. Aug 13 07:06:34.014480 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:06:34.071319 sshd[1595]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:34.082251 systemd[1]: sshd@2-64.227.105.235:22-139.178.89.65:40348.service: Deactivated successfully. Aug 13 07:06:34.084463 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:06:34.086619 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:06:34.094849 systemd[1]: Started sshd@3-64.227.105.235:22-139.178.89.65:40358.service - OpenSSH per-connection server daemon (139.178.89.65:40358). Aug 13 07:06:34.096701 systemd-logind[1448]: Removed session 3. Aug 13 07:06:34.137004 sshd[1602]: Accepted publickey for core from 139.178.89.65 port 40358 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:34.139435 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:34.145199 systemd-logind[1448]: New session 4 of user core. Aug 13 07:06:34.156452 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:06:34.219519 sshd[1602]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:34.234833 systemd[1]: sshd@3-64.227.105.235:22-139.178.89.65:40358.service: Deactivated successfully. Aug 13 07:06:34.237670 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:06:34.240394 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:06:34.248677 systemd[1]: Started sshd@4-64.227.105.235:22-139.178.89.65:40362.service - OpenSSH per-connection server daemon (139.178.89.65:40362). Aug 13 07:06:34.250094 systemd-logind[1448]: Removed session 4. Aug 13 07:06:34.298924 sshd[1609]: Accepted publickey for core from 139.178.89.65 port 40362 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:34.301318 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:34.308552 systemd-logind[1448]: New session 5 of user core. Aug 13 07:06:34.317501 systemd[1]: Started session-5.scope - Session 5 of User core. 
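Each connection above gets its own transient unit (e.g. sshd@1-64.227.105.235:22-139.178.89.65:40340.service), spawned by the socket-activated sshd.socket. To watch the churn on a live system:

    # list per-connection sshd instances, including ones that already exited
    systemctl list-units --all 'sshd@*'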
Aug 13 07:06:34.393495 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:06:34.394596 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:06:34.417025 sudo[1612]: pam_unix(sudo:session): session closed for user root Aug 13 07:06:34.421422 sshd[1609]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:34.440250 systemd[1]: sshd@4-64.227.105.235:22-139.178.89.65:40362.service: Deactivated successfully. Aug 13 07:06:34.442683 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:06:34.445045 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:06:34.450651 systemd[1]: Started sshd@5-64.227.105.235:22-139.178.89.65:40366.service - OpenSSH per-connection server daemon (139.178.89.65:40366). Aug 13 07:06:34.452879 systemd-logind[1448]: Removed session 5. Aug 13 07:06:34.521359 sshd[1617]: Accepted publickey for core from 139.178.89.65 port 40366 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:34.523695 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:34.530104 systemd-logind[1448]: New session 6 of user core. Aug 13 07:06:34.543435 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:06:34.605105 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:06:34.605493 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:06:34.609919 sudo[1621]: pam_unix(sudo:session): session closed for user root Aug 13 07:06:34.618984 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:06:34.620052 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:06:34.643531 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:06:34.646270 auditctl[1624]: No rules Aug 13 07:06:34.646799 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:06:34.647052 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:06:34.649996 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:06:34.711421 augenrules[1642]: No rules Aug 13 07:06:34.713223 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:06:34.714370 sudo[1620]: pam_unix(sudo:session): session closed for user root Aug 13 07:06:34.718412 sshd[1617]: pam_unix(sshd:session): session closed for user core Aug 13 07:06:34.725886 systemd[1]: sshd@5-64.227.105.235:22-139.178.89.65:40366.service: Deactivated successfully. Aug 13 07:06:34.728628 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:06:34.731340 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:06:34.735720 systemd[1]: Started sshd@6-64.227.105.235:22-139.178.89.65:40378.service - OpenSSH per-connection server daemon (139.178.89.65:40378). Aug 13 07:06:34.738332 systemd-logind[1448]: Removed session 6. Aug 13 07:06:34.790327 sshd[1650]: Accepted publickey for core from 139.178.89.65 port 40378 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:06:34.792447 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:06:34.798442 systemd-logind[1448]: New session 7 of user core. 
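The audit-rules restart above ends with augenrules reporting "No rules", i.e. an empty ruleset after the sudo session removed the rule files. The same check-and-reload cycle by hand, as a sketch:

    auditctl -l        # prints "No rules" on this machine, matching the log
    augenrules --load  # recompile and load whatever remains in /etc/audit/rules.d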
Aug 13 07:06:34.806423 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:06:34.866679 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:06:34.867647 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:06:35.364597 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:06:35.369891 systemd-timesyncd[1344]: Contacted time server 85.209.17.10:123 (1.flatcar.pool.ntp.org). Aug 13 07:06:35.369990 systemd-timesyncd[1344]: Initial clock synchronization to Wed 2025-08-13 07:06:35.292300 UTC. Aug 13 07:06:35.377928 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:06:35.862756 dockerd[1670]: time="2025-08-13T07:06:35.862255363Z" level=info msg="Starting up" Aug 13 07:06:36.041401 dockerd[1670]: time="2025-08-13T07:06:36.041352978Z" level=info msg="Loading containers: start." Aug 13 07:06:36.169320 kernel: Initializing XFRM netlink socket Aug 13 07:06:36.268545 systemd-networkd[1361]: docker0: Link UP Aug 13 07:06:36.281917 dockerd[1670]: time="2025-08-13T07:06:36.281859796Z" level=info msg="Loading containers: done." Aug 13 07:06:36.303324 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1292543889-merged.mount: Deactivated successfully. Aug 13 07:06:36.305509 dockerd[1670]: time="2025-08-13T07:06:36.305075358Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:06:36.305509 dockerd[1670]: time="2025-08-13T07:06:36.305458779Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:06:36.305621 dockerd[1670]: time="2025-08-13T07:06:36.305605636Z" level=info msg="Daemon has completed initialization" Aug 13 07:06:36.338544 dockerd[1670]: time="2025-08-13T07:06:36.338409744Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:06:36.338839 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:06:37.085177 containerd[1456]: time="2025-08-13T07:06:37.084915322Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 07:06:37.702997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715079841.mount: Deactivated successfully. 
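dockerd has finished initialization on /run/docker.sock using the overlay2 storage driver; the overlay2 warning above is informational (CONFIG_OVERLAY_FS_REDIRECT_DIR disables native diff). A hedged way to verify the API is up without the docker CLI:

    # GET /version over the unix socket (Docker Engine REST API)
    curl --unix-socket /run/docker.sock http://localhost/version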
Aug 13 07:06:38.816393 containerd[1456]: time="2025-08-13T07:06:38.815146457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:38.816393 containerd[1456]: time="2025-08-13T07:06:38.815902249Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30078237" Aug 13 07:06:38.816393 containerd[1456]: time="2025-08-13T07:06:38.816320797Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:38.819740 containerd[1456]: time="2025-08-13T07:06:38.819684136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:38.821087 containerd[1456]: time="2025-08-13T07:06:38.821035383Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 1.736065852s" Aug 13 07:06:38.821288 containerd[1456]: time="2025-08-13T07:06:38.821268899Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 07:06:38.822849 containerd[1456]: time="2025-08-13T07:06:38.822769026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 07:06:40.278394 containerd[1456]: time="2025-08-13T07:06:40.277162435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:40.279658 containerd[1456]: time="2025-08-13T07:06:40.279578807Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26019361" Aug 13 07:06:40.280665 containerd[1456]: time="2025-08-13T07:06:40.280630349Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:40.284121 containerd[1456]: time="2025-08-13T07:06:40.284070863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:40.285820 containerd[1456]: time="2025-08-13T07:06:40.285746534Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 1.462635684s" Aug 13 07:06:40.285820 containerd[1456]: time="2025-08-13T07:06:40.285813120Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Aug 13 07:06:40.287015 
containerd[1456]: time="2025-08-13T07:06:40.286633038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 07:06:41.485167 containerd[1456]: time="2025-08-13T07:06:41.484753543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:41.487280 containerd[1456]: time="2025-08-13T07:06:41.487194540Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20155013" Aug 13 07:06:41.488283 containerd[1456]: time="2025-08-13T07:06:41.488217093Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:41.491715 containerd[1456]: time="2025-08-13T07:06:41.491662700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:41.497024 containerd[1456]: time="2025-08-13T07:06:41.496940003Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 1.210251789s" Aug 13 07:06:41.497024 containerd[1456]: time="2025-08-13T07:06:41.497021499Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Aug 13 07:06:41.497949 containerd[1456]: time="2025-08-13T07:06:41.497885399Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 07:06:41.499875 systemd-resolved[1331]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Aug 13 07:06:42.638746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017542127.mount: Deactivated successfully. Aug 13 07:06:43.216700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:06:43.231329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 07:06:43.353346 containerd[1456]: time="2025-08-13T07:06:43.352271193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:43.357776 containerd[1456]: time="2025-08-13T07:06:43.357696520Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666" Aug 13 07:06:43.359396 containerd[1456]: time="2025-08-13T07:06:43.359329812Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:43.365419 containerd[1456]: time="2025-08-13T07:06:43.365340993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:43.367384 containerd[1456]: time="2025-08-13T07:06:43.366307070Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 1.86836473s" Aug 13 07:06:43.367384 containerd[1456]: time="2025-08-13T07:06:43.367240742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 07:06:43.368406 containerd[1456]: time="2025-08-13T07:06:43.368116124Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 07:06:43.453461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:43.456024 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:06:43.529258 kubelet[1893]: E0813 07:06:43.529056 1893 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:06:43.537284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:06:43.537523 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:06:43.913150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1129856932.mount: Deactivated successfully. Aug 13 07:06:44.592369 systemd-resolved[1331]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
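systemd-resolved's "degraded feature set" notes mean the DigitalOcean resolvers (67.207.67.2 and .3) did not answer EDNS0 probes, so resolved fell back to plain UDP; name resolution still works. To inspect the negotiated per-server state:

    resolvectl status   # shows current DNS servers and their feature levels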
Aug 13 07:06:44.902083 containerd[1456]: time="2025-08-13T07:06:44.901938657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:44.903172 containerd[1456]: time="2025-08-13T07:06:44.903090543Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 07:06:44.903861 containerd[1456]: time="2025-08-13T07:06:44.903731143Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:44.907180 containerd[1456]: time="2025-08-13T07:06:44.906912979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:44.908452 containerd[1456]: time="2025-08-13T07:06:44.908282924Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.540087177s" Aug 13 07:06:44.908452 containerd[1456]: time="2025-08-13T07:06:44.908325540Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 07:06:44.909179 containerd[1456]: time="2025-08-13T07:06:44.909013524Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:06:45.412126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010719276.mount: Deactivated successfully. 
Aug 13 07:06:45.416118 containerd[1456]: time="2025-08-13T07:06:45.416061239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:45.417029 containerd[1456]: time="2025-08-13T07:06:45.416968146Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:06:45.417699 containerd[1456]: time="2025-08-13T07:06:45.417345038Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:45.421029 containerd[1456]: time="2025-08-13T07:06:45.420228776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:45.421029 containerd[1456]: time="2025-08-13T07:06:45.420675156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 511.622195ms" Aug 13 07:06:45.421029 containerd[1456]: time="2025-08-13T07:06:45.420707212Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:06:45.421643 containerd[1456]: time="2025-08-13T07:06:45.421609068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 07:06:45.933706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1520884099.mount: Deactivated successfully. Aug 13 07:06:47.558413 containerd[1456]: time="2025-08-13T07:06:47.558353665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:47.560102 containerd[1456]: time="2025-08-13T07:06:47.559329138Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Aug 13 07:06:47.560102 containerd[1456]: time="2025-08-13T07:06:47.560047570Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:47.564484 containerd[1456]: time="2025-08-13T07:06:47.564417693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:06:47.566284 containerd[1456]: time="2025-08-13T07:06:47.566048825Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.144396878s" Aug 13 07:06:47.566284 containerd[1456]: time="2025-08-13T07:06:47.566102060Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 07:06:50.622480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
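With etcd 3.5.21-0 pulled, all core control-plane images are now cached locally. Rather than relying on on-demand pulls, they can be prefetched in one step; a sketch assuming kubeadm is the bootstrapper here:

    kubeadm config images list --kubernetes-version v1.33.3
    kubeadm config images pull --kubernetes-version v1.33.3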
Aug 13 07:06:50.635524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:50.673755 systemd[1]: Reloading requested from client PID 2039 ('systemctl') (unit session-7.scope)... Aug 13 07:06:50.673775 systemd[1]: Reloading... Aug 13 07:06:50.805909 zram_generator::config[2078]: No configuration found. Aug 13 07:06:50.946339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:06:51.031646 systemd[1]: Reloading finished in 357 ms. Aug 13 07:06:51.094687 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:06:51.094797 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:06:51.095116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:51.109673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:51.249598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:51.266806 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:06:51.330173 kubelet[2131]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:06:51.330173 kubelet[2131]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:06:51.330173 kubelet[2131]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
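The three deprecation warnings above concern flags (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) that should migrate into the KubeletConfiguration file. The KUBELET_EXTRA_ARGS variable the unit references is conventionally set via a systemd drop-in; a sketch with a hypothetical --node-ip value:

    mkdir -p /etc/systemd/system/kubelet.service.d
    cat >/etc/systemd/system/kubelet.service.d/20-extra-args.conf <<'EOF'
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--node-ip=64.227.105.235"
    EOF
    systemctl daemon-reload && systemctl restart kubelet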
Aug 13 07:06:51.330173 kubelet[2131]: I0813 07:06:51.330105 2131 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:06:51.925684 kubelet[2131]: I0813 07:06:51.925638 2131 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 07:06:51.927185 kubelet[2131]: I0813 07:06:51.925928 2131 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:06:51.927185 kubelet[2131]: I0813 07:06:51.926328 2131 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 07:06:51.960481 kubelet[2131]: I0813 07:06:51.960399 2131 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:06:51.962024 kubelet[2131]: E0813 07:06:51.961463 2131 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.227.105.235:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.227.105.235:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 07:06:51.982317 kubelet[2131]: E0813 07:06:51.982153 2131 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:06:51.982317 kubelet[2131]: I0813 07:06:51.982308 2131 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:06:51.993846 kubelet[2131]: I0813 07:06:51.993772 2131 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:06:51.995686 kubelet[2131]: I0813 07:06:51.995597 2131 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:06:51.999608 kubelet[2131]: I0813 07:06:51.995682 2131 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-5-1812e6c6f4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:06:51.999608 kubelet[2131]: I0813 07:06:51.999612 2131 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:06:51.999951 kubelet[2131]: I0813 07:06:51.999633 2131 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 07:06:52.000970 kubelet[2131]: I0813 07:06:52.000920 2131 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:06:52.006421 kubelet[2131]: I0813 07:06:52.004385 2131 kubelet.go:480] "Attempting to sync node with API server" Aug 13 07:06:52.006421 kubelet[2131]: I0813 07:06:52.004448 2131 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:06:52.006421 kubelet[2131]: I0813 07:06:52.004493 2131 kubelet.go:386] "Adding apiserver pod source" Aug 13 07:06:52.006421 kubelet[2131]: I0813 07:06:52.004518 2131 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:06:52.019377 kubelet[2131]: E0813 07:06:52.019118 2131 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.227.105.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-5-1812e6c6f4&limit=500&resourceVersion=0\": dial tcp 64.227.105.235:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 07:06:52.021576 kubelet[2131]: E0813 07:06:52.021532 2131 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.227.105.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.105.235:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 07:06:52.022784 kubelet[2131]: I0813 07:06:52.021884 2131 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:06:52.022784 kubelet[2131]: I0813 07:06:52.022672 2131 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 07:06:52.023545 kubelet[2131]: W0813 07:06:52.023525 2131 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:06:52.028605 kubelet[2131]: I0813 07:06:52.028575 2131 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:06:52.028851 kubelet[2131]: I0813 07:06:52.028838 2131 server.go:1289] "Started kubelet" Aug 13 07:06:52.032124 kubelet[2131]: I0813 07:06:52.032084 2131 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:06:52.034707 kubelet[2131]: E0813 07:06:52.033238 2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.105.235:6443/api/v1/namespaces/default/events\": dial tcp 64.227.105.235:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-5-1812e6c6f4.185b41c9295794ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-5-1812e6c6f4,UID:ci-4081.3.5-5-1812e6c6f4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-5-1812e6c6f4,},FirstTimestamp:2025-08-13 07:06:52.028777643 +0000 UTC m=+0.755645652,LastTimestamp:2025-08-13 07:06:52.028777643 +0000 UTC m=+0.755645652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-5-1812e6c6f4,}" Aug 13 07:06:52.035712 kubelet[2131]: I0813 07:06:52.035005 2131 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:06:52.037096 kubelet[2131]: I0813 07:06:52.036340 2131 server.go:317] "Adding debug handlers to kubelet server" Aug 13 07:06:52.041676 kubelet[2131]: I0813 07:06:52.041416 2131 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:06:52.041827 kubelet[2131]: I0813 07:06:52.041768 2131 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:06:52.043741 kubelet[2131]: I0813 07:06:52.043180 2131 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:06:52.043741 kubelet[2131]: E0813 07:06:52.043509 2131 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-5-1812e6c6f4\" not found" Aug 13 07:06:52.046589 kubelet[2131]: I0813 07:06:52.042124 2131 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:06:52.056201 kubelet[2131]: I0813 07:06:52.055374 2131 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:06:52.056201 kubelet[2131]: I0813 07:06:52.055471 2131 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:06:52.056201 kubelet[2131]: E0813 07:06:52.055958 2131 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://64.227.105.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.105.235:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 07:06:52.056201 kubelet[2131]: E0813 07:06:52.056058 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.105.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-5-1812e6c6f4?timeout=10s\": dial tcp 64.227.105.235:6443: connect: connection refused" interval="200ms" Aug 13 07:06:52.065273 kubelet[2131]: I0813 07:06:52.064018 2131 factory.go:223] Registration of the containerd container factory successfully Aug 13 07:06:52.065273 kubelet[2131]: I0813 07:06:52.064041 2131 factory.go:223] Registration of the systemd container factory successfully Aug 13 07:06:52.065273 kubelet[2131]: I0813 07:06:52.064137 2131 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:06:52.086855 kubelet[2131]: I0813 07:06:52.086778 2131 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 07:06:52.087069 kubelet[2131]: E0813 07:06:52.087041 2131 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:06:52.088517 kubelet[2131]: I0813 07:06:52.088478 2131 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 07:06:52.088517 kubelet[2131]: I0813 07:06:52.088522 2131 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 07:06:52.088707 kubelet[2131]: I0813 07:06:52.088555 2131 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 07:06:52.088707 kubelet[2131]: I0813 07:06:52.088564 2131 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:06:52.088707 kubelet[2131]: E0813 07:06:52.088612 2131 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:06:52.099555 kubelet[2131]: I0813 07:06:52.099519 2131 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:06:52.099555 kubelet[2131]: I0813 07:06:52.099538 2131 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:06:52.099555 kubelet[2131]: I0813 07:06:52.099559 2131 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:06:52.101659 kubelet[2131]: E0813 07:06:52.101574 2131 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.227.105.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.105.235:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 07:06:52.101977 kubelet[2131]: I0813 07:06:52.101952 2131 policy_none.go:49] "None policy: Start" Aug 13 07:06:52.102077 kubelet[2131]: I0813 07:06:52.101992 2131 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:06:52.102077 kubelet[2131]: I0813 07:06:52.102005 2131 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:06:52.110003 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Aug 13 07:06:52.121619 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:06:52.126117 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:06:52.139702 kubelet[2131]: E0813 07:06:52.139657 2131 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:06:52.140645 kubelet[2131]: I0813 07:06:52.139971 2131 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:06:52.140645 kubelet[2131]: I0813 07:06:52.139991 2131 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:06:52.140645 kubelet[2131]: I0813 07:06:52.140365 2131 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:06:52.142557 kubelet[2131]: E0813 07:06:52.142521 2131 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 07:06:52.142557 kubelet[2131]: E0813 07:06:52.142577 2131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-5-1812e6c6f4\" not found" Aug 13 07:06:52.205601 systemd[1]: Created slice kubepods-burstable-pod1a4b55e3e32c2b2cc532c7c0ac006e41.slice - libcontainer container kubepods-burstable-pod1a4b55e3e32c2b2cc532c7c0ac006e41.slice. Aug 13 07:06:52.220466 kubelet[2131]: E0813 07:06:52.220423 2131 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-5-1812e6c6f4\" not found" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.223914 systemd[1]: Created slice kubepods-burstable-pod2ab5c7e89768c6e2eaffed68125d4147.slice - libcontainer container kubepods-burstable-pod2ab5c7e89768c6e2eaffed68125d4147.slice. Aug 13 07:06:52.227104 kubelet[2131]: E0813 07:06:52.227064 2131 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-5-1812e6c6f4\" not found" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.229667 systemd[1]: Created slice kubepods-burstable-pod07002c9df48433749116566599c452d9.slice - libcontainer container kubepods-burstable-pod07002c9df48433749116566599c452d9.slice. 
Aug 13 07:06:52.231722 kubelet[2131]: E0813 07:06:52.231690 2131 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-5-1812e6c6f4\" not found" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.241735 kubelet[2131]: I0813 07:06:52.241704 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.242602 kubelet[2131]: E0813 07:06:52.242569 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.105.235:6443/api/v1/nodes\": dial tcp 64.227.105.235:6443: connect: connection refused" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.257045 kubelet[2131]: I0813 07:06:52.257000 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07002c9df48433749116566599c452d9-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" (UID: \"07002c9df48433749116566599c452d9\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.257282 kubelet[2131]: I0813 07:06:52.257262 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07002c9df48433749116566599c452d9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" (UID: \"07002c9df48433749116566599c452d9\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.257387 kubelet[2131]: I0813 07:06:52.257371 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07002c9df48433749116566599c452d9-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" (UID: \"07002c9df48433749116566599c452d9\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.257483 kubelet[2131]: E0813 07:06:52.257374 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.105.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-5-1812e6c6f4?timeout=10s\": dial tcp 64.227.105.235:6443: connect: connection refused" interval="400ms" Aug 13 07:06:52.257556 kubelet[2131]: I0813 07:06:52.257513 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ab5c7e89768c6e2eaffed68125d4147-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-5-1812e6c6f4\" (UID: \"2ab5c7e89768c6e2eaffed68125d4147\") " pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.257636 kubelet[2131]: I0813 07:06:52.257625 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07002c9df48433749116566599c452d9-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" (UID: \"07002c9df48433749116566599c452d9\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.257715 kubelet[2131]: I0813 07:06:52.257699 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07002c9df48433749116566599c452d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" (UID: 
\"07002c9df48433749116566599c452d9\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.257784 kubelet[2131]: I0813 07:06:52.257774 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a4b55e3e32c2b2cc532c7c0ac006e41-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-5-1812e6c6f4\" (UID: \"1a4b55e3e32c2b2cc532c7c0ac006e41\") " pod="kube-system/kube-scheduler-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.258016 kubelet[2131]: I0813 07:06:52.257945 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ab5c7e89768c6e2eaffed68125d4147-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-5-1812e6c6f4\" (UID: \"2ab5c7e89768c6e2eaffed68125d4147\") " pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.258016 kubelet[2131]: I0813 07:06:52.257979 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ab5c7e89768c6e2eaffed68125d4147-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-5-1812e6c6f4\" (UID: \"2ab5c7e89768c6e2eaffed68125d4147\") " pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.444627 kubelet[2131]: I0813 07:06:52.444307 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.445283 kubelet[2131]: E0813 07:06:52.445255 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.105.235:6443/api/v1/nodes\": dial tcp 64.227.105.235:6443: connect: connection refused" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.521752 kubelet[2131]: E0813 07:06:52.521474 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:52.523216 containerd[1456]: time="2025-08-13T07:06:52.522929982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-5-1812e6c6f4,Uid:1a4b55e3e32c2b2cc532c7c0ac006e41,Namespace:kube-system,Attempt:0,}" Aug 13 07:06:52.525187 systemd-resolved[1331]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Aug 13 07:06:52.528218 kubelet[2131]: E0813 07:06:52.528154 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:52.528710 containerd[1456]: time="2025-08-13T07:06:52.528671673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-5-1812e6c6f4,Uid:2ab5c7e89768c6e2eaffed68125d4147,Namespace:kube-system,Attempt:0,}" Aug 13 07:06:52.532783 kubelet[2131]: E0813 07:06:52.532422 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:52.533066 containerd[1456]: time="2025-08-13T07:06:52.533022306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-5-1812e6c6f4,Uid:07002c9df48433749116566599c452d9,Namespace:kube-system,Attempt:0,}" Aug 13 07:06:52.658180 kubelet[2131]: E0813 07:06:52.658065 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.105.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-5-1812e6c6f4?timeout=10s\": dial tcp 64.227.105.235:6443: connect: connection refused" interval="800ms" Aug 13 07:06:52.847434 kubelet[2131]: I0813 07:06:52.846691 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.848033 kubelet[2131]: E0813 07:06:52.847980 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.105.235:6443/api/v1/nodes\": dial tcp 64.227.105.235:6443: connect: connection refused" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:52.884360 kubelet[2131]: E0813 07:06:52.884305 2131 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.227.105.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.105.235:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 07:06:53.030358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169143735.mount: Deactivated successfully. 
Aug 13 07:06:53.037169 containerd[1456]: time="2025-08-13T07:06:53.037039638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:06:53.039829 containerd[1456]: time="2025-08-13T07:06:53.039647206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:06:53.040542 containerd[1456]: time="2025-08-13T07:06:53.040431454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:06:53.041478 containerd[1456]: time="2025-08-13T07:06:53.041424459Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:06:53.042723 containerd[1456]: time="2025-08-13T07:06:53.042683266Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:06:53.043677 containerd[1456]: time="2025-08-13T07:06:53.043249658Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:06:53.043677 containerd[1456]: time="2025-08-13T07:06:53.043613309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:06:53.049247 containerd[1456]: time="2025-08-13T07:06:53.049174328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:06:53.053540 containerd[1456]: time="2025-08-13T07:06:53.053485575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.725765ms" Aug 13 07:06:53.060051 containerd[1456]: time="2025-08-13T07:06:53.059706730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.698755ms" Aug 13 07:06:53.062171 containerd[1456]: time="2025-08-13T07:06:53.061688872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.561312ms" Aug 13 07:06:53.082242 kubelet[2131]: E0813 07:06:53.082190 2131 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.227.105.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.105.235:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 07:06:53.095160 kubelet[2131]: E0813 07:06:53.092325 2131 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.227.105.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.105.235:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 07:06:53.216951 containerd[1456]: time="2025-08-13T07:06:53.216482900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:06:53.216951 containerd[1456]: time="2025-08-13T07:06:53.216572936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:06:53.216951 containerd[1456]: time="2025-08-13T07:06:53.216685515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:06:53.218986 containerd[1456]: time="2025-08-13T07:06:53.218580942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:53.218986 containerd[1456]: time="2025-08-13T07:06:53.218891232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:53.220889 containerd[1456]: time="2025-08-13T07:06:53.219307667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:06:53.221949 containerd[1456]: time="2025-08-13T07:06:53.221194379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:06:53.221949 containerd[1456]: time="2025-08-13T07:06:53.221253249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:06:53.221949 containerd[1456]: time="2025-08-13T07:06:53.221285795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:53.221949 containerd[1456]: time="2025-08-13T07:06:53.221399588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:53.222217 containerd[1456]: time="2025-08-13T07:06:53.220892308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:53.222217 containerd[1456]: time="2025-08-13T07:06:53.221346049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:06:53.251253 systemd[1]: Started cri-containerd-4bc794b225cfbaf47804b2f7794499a2dc5de5c09fcdf3156d9a191942ff928e.scope - libcontainer container 4bc794b225cfbaf47804b2f7794499a2dc5de5c09fcdf3156d9a191942ff928e. Aug 13 07:06:53.259473 systemd[1]: Started cri-containerd-f9a80ca4de33dbac93264b125f733de2a19eb5ea38d960510034f90960b998e0.scope - libcontainer container f9a80ca4de33dbac93264b125f733de2a19eb5ea38d960510034f90960b998e0. 
Aug 13 07:06:53.265346 systemd[1]: Started cri-containerd-1df26eb2e6daadf50deeb8dd124687644e273109563fad498b7f996ed112afd7.scope - libcontainer container 1df26eb2e6daadf50deeb8dd124687644e273109563fad498b7f996ed112afd7. Aug 13 07:06:53.289922 kubelet[2131]: E0813 07:06:53.289884 2131 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.227.105.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-5-1812e6c6f4&limit=500&resourceVersion=0\": dial tcp 64.227.105.235:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 07:06:53.356223 containerd[1456]: time="2025-08-13T07:06:53.356043580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-5-1812e6c6f4,Uid:1a4b55e3e32c2b2cc532c7c0ac006e41,Namespace:kube-system,Attempt:0,} returns sandbox id \"1df26eb2e6daadf50deeb8dd124687644e273109563fad498b7f996ed112afd7\"" Aug 13 07:06:53.357825 kubelet[2131]: E0813 07:06:53.357588 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:53.363880 containerd[1456]: time="2025-08-13T07:06:53.363752950Z" level=info msg="CreateContainer within sandbox \"1df26eb2e6daadf50deeb8dd124687644e273109563fad498b7f996ed112afd7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:06:53.371497 containerd[1456]: time="2025-08-13T07:06:53.371327928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-5-1812e6c6f4,Uid:07002c9df48433749116566599c452d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bc794b225cfbaf47804b2f7794499a2dc5de5c09fcdf3156d9a191942ff928e\"" Aug 13 07:06:53.372868 kubelet[2131]: E0813 07:06:53.372706 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:53.379478 containerd[1456]: time="2025-08-13T07:06:53.379377446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-5-1812e6c6f4,Uid:2ab5c7e89768c6e2eaffed68125d4147,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9a80ca4de33dbac93264b125f733de2a19eb5ea38d960510034f90960b998e0\"" Aug 13 07:06:53.380566 kubelet[2131]: E0813 07:06:53.380397 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:53.380741 containerd[1456]: time="2025-08-13T07:06:53.380464234Z" level=info msg="CreateContainer within sandbox \"4bc794b225cfbaf47804b2f7794499a2dc5de5c09fcdf3156d9a191942ff928e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:06:53.387613 containerd[1456]: time="2025-08-13T07:06:53.387568548Z" level=info msg="CreateContainer within sandbox \"f9a80ca4de33dbac93264b125f733de2a19eb5ea38d960510034f90960b998e0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:06:53.391452 containerd[1456]: time="2025-08-13T07:06:53.391403751Z" level=info msg="CreateContainer within sandbox \"1df26eb2e6daadf50deeb8dd124687644e273109563fad498b7f996ed112afd7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3335fc08051400985aff839bddba9c78900f3b659b58ccf6578ed9e199861050\"" Aug 13 
07:06:53.392103 containerd[1456]: time="2025-08-13T07:06:53.392077509Z" level=info msg="StartContainer for \"3335fc08051400985aff839bddba9c78900f3b659b58ccf6578ed9e199861050\"" Aug 13 07:06:53.400792 containerd[1456]: time="2025-08-13T07:06:53.400584307Z" level=info msg="CreateContainer within sandbox \"f9a80ca4de33dbac93264b125f733de2a19eb5ea38d960510034f90960b998e0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"029138d362d6dd0638258fb02ce073fc4ea31e2348e994acd8af0b2062d45503\"" Aug 13 07:06:53.401334 containerd[1456]: time="2025-08-13T07:06:53.401305913Z" level=info msg="StartContainer for \"029138d362d6dd0638258fb02ce073fc4ea31e2348e994acd8af0b2062d45503\"" Aug 13 07:06:53.405556 containerd[1456]: time="2025-08-13T07:06:53.405386548Z" level=info msg="CreateContainer within sandbox \"4bc794b225cfbaf47804b2f7794499a2dc5de5c09fcdf3156d9a191942ff928e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8dd52b52d64f825ba4b36b6957f7d7917e9256c7d070f27c38e622fc42e8fba7\"" Aug 13 07:06:53.407149 containerd[1456]: time="2025-08-13T07:06:53.406151349Z" level=info msg="StartContainer for \"8dd52b52d64f825ba4b36b6957f7d7917e9256c7d070f27c38e622fc42e8fba7\"" Aug 13 07:06:53.445193 systemd[1]: Started cri-containerd-3335fc08051400985aff839bddba9c78900f3b659b58ccf6578ed9e199861050.scope - libcontainer container 3335fc08051400985aff839bddba9c78900f3b659b58ccf6578ed9e199861050. Aug 13 07:06:53.459421 kubelet[2131]: E0813 07:06:53.458966 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.105.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-5-1812e6c6f4?timeout=10s\": dial tcp 64.227.105.235:6443: connect: connection refused" interval="1.6s" Aug 13 07:06:53.470271 systemd[1]: Started cri-containerd-029138d362d6dd0638258fb02ce073fc4ea31e2348e994acd8af0b2062d45503.scope - libcontainer container 029138d362d6dd0638258fb02ce073fc4ea31e2348e994acd8af0b2062d45503. Aug 13 07:06:53.478366 systemd[1]: Started cri-containerd-8dd52b52d64f825ba4b36b6957f7d7917e9256c7d070f27c38e622fc42e8fba7.scope - libcontainer container 8dd52b52d64f825ba4b36b6957f7d7917e9256c7d070f27c38e622fc42e8fba7. 
Aug 13 07:06:53.549197 containerd[1456]: time="2025-08-13T07:06:53.548730578Z" level=info msg="StartContainer for \"029138d362d6dd0638258fb02ce073fc4ea31e2348e994acd8af0b2062d45503\" returns successfully" Aug 13 07:06:53.568426 containerd[1456]: time="2025-08-13T07:06:53.568361595Z" level=info msg="StartContainer for \"3335fc08051400985aff839bddba9c78900f3b659b58ccf6578ed9e199861050\" returns successfully" Aug 13 07:06:53.574517 containerd[1456]: time="2025-08-13T07:06:53.574457687Z" level=info msg="StartContainer for \"8dd52b52d64f825ba4b36b6957f7d7917e9256c7d070f27c38e622fc42e8fba7\" returns successfully" Aug 13 07:06:53.649920 kubelet[2131]: I0813 07:06:53.649877 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:53.650996 kubelet[2131]: E0813 07:06:53.650935 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.105.235:6443/api/v1/nodes\": dial tcp 64.227.105.235:6443: connect: connection refused" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:54.114493 kubelet[2131]: E0813 07:06:54.114455 2131 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-5-1812e6c6f4\" not found" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:54.114779 kubelet[2131]: E0813 07:06:54.114588 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:54.118369 kubelet[2131]: E0813 07:06:54.118334 2131 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-5-1812e6c6f4\" not found" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:54.118527 kubelet[2131]: E0813 07:06:54.118472 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:54.120277 kubelet[2131]: E0813 07:06:54.120247 2131 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-5-1812e6c6f4\" not found" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:54.120433 kubelet[2131]: E0813 07:06:54.120375 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:55.123144 kubelet[2131]: E0813 07:06:55.121704 2131 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-5-1812e6c6f4\" not found" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:55.123144 kubelet[2131]: E0813 07:06:55.121843 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:55.124774 kubelet[2131]: E0813 07:06:55.124496 2131 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-5-1812e6c6f4\" not found" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:55.124774 kubelet[2131]: E0813 07:06:55.124659 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:55.252175 
kubelet[2131]: I0813 07:06:55.252000 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.635890 kubelet[2131]: E0813 07:06:56.635837 2131 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-5-1812e6c6f4\" not found" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.757159 kubelet[2131]: I0813 07:06:56.757086 2131 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.761873 kubelet[2131]: I0813 07:06:56.759301 2131 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.761873 kubelet[2131]: I0813 07:06:56.759627 2131 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.827898 kubelet[2131]: E0813 07:06:56.827773 2131 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-5-1812e6c6f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.828077 kubelet[2131]: E0813 07:06:56.827984 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:56.828077 kubelet[2131]: E0813 07:06:56.827782 2131 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-5-1812e6c6f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.828077 kubelet[2131]: I0813 07:06:56.828068 2131 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.831509 kubelet[2131]: E0813 07:06:56.831102 2131 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-5-1812e6c6f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.831509 kubelet[2131]: I0813 07:06:56.831147 2131 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.834043 kubelet[2131]: E0813 07:06:56.833980 2131 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.840528 kubelet[2131]: I0813 07:06:56.840467 2131 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.843126 kubelet[2131]: E0813 07:06:56.842838 2131 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:56.843126 kubelet[2131]: E0813 07:06:56.843035 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:57.025278 kubelet[2131]: I0813 07:06:57.024514 2131 apiserver.go:52] "Watching 
apiserver" Aug 13 07:06:57.055939 kubelet[2131]: I0813 07:06:57.055868 2131 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:06:58.718403 systemd[1]: Reloading requested from client PID 2416 ('systemctl') (unit session-7.scope)... Aug 13 07:06:58.718992 systemd[1]: Reloading... Aug 13 07:06:58.845191 zram_generator::config[2455]: No configuration found. Aug 13 07:06:58.998416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:06:59.115540 systemd[1]: Reloading finished in 395 ms. Aug 13 07:06:59.170629 kubelet[2131]: I0813 07:06:59.170590 2131 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:06:59.170840 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:59.186106 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:06:59.186428 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:59.186503 systemd[1]: kubelet.service: Consumed 1.239s CPU time, 128.1M memory peak, 0B memory swap peak. Aug 13 07:06:59.194723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:06:59.359438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:06:59.368010 (kubelet)[2506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:06:59.428181 kubelet[2506]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:06:59.428181 kubelet[2506]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:06:59.428181 kubelet[2506]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:06:59.428783 kubelet[2506]: I0813 07:06:59.428203 2506 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:06:59.437031 kubelet[2506]: I0813 07:06:59.436954 2506 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 07:06:59.437031 kubelet[2506]: I0813 07:06:59.436991 2506 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:06:59.437354 kubelet[2506]: I0813 07:06:59.437327 2506 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 07:06:59.440657 kubelet[2506]: I0813 07:06:59.440550 2506 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 07:06:59.454395 kubelet[2506]: I0813 07:06:59.452579 2506 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:06:59.459291 kubelet[2506]: E0813 07:06:59.459239 2506 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:06:59.459291 kubelet[2506]: I0813 07:06:59.459283 2506 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:06:59.464065 kubelet[2506]: I0813 07:06:59.463999 2506 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:06:59.464638 kubelet[2506]: I0813 07:06:59.464596 2506 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:06:59.464789 kubelet[2506]: I0813 07:06:59.464633 2506 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-5-1812e6c6f4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:06:59.465905 kubelet[2506]: I0813 07:06:59.465852 2506 topology_manager.go:138] 
"Creating topology manager with none policy" Aug 13 07:06:59.465905 kubelet[2506]: I0813 07:06:59.465885 2506 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 07:06:59.466051 kubelet[2506]: I0813 07:06:59.465951 2506 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:06:59.466344 kubelet[2506]: I0813 07:06:59.466281 2506 kubelet.go:480] "Attempting to sync node with API server" Aug 13 07:06:59.466344 kubelet[2506]: I0813 07:06:59.466301 2506 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:06:59.466344 kubelet[2506]: I0813 07:06:59.466329 2506 kubelet.go:386] "Adding apiserver pod source" Aug 13 07:06:59.466512 kubelet[2506]: I0813 07:06:59.466349 2506 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:06:59.476171 kubelet[2506]: I0813 07:06:59.473896 2506 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:06:59.476171 kubelet[2506]: I0813 07:06:59.474493 2506 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 07:06:59.478197 kubelet[2506]: I0813 07:06:59.477846 2506 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:06:59.478197 kubelet[2506]: I0813 07:06:59.477913 2506 server.go:1289] "Started kubelet" Aug 13 07:06:59.482336 kubelet[2506]: I0813 07:06:59.482274 2506 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:06:59.484160 kubelet[2506]: I0813 07:06:59.483825 2506 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:06:59.484460 kubelet[2506]: I0813 07:06:59.484423 2506 server.go:317] "Adding debug handlers to kubelet server" Aug 13 07:06:59.492703 kubelet[2506]: I0813 07:06:59.492615 2506 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:06:59.500931 kubelet[2506]: I0813 07:06:59.500496 2506 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:06:59.504253 kubelet[2506]: I0813 07:06:59.501745 2506 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:06:59.504253 kubelet[2506]: E0813 07:06:59.502076 2506 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-5-1812e6c6f4\" not found" Aug 13 07:06:59.504253 kubelet[2506]: I0813 07:06:59.502749 2506 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:06:59.504253 kubelet[2506]: I0813 07:06:59.502952 2506 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:06:59.504802 kubelet[2506]: I0813 07:06:59.504749 2506 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:06:59.513164 kubelet[2506]: I0813 07:06:59.512788 2506 factory.go:223] Registration of the systemd container factory successfully Aug 13 07:06:59.514771 kubelet[2506]: I0813 07:06:59.514729 2506 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:06:59.522461 kubelet[2506]: E0813 07:06:59.522422 2506 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:06:59.522939 kubelet[2506]: I0813 07:06:59.522918 2506 factory.go:223] Registration of the containerd container factory successfully Aug 13 07:06:59.543078 kubelet[2506]: I0813 07:06:59.542829 2506 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 07:06:59.545829 kubelet[2506]: I0813 07:06:59.545709 2506 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 07:06:59.545829 kubelet[2506]: I0813 07:06:59.545748 2506 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 07:06:59.545829 kubelet[2506]: I0813 07:06:59.545778 2506 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 07:06:59.545829 kubelet[2506]: I0813 07:06:59.545787 2506 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:06:59.546486 kubelet[2506]: E0813 07:06:59.546353 2506 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:06:59.584482 kubelet[2506]: I0813 07:06:59.584440 2506 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:06:59.584482 kubelet[2506]: I0813 07:06:59.584462 2506 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:06:59.584482 kubelet[2506]: I0813 07:06:59.584492 2506 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:06:59.584751 kubelet[2506]: I0813 07:06:59.584725 2506 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:06:59.584807 kubelet[2506]: I0813 07:06:59.584741 2506 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:06:59.584807 kubelet[2506]: I0813 07:06:59.584767 2506 policy_none.go:49] "None policy: Start" Aug 13 07:06:59.584807 kubelet[2506]: I0813 07:06:59.584780 2506 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:06:59.584807 kubelet[2506]: I0813 07:06:59.584793 2506 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:06:59.584967 kubelet[2506]: I0813 07:06:59.584928 2506 state_mem.go:75] "Updated machine memory state" Aug 13 07:06:59.591806 kubelet[2506]: E0813 07:06:59.591762 2506 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:06:59.592030 kubelet[2506]: I0813 07:06:59.592004 2506 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:06:59.592161 kubelet[2506]: I0813 07:06:59.592031 2506 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:06:59.593057 kubelet[2506]: I0813 07:06:59.592944 2506 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:06:59.596717 kubelet[2506]: E0813 07:06:59.594921 2506 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 07:06:59.648481 kubelet[2506]: I0813 07:06:59.647467 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.648481 kubelet[2506]: I0813 07:06:59.647519 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.648481 kubelet[2506]: I0813 07:06:59.648000 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.657694 kubelet[2506]: I0813 07:06:59.657634 2506 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:06:59.657997 kubelet[2506]: I0813 07:06:59.657786 2506 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:06:59.659054 kubelet[2506]: I0813 07:06:59.658897 2506 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:06:59.694906 kubelet[2506]: I0813 07:06:59.694828 2506 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.708159 kubelet[2506]: I0813 07:06:59.707745 2506 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.708159 kubelet[2506]: I0813 07:06:59.707825 2506 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.804053 kubelet[2506]: I0813 07:06:59.803947 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ab5c7e89768c6e2eaffed68125d4147-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-5-1812e6c6f4\" (UID: \"2ab5c7e89768c6e2eaffed68125d4147\") " pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.804915 kubelet[2506]: I0813 07:06:59.804690 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ab5c7e89768c6e2eaffed68125d4147-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-5-1812e6c6f4\" (UID: \"2ab5c7e89768c6e2eaffed68125d4147\") " pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.804915 kubelet[2506]: I0813 07:06:59.804767 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07002c9df48433749116566599c452d9-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" (UID: \"07002c9df48433749116566599c452d9\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.804915 kubelet[2506]: I0813 07:06:59.804798 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07002c9df48433749116566599c452d9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" (UID: \"07002c9df48433749116566599c452d9\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.804915 kubelet[2506]: I0813 07:06:59.804858 2506 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07002c9df48433749116566599c452d9-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" (UID: \"07002c9df48433749116566599c452d9\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.804915 kubelet[2506]: I0813 07:06:59.804883 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a4b55e3e32c2b2cc532c7c0ac006e41-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-5-1812e6c6f4\" (UID: \"1a4b55e3e32c2b2cc532c7c0ac006e41\") " pod="kube-system/kube-scheduler-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.805352 kubelet[2506]: I0813 07:06:59.805206 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ab5c7e89768c6e2eaffed68125d4147-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-5-1812e6c6f4\" (UID: \"2ab5c7e89768c6e2eaffed68125d4147\") " pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.805352 kubelet[2506]: I0813 07:06:59.805263 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07002c9df48433749116566599c452d9-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" (UID: \"07002c9df48433749116566599c452d9\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.805352 kubelet[2506]: I0813 07:06:59.805285 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07002c9df48433749116566599c452d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" (UID: \"07002c9df48433749116566599c452d9\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:06:59.959440 kubelet[2506]: E0813 07:06:59.958436 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:59.959440 kubelet[2506]: E0813 07:06:59.958729 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:06:59.960840 kubelet[2506]: E0813 07:06:59.960608 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:00.471905 kubelet[2506]: I0813 07:07:00.471835 2506 apiserver.go:52] "Watching apiserver" Aug 13 07:07:00.503606 kubelet[2506]: I0813 07:07:00.503542 2506 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:07:00.571106 kubelet[2506]: I0813 07:07:00.571043 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:00.571565 kubelet[2506]: I0813 07:07:00.571553 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:00.572047 kubelet[2506]: I0813 07:07:00.571981 2506 kubelet.go:3309] "Creating a 
mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:00.583698 kubelet[2506]: I0813 07:07:00.582245 2506 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:07:00.583698 kubelet[2506]: E0813 07:07:00.582313 2506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-5-1812e6c6f4\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:00.583698 kubelet[2506]: E0813 07:07:00.582502 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:00.590156 kubelet[2506]: I0813 07:07:00.590095 2506 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:07:00.590498 kubelet[2506]: E0813 07:07:00.590474 2506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-5-1812e6c6f4\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:00.591759 kubelet[2506]: E0813 07:07:00.591688 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:00.594183 kubelet[2506]: I0813 07:07:00.592911 2506 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:07:00.594183 kubelet[2506]: E0813 07:07:00.592997 2506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-5-1812e6c6f4\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:00.594183 kubelet[2506]: E0813 07:07:00.593238 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:00.684441 kubelet[2506]: I0813 07:07:00.684336 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-5-1812e6c6f4" podStartSLOduration=1.68430978 podStartE2EDuration="1.68430978s" podCreationTimestamp="2025-08-13 07:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:00.641533631 +0000 UTC m=+1.265795276" watchObservedRunningTime="2025-08-13 07:07:00.68430978 +0000 UTC m=+1.308571430" Aug 13 07:07:00.709168 kubelet[2506]: I0813 07:07:00.706445 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-5-1812e6c6f4" podStartSLOduration=1.706425286 podStartE2EDuration="1.706425286s" podCreationTimestamp="2025-08-13 07:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:00.685233116 +0000 UTC m=+1.309494765" watchObservedRunningTime="2025-08-13 07:07:00.706425286 +0000 UTC m=+1.330686935" Aug 13 07:07:00.728488 kubelet[2506]: I0813 07:07:00.728307 2506 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-5-1812e6c6f4" podStartSLOduration=1.728288915 podStartE2EDuration="1.728288915s" podCreationTimestamp="2025-08-13 07:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:00.706736894 +0000 UTC m=+1.330998543" watchObservedRunningTime="2025-08-13 07:07:00.728288915 +0000 UTC m=+1.352550543" Aug 13 07:07:01.574419 kubelet[2506]: E0813 07:07:01.573797 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:01.574419 kubelet[2506]: E0813 07:07:01.573817 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:01.574419 kubelet[2506]: E0813 07:07:01.574296 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:02.576184 kubelet[2506]: E0813 07:07:02.576042 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:03.702471 kubelet[2506]: E0813 07:07:03.702432 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:04.580590 kubelet[2506]: E0813 07:07:04.580468 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:05.180014 kubelet[2506]: I0813 07:07:05.179980 2506 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:07:05.181049 containerd[1456]: time="2025-08-13T07:07:05.181003557Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:07:05.182038 kubelet[2506]: I0813 07:07:05.181477 2506 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:07:05.581750 kubelet[2506]: E0813 07:07:05.581592 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:06.246094 systemd[1]: Created slice kubepods-besteffort-podbd25d0de_dbe0_4a11_a443_a7a90a2021e4.slice - libcontainer container kubepods-besteffort-podbd25d0de_dbe0_4a11_a443_a7a90a2021e4.slice. 
Aug 13 07:07:06.343911 kubelet[2506]: I0813 07:07:06.343858 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd25d0de-dbe0-4a11-a443-a7a90a2021e4-kube-proxy\") pod \"kube-proxy-94j2b\" (UID: \"bd25d0de-dbe0-4a11-a443-a7a90a2021e4\") " pod="kube-system/kube-proxy-94j2b" Aug 13 07:07:06.344685 kubelet[2506]: I0813 07:07:06.344524 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd25d0de-dbe0-4a11-a443-a7a90a2021e4-lib-modules\") pod \"kube-proxy-94j2b\" (UID: \"bd25d0de-dbe0-4a11-a443-a7a90a2021e4\") " pod="kube-system/kube-proxy-94j2b" Aug 13 07:07:06.344685 kubelet[2506]: I0813 07:07:06.344565 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd25d0de-dbe0-4a11-a443-a7a90a2021e4-xtables-lock\") pod \"kube-proxy-94j2b\" (UID: \"bd25d0de-dbe0-4a11-a443-a7a90a2021e4\") " pod="kube-system/kube-proxy-94j2b" Aug 13 07:07:06.344685 kubelet[2506]: I0813 07:07:06.344604 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcj6s\" (UniqueName: \"kubernetes.io/projected/bd25d0de-dbe0-4a11-a443-a7a90a2021e4-kube-api-access-gcj6s\") pod \"kube-proxy-94j2b\" (UID: \"bd25d0de-dbe0-4a11-a443-a7a90a2021e4\") " pod="kube-system/kube-proxy-94j2b" Aug 13 07:07:06.398719 systemd[1]: Created slice kubepods-besteffort-podde9a4a42_bd19_41b3_b7d3_7019b4d2e77a.slice - libcontainer container kubepods-besteffort-podde9a4a42_bd19_41b3_b7d3_7019b4d2e77a.slice. Aug 13 07:07:06.545893 kubelet[2506]: I0813 07:07:06.545736 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de9a4a42-bd19-41b3-b7d3-7019b4d2e77a-var-lib-calico\") pod \"tigera-operator-747864d56d-z9wjr\" (UID: \"de9a4a42-bd19-41b3-b7d3-7019b4d2e77a\") " pod="tigera-operator/tigera-operator-747864d56d-z9wjr" Aug 13 07:07:06.545893 kubelet[2506]: I0813 07:07:06.545790 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjcj\" (UniqueName: \"kubernetes.io/projected/de9a4a42-bd19-41b3-b7d3-7019b4d2e77a-kube-api-access-kcjcj\") pod \"tigera-operator-747864d56d-z9wjr\" (UID: \"de9a4a42-bd19-41b3-b7d3-7019b4d2e77a\") " pod="tigera-operator/tigera-operator-747864d56d-z9wjr" Aug 13 07:07:06.557409 kubelet[2506]: E0813 07:07:06.557087 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:06.558153 containerd[1456]: time="2025-08-13T07:07:06.558075421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-94j2b,Uid:bd25d0de-dbe0-4a11-a443-a7a90a2021e4,Namespace:kube-system,Attempt:0,}" Aug 13 07:07:06.593738 containerd[1456]: time="2025-08-13T07:07:06.593371387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:06.593738 containerd[1456]: time="2025-08-13T07:07:06.593450762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:06.593738 containerd[1456]: time="2025-08-13T07:07:06.593483965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:06.593738 containerd[1456]: time="2025-08-13T07:07:06.593629210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:06.623021 systemd[1]: run-containerd-runc-k8s.io-4deb98a30e9a6478c881fe40ece73827550f83051ae621953909f1786728c452-runc.PbCrvx.mount: Deactivated successfully. Aug 13 07:07:06.634385 systemd[1]: Started cri-containerd-4deb98a30e9a6478c881fe40ece73827550f83051ae621953909f1786728c452.scope - libcontainer container 4deb98a30e9a6478c881fe40ece73827550f83051ae621953909f1786728c452. Aug 13 07:07:06.677507 containerd[1456]: time="2025-08-13T07:07:06.677455268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-94j2b,Uid:bd25d0de-dbe0-4a11-a443-a7a90a2021e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4deb98a30e9a6478c881fe40ece73827550f83051ae621953909f1786728c452\"" Aug 13 07:07:06.678763 kubelet[2506]: E0813 07:07:06.678735 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:06.685465 containerd[1456]: time="2025-08-13T07:07:06.685413061Z" level=info msg="CreateContainer within sandbox \"4deb98a30e9a6478c881fe40ece73827550f83051ae621953909f1786728c452\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:07:06.701520 containerd[1456]: time="2025-08-13T07:07:06.701352928Z" level=info msg="CreateContainer within sandbox \"4deb98a30e9a6478c881fe40ece73827550f83051ae621953909f1786728c452\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e0acb3e7125ef29fc539d8f499d138a95c88984fe4f21167eeb1f33779d9d2f9\"" Aug 13 07:07:06.702867 containerd[1456]: time="2025-08-13T07:07:06.702480621Z" level=info msg="StartContainer for \"e0acb3e7125ef29fc539d8f499d138a95c88984fe4f21167eeb1f33779d9d2f9\"" Aug 13 07:07:06.702867 containerd[1456]: time="2025-08-13T07:07:06.702524127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-z9wjr,Uid:de9a4a42-bd19-41b3-b7d3-7019b4d2e77a,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:07:06.745115 containerd[1456]: time="2025-08-13T07:07:06.744819957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:06.745697 containerd[1456]: time="2025-08-13T07:07:06.745539572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:06.745697 containerd[1456]: time="2025-08-13T07:07:06.745563990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:06.746823 containerd[1456]: time="2025-08-13T07:07:06.746466945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:06.750410 systemd[1]: Started cri-containerd-e0acb3e7125ef29fc539d8f499d138a95c88984fe4f21167eeb1f33779d9d2f9.scope - libcontainer container e0acb3e7125ef29fc539d8f499d138a95c88984fe4f21167eeb1f33779d9d2f9. 
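The sandbox and container lines above are kubelet driving containerd over the CRI gRPC API: RunPodSandbox returns the sandbox id (4deb98…), then CreateContainer and StartContainer run kube-proxy inside it. A minimal sketch of the same round-trip, assuming the stock containerd socket path; a real PodSandboxConfig also carries log paths, DNS config, and Linux security options that kubelet fills in and this sketch omits:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Stock containerd CRI socket on Flatcar; adjust if relocated.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        client := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Mirror the RunPodSandbox call kubelet makes for kube-proxy-94j2b;
        // name, UID, and namespace are taken from the log lines above.
        resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-proxy-94j2b",
                    Uid:       "bd25d0de-dbe0-4a11-a443-a7a90a2021e4",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }

The "loading plugin io.containerd.runc.v2 …" bursts interleaved here are the runc shim initializing for each new sandbox, not the main containerd daemon restarting.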
Aug 13 07:07:06.771464 systemd[1]: Started cri-containerd-3898642fa58ddb87080a67b1d8e70b4a5fd57bd64c640122f395298ee74b6f87.scope - libcontainer container 3898642fa58ddb87080a67b1d8e70b4a5fd57bd64c640122f395298ee74b6f87. Aug 13 07:07:06.810711 containerd[1456]: time="2025-08-13T07:07:06.809918016Z" level=info msg="StartContainer for \"e0acb3e7125ef29fc539d8f499d138a95c88984fe4f21167eeb1f33779d9d2f9\" returns successfully" Aug 13 07:07:06.862600 containerd[1456]: time="2025-08-13T07:07:06.862489817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-z9wjr,Uid:de9a4a42-bd19-41b3-b7d3-7019b4d2e77a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3898642fa58ddb87080a67b1d8e70b4a5fd57bd64c640122f395298ee74b6f87\"" Aug 13 07:07:06.867044 containerd[1456]: time="2025-08-13T07:07:06.866749511Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:07:07.593545 kubelet[2506]: E0813 07:07:07.593504 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:07.609640 kubelet[2506]: I0813 07:07:07.609079 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-94j2b" podStartSLOduration=1.609058423 podStartE2EDuration="1.609058423s" podCreationTimestamp="2025-08-13 07:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:07.607641732 +0000 UTC m=+8.231903381" watchObservedRunningTime="2025-08-13 07:07:07.609058423 +0000 UTC m=+8.233320071" Aug 13 07:07:08.213643 kubelet[2506]: E0813 07:07:08.213559 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:08.342322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886745763.mount: Deactivated successfully. 
Aug 13 07:07:08.599383 kubelet[2506]: E0813 07:07:08.598357 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:09.117148 containerd[1456]: time="2025-08-13T07:07:09.116942938Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:09.118210 containerd[1456]: time="2025-08-13T07:07:09.117956388Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:07:09.118705 containerd[1456]: time="2025-08-13T07:07:09.118660859Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:09.122264 containerd[1456]: time="2025-08-13T07:07:09.122103178Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:09.123852 containerd[1456]: time="2025-08-13T07:07:09.122937656Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.256143408s" Aug 13 07:07:09.123852 containerd[1456]: time="2025-08-13T07:07:09.122986461Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:07:09.129126 containerd[1456]: time="2025-08-13T07:07:09.129067775Z" level=info msg="CreateContainer within sandbox \"3898642fa58ddb87080a67b1d8e70b4a5fd57bd64c640122f395298ee74b6f87\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:07:09.142706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825496719.mount: Deactivated successfully. Aug 13 07:07:09.146769 containerd[1456]: time="2025-08-13T07:07:09.146706236Z" level=info msg="CreateContainer within sandbox \"3898642fa58ddb87080a67b1d8e70b4a5fd57bd64c640122f395298ee74b6f87\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d7bd33ed25eeca8a6c4ba72742030497fb97fd3db1b60563e44c6b3e03c3ee41\"" Aug 13 07:07:09.147917 containerd[1456]: time="2025-08-13T07:07:09.147853284Z" level=info msg="StartContainer for \"d7bd33ed25eeca8a6c4ba72742030497fb97fd3db1b60563e44c6b3e03c3ee41\"" Aug 13 07:07:09.189480 systemd[1]: Started cri-containerd-d7bd33ed25eeca8a6c4ba72742030497fb97fd3db1b60563e44c6b3e03c3ee41.scope - libcontainer container d7bd33ed25eeca8a6c4ba72742030497fb97fd3db1b60563e44c6b3e03c3ee41. 
Aug 13 07:07:09.235810 containerd[1456]: time="2025-08-13T07:07:09.235755760Z" level=info msg="StartContainer for \"d7bd33ed25eeca8a6c4ba72742030497fb97fd3db1b60563e44c6b3e03c3ee41\" returns successfully" Aug 13 07:07:11.718160 kubelet[2506]: E0813 07:07:11.717626 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:11.754248 kubelet[2506]: I0813 07:07:11.753794 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-z9wjr" podStartSLOduration=3.493425709 podStartE2EDuration="5.75366541s" podCreationTimestamp="2025-08-13 07:07:06 +0000 UTC" firstStartedPulling="2025-08-13 07:07:06.864650906 +0000 UTC m=+7.488912532" lastFinishedPulling="2025-08-13 07:07:09.124890591 +0000 UTC m=+9.749152233" observedRunningTime="2025-08-13 07:07:09.615338683 +0000 UTC m=+10.239600337" watchObservedRunningTime="2025-08-13 07:07:11.75366541 +0000 UTC m=+12.377927059" Aug 13 07:07:12.960784 systemd[1]: cri-containerd-d7bd33ed25eeca8a6c4ba72742030497fb97fd3db1b60563e44c6b3e03c3ee41.scope: Deactivated successfully. Aug 13 07:07:13.014302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7bd33ed25eeca8a6c4ba72742030497fb97fd3db1b60563e44c6b3e03c3ee41-rootfs.mount: Deactivated successfully. Aug 13 07:07:13.097571 containerd[1456]: time="2025-08-13T07:07:13.063207897Z" level=info msg="shim disconnected" id=d7bd33ed25eeca8a6c4ba72742030497fb97fd3db1b60563e44c6b3e03c3ee41 namespace=k8s.io Aug 13 07:07:13.097571 containerd[1456]: time="2025-08-13T07:07:13.097534385Z" level=warning msg="cleaning up after shim disconnected" id=d7bd33ed25eeca8a6c4ba72742030497fb97fd3db1b60563e44c6b3e03c3ee41 namespace=k8s.io Aug 13 07:07:13.100263 containerd[1456]: time="2025-08-13T07:07:13.098192362Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:07:13.617303 kubelet[2506]: I0813 07:07:13.616755 2506 scope.go:117] "RemoveContainer" containerID="d7bd33ed25eeca8a6c4ba72742030497fb97fd3db1b60563e44c6b3e03c3ee41" Aug 13 07:07:13.652530 containerd[1456]: time="2025-08-13T07:07:13.652476486Z" level=info msg="CreateContainer within sandbox \"3898642fa58ddb87080a67b1d8e70b4a5fd57bd64c640122f395298ee74b6f87\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Aug 13 07:07:13.672428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590815203.mount: Deactivated successfully. Aug 13 07:07:13.675209 containerd[1456]: time="2025-08-13T07:07:13.674980252Z" level=info msg="CreateContainer within sandbox \"3898642fa58ddb87080a67b1d8e70b4a5fd57bd64c640122f395298ee74b6f87\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c22b44451524c630bc38be609a01bd7aa5c2a335eb5d0775d07bda8ac8b3de20\"" Aug 13 07:07:13.679286 containerd[1456]: time="2025-08-13T07:07:13.678343257Z" level=info msg="StartContainer for \"c22b44451524c630bc38be609a01bd7aa5c2a335eb5d0775d07bda8ac8b3de20\"" Aug 13 07:07:13.737475 systemd[1]: Started cri-containerd-c22b44451524c630bc38be609a01bd7aa5c2a335eb5d0775d07bda8ac8b3de20.scope - libcontainer container c22b44451524c630bc38be609a01bd7aa5c2a335eb5d0775d07bda8ac8b3de20. 
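The sequence above is a container restart in place: the tigera-operator container (d7bd33…) exits a few seconds after starting, systemd deactivates its scope, containerd reports the shim disconnected, and kubelet removes the dead container record (the scope.go RemoveContainer line) and creates a replacement in the same sandbox with the attempt counter bumped to 1 (c22b44…). A sketch of inspecting that state over CRI, using the same assumed socket as the earlier sketch and the sandbox id from this log:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        client := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // List the containers inside the tigera-operator sandbox; after the
        // restart this shows the replacement with attempt=1, the dead attempt 0
        // having been garbage-collected by kubelet's RemoveContainer above.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{
                PodSandboxId: "3898642fa58ddb87080a67b1d8e70b4a5fd57bd64c640122f395298ee74b6f87",
            },
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s attempt=%d state=%s\n",
                c.Metadata.Name, c.Metadata.Attempt, c.State)
        }
    }

Note that the sandbox itself survives the restart; only the container is recreated, which is why no new RunPodSandbox appears below.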
Aug 13 07:07:13.797757 containerd[1456]: time="2025-08-13T07:07:13.797597029Z" level=info msg="StartContainer for \"c22b44451524c630bc38be609a01bd7aa5c2a335eb5d0775d07bda8ac8b3de20\" returns successfully" Aug 13 07:07:14.665989 update_engine[1449]: I20250813 07:07:14.665813 1449 update_attempter.cc:509] Updating boot flags... Aug 13 07:07:14.706166 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2949) Aug 13 07:07:14.783325 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2951) Aug 13 07:07:14.843202 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2951) Aug 13 07:07:16.454958 sudo[1653]: pam_unix(sudo:session): session closed for user root Aug 13 07:07:16.459199 sshd[1650]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:16.463961 systemd[1]: sshd@6-64.227.105.235:22-139.178.89.65:40378.service: Deactivated successfully. Aug 13 07:07:16.466953 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:07:16.467296 systemd[1]: session-7.scope: Consumed 5.857s CPU time, 145.8M memory peak, 0B memory swap peak. Aug 13 07:07:16.468550 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:07:16.470332 systemd-logind[1448]: Removed session 7. Aug 13 07:07:21.352046 systemd[1]: Created slice kubepods-besteffort-pod0bbcef9a_9669_4aa0_9c57_b34e15b6bc25.slice - libcontainer container kubepods-besteffort-pod0bbcef9a_9669_4aa0_9c57_b34e15b6bc25.slice. Aug 13 07:07:21.361963 kubelet[2506]: I0813 07:07:21.361725 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzsts\" (UniqueName: \"kubernetes.io/projected/0bbcef9a-9669-4aa0-9c57-b34e15b6bc25-kube-api-access-kzsts\") pod \"calico-typha-5b959987f4-jm4pf\" (UID: \"0bbcef9a-9669-4aa0-9c57-b34e15b6bc25\") " pod="calico-system/calico-typha-5b959987f4-jm4pf" Aug 13 07:07:21.361963 kubelet[2506]: I0813 07:07:21.361771 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bbcef9a-9669-4aa0-9c57-b34e15b6bc25-tigera-ca-bundle\") pod \"calico-typha-5b959987f4-jm4pf\" (UID: \"0bbcef9a-9669-4aa0-9c57-b34e15b6bc25\") " pod="calico-system/calico-typha-5b959987f4-jm4pf" Aug 13 07:07:21.361963 kubelet[2506]: I0813 07:07:21.361791 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0bbcef9a-9669-4aa0-9c57-b34e15b6bc25-typha-certs\") pod \"calico-typha-5b959987f4-jm4pf\" (UID: \"0bbcef9a-9669-4aa0-9c57-b34e15b6bc25\") " pod="calico-system/calico-typha-5b959987f4-jm4pf" Aug 13 07:07:21.658549 kubelet[2506]: E0813 07:07:21.658496 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:21.659949 containerd[1456]: time="2025-08-13T07:07:21.659523677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b959987f4-jm4pf,Uid:0bbcef9a-9669-4aa0-9c57-b34e15b6bc25,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:21.719420 containerd[1456]: time="2025-08-13T07:07:21.718464490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:21.719420 containerd[1456]: time="2025-08-13T07:07:21.718662092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:21.719420 containerd[1456]: time="2025-08-13T07:07:21.718754152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:21.719228 systemd[1]: Created slice kubepods-besteffort-pod9d0091f5_f574_4ca5_9ea1_44f9c2d2731b.slice - libcontainer container kubepods-besteffort-pod9d0091f5_f574_4ca5_9ea1_44f9c2d2731b.slice. Aug 13 07:07:21.721230 containerd[1456]: time="2025-08-13T07:07:21.719214413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:21.764626 kubelet[2506]: I0813 07:07:21.764557 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-cni-log-dir\") pod \"calico-node-4d7bw\" (UID: \"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.765264 kubelet[2506]: I0813 07:07:21.764647 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-node-certs\") pod \"calico-node-4d7bw\" (UID: \"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.765264 kubelet[2506]: I0813 07:07:21.764701 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7jwb\" (UniqueName: \"kubernetes.io/projected/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-kube-api-access-j7jwb\") pod \"calico-node-4d7bw\" (UID: \"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.765264 kubelet[2506]: I0813 07:07:21.764720 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-lib-modules\") pod \"calico-node-4d7bw\" (UID: \"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.765264 kubelet[2506]: I0813 07:07:21.764736 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-var-lib-calico\") pod \"calico-node-4d7bw\" (UID: \"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.765264 kubelet[2506]: I0813 07:07:21.764754 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-flexvol-driver-host\") pod \"calico-node-4d7bw\" (UID: \"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.765552 kubelet[2506]: I0813 07:07:21.764796 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-var-run-calico\") pod \"calico-node-4d7bw\" (UID: 
\"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.765552 kubelet[2506]: I0813 07:07:21.764858 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-cni-bin-dir\") pod \"calico-node-4d7bw\" (UID: \"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.765552 kubelet[2506]: I0813 07:07:21.764890 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-tigera-ca-bundle\") pod \"calico-node-4d7bw\" (UID: \"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.765552 kubelet[2506]: I0813 07:07:21.764921 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-xtables-lock\") pod \"calico-node-4d7bw\" (UID: \"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.765552 kubelet[2506]: I0813 07:07:21.764989 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-cni-net-dir\") pod \"calico-node-4d7bw\" (UID: \"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.765888 kubelet[2506]: I0813 07:07:21.765019 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9d0091f5-f574-4ca5-9ea1-44f9c2d2731b-policysync\") pod \"calico-node-4d7bw\" (UID: \"9d0091f5-f574-4ca5-9ea1-44f9c2d2731b\") " pod="calico-system/calico-node-4d7bw" Aug 13 07:07:21.806378 systemd[1]: Started cri-containerd-6de5dcee49ce6988f000cd2e518c393a5a7a66e45d5216008f9848630489abd4.scope - libcontainer container 6de5dcee49ce6988f000cd2e518c393a5a7a66e45d5216008f9848630489abd4. Aug 13 07:07:21.878472 kubelet[2506]: E0813 07:07:21.878427 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.878472 kubelet[2506]: W0813 07:07:21.878513 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.878472 kubelet[2506]: E0813 07:07:21.878554 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.893971 kubelet[2506]: E0813 07:07:21.893935 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.893971 kubelet[2506]: W0813 07:07:21.893961 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.893971 kubelet[2506]: E0813 07:07:21.893984 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:21.898572 kubelet[2506]: E0813 07:07:21.898514 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2xth" podUID="94837213-7248-4886-ac34-73ab8173c672" Aug 13 07:07:21.951868 kubelet[2506]: E0813 07:07:21.951731 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.951868 kubelet[2506]: W0813 07:07:21.951769 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.951868 kubelet[2506]: E0813 07:07:21.951801 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.952605 kubelet[2506]: E0813 07:07:21.952078 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.952605 kubelet[2506]: W0813 07:07:21.952089 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.952605 kubelet[2506]: E0813 07:07:21.952102 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.952605 kubelet[2506]: E0813 07:07:21.952296 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.952605 kubelet[2506]: W0813 07:07:21.952305 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.952605 kubelet[2506]: E0813 07:07:21.952314 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.954813 kubelet[2506]: E0813 07:07:21.952617 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.954813 kubelet[2506]: W0813 07:07:21.952631 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.954813 kubelet[2506]: E0813 07:07:21.952647 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:21.954813 kubelet[2506]: E0813 07:07:21.953074 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.954813 kubelet[2506]: W0813 07:07:21.953086 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.954813 kubelet[2506]: E0813 07:07:21.953099 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.954813 kubelet[2506]: E0813 07:07:21.953673 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.954813 kubelet[2506]: W0813 07:07:21.953685 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.954813 kubelet[2506]: E0813 07:07:21.953697 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.955467 kubelet[2506]: E0813 07:07:21.955358 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.955467 kubelet[2506]: W0813 07:07:21.955375 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.955467 kubelet[2506]: E0813 07:07:21.955391 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.955901 kubelet[2506]: E0813 07:07:21.955643 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.955901 kubelet[2506]: W0813 07:07:21.955655 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.955901 kubelet[2506]: E0813 07:07:21.955666 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.955901 kubelet[2506]: E0813 07:07:21.955860 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.955901 kubelet[2506]: W0813 07:07:21.955868 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.955901 kubelet[2506]: E0813 07:07:21.955877 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:21.956166 kubelet[2506]: E0813 07:07:21.956108 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.956166 kubelet[2506]: W0813 07:07:21.956118 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.956166 kubelet[2506]: E0813 07:07:21.956162 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.956353 kubelet[2506]: E0813 07:07:21.956339 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.956353 kubelet[2506]: W0813 07:07:21.956350 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.956466 kubelet[2506]: E0813 07:07:21.956361 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.956584 kubelet[2506]: E0813 07:07:21.956571 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.956584 kubelet[2506]: W0813 07:07:21.956582 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.956661 kubelet[2506]: E0813 07:07:21.956593 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.957156 kubelet[2506]: E0813 07:07:21.956992 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.957156 kubelet[2506]: W0813 07:07:21.957003 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.957156 kubelet[2506]: E0813 07:07:21.957018 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.958297 kubelet[2506]: E0813 07:07:21.957290 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.958297 kubelet[2506]: W0813 07:07:21.957299 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.958297 kubelet[2506]: E0813 07:07:21.957310 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:21.958297 kubelet[2506]: E0813 07:07:21.957481 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.958297 kubelet[2506]: W0813 07:07:21.957489 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.958297 kubelet[2506]: E0813 07:07:21.957497 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.958898 kubelet[2506]: E0813 07:07:21.958873 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.958898 kubelet[2506]: W0813 07:07:21.958891 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.959006 kubelet[2506]: E0813 07:07:21.958906 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.959921 kubelet[2506]: E0813 07:07:21.959906 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.960005 kubelet[2506]: W0813 07:07:21.959924 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.960005 kubelet[2506]: E0813 07:07:21.959937 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.960263 kubelet[2506]: E0813 07:07:21.960246 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.960263 kubelet[2506]: W0813 07:07:21.960261 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.960363 kubelet[2506]: E0813 07:07:21.960275 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.960566 kubelet[2506]: E0813 07:07:21.960523 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.960566 kubelet[2506]: W0813 07:07:21.960542 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.960566 kubelet[2506]: E0813 07:07:21.960554 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:21.961281 kubelet[2506]: E0813 07:07:21.961204 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.961281 kubelet[2506]: W0813 07:07:21.961222 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.961281 kubelet[2506]: E0813 07:07:21.961239 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.967330 kubelet[2506]: E0813 07:07:21.967287 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.967330 kubelet[2506]: W0813 07:07:21.967314 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.967330 kubelet[2506]: E0813 07:07:21.967338 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.967567 kubelet[2506]: I0813 07:07:21.967387 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94837213-7248-4886-ac34-73ab8173c672-kubelet-dir\") pod \"csi-node-driver-n2xth\" (UID: \"94837213-7248-4886-ac34-73ab8173c672\") " pod="calico-system/csi-node-driver-n2xth" Aug 13 07:07:21.968692 kubelet[2506]: E0813 07:07:21.968656 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.968692 kubelet[2506]: W0813 07:07:21.968682 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.968820 kubelet[2506]: E0813 07:07:21.968704 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.968820 kubelet[2506]: I0813 07:07:21.968745 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94837213-7248-4886-ac34-73ab8173c672-registration-dir\") pod \"csi-node-driver-n2xth\" (UID: \"94837213-7248-4886-ac34-73ab8173c672\") " pod="calico-system/csi-node-driver-n2xth" Aug 13 07:07:21.969354 kubelet[2506]: E0813 07:07:21.969318 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.969354 kubelet[2506]: W0813 07:07:21.969341 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.969354 kubelet[2506]: E0813 07:07:21.969358 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:21.970683 kubelet[2506]: E0813 07:07:21.970519 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.970683 kubelet[2506]: W0813 07:07:21.970538 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.970683 kubelet[2506]: E0813 07:07:21.970555 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.971307 kubelet[2506]: E0813 07:07:21.971205 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.971307 kubelet[2506]: W0813 07:07:21.971221 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.971307 kubelet[2506]: E0813 07:07:21.971235 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.971307 kubelet[2506]: I0813 07:07:21.971274 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94837213-7248-4886-ac34-73ab8173c672-socket-dir\") pod \"csi-node-driver-n2xth\" (UID: \"94837213-7248-4886-ac34-73ab8173c672\") " pod="calico-system/csi-node-driver-n2xth" Aug 13 07:07:21.971636 kubelet[2506]: E0813 07:07:21.971572 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.971636 kubelet[2506]: W0813 07:07:21.971595 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.971636 kubelet[2506]: E0813 07:07:21.971613 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.971949 kubelet[2506]: E0813 07:07:21.971814 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.971949 kubelet[2506]: W0813 07:07:21.971827 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.971949 kubelet[2506]: E0813 07:07:21.971837 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:21.972425 kubelet[2506]: E0813 07:07:21.972378 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.972425 kubelet[2506]: W0813 07:07:21.972393 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.972425 kubelet[2506]: E0813 07:07:21.972405 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.972697 kubelet[2506]: I0813 07:07:21.972437 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/94837213-7248-4886-ac34-73ab8173c672-varrun\") pod \"csi-node-driver-n2xth\" (UID: \"94837213-7248-4886-ac34-73ab8173c672\") " pod="calico-system/csi-node-driver-n2xth" Aug 13 07:07:21.973202 kubelet[2506]: E0813 07:07:21.973177 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.973202 kubelet[2506]: W0813 07:07:21.973199 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.973427 kubelet[2506]: E0813 07:07:21.973215 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.973427 kubelet[2506]: I0813 07:07:21.973341 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw8zn\" (UniqueName: \"kubernetes.io/projected/94837213-7248-4886-ac34-73ab8173c672-kube-api-access-nw8zn\") pod \"csi-node-driver-n2xth\" (UID: \"94837213-7248-4886-ac34-73ab8173c672\") " pod="calico-system/csi-node-driver-n2xth" Aug 13 07:07:21.973526 kubelet[2506]: E0813 07:07:21.973508 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.973570 kubelet[2506]: W0813 07:07:21.973525 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.973570 kubelet[2506]: E0813 07:07:21.973540 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.975020 kubelet[2506]: E0813 07:07:21.974991 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.975020 kubelet[2506]: W0813 07:07:21.975014 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.975160 kubelet[2506]: E0813 07:07:21.975033 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:21.975514 kubelet[2506]: E0813 07:07:21.975488 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.975514 kubelet[2506]: W0813 07:07:21.975509 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.975617 kubelet[2506]: E0813 07:07:21.975527 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.975853 kubelet[2506]: E0813 07:07:21.975830 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.975853 kubelet[2506]: W0813 07:07:21.975848 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.975979 kubelet[2506]: E0813 07:07:21.975864 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.976716 kubelet[2506]: E0813 07:07:21.976696 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.976716 kubelet[2506]: W0813 07:07:21.976711 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.976958 kubelet[2506]: E0813 07:07:21.976724 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:21.977169 kubelet[2506]: E0813 07:07:21.977153 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:21.977169 kubelet[2506]: W0813 07:07:21.977167 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:21.977245 kubelet[2506]: E0813 07:07:21.977179 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:07:22.022720 containerd[1456]: time="2025-08-13T07:07:22.022592702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b959987f4-jm4pf,Uid:0bbcef9a-9669-4aa0-9c57-b34e15b6bc25,Namespace:calico-system,Attempt:0,} returns sandbox id \"6de5dcee49ce6988f000cd2e518c393a5a7a66e45d5216008f9848630489abd4\"" Aug 13 07:07:22.024431 kubelet[2506]: E0813 07:07:22.024388 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:22.025869 containerd[1456]: time="2025-08-13T07:07:22.025826915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:07:22.027084 containerd[1456]: time="2025-08-13T07:07:22.027040258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4d7bw,Uid:9d0091f5-f574-4ca5-9ea1-44f9c2d2731b,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:22.080355 kubelet[2506]: E0813 07:07:22.078582 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.080355 kubelet[2506]: W0813 07:07:22.078624 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.080355 kubelet[2506]: E0813 07:07:22.078652 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.080355 kubelet[2506]: E0813 07:07:22.079541 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.080355 kubelet[2506]: W0813 07:07:22.079557 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.080355 kubelet[2506]: E0813 07:07:22.079574 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:22.080355 kubelet[2506]: E0813 07:07:22.080034 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:22.080355 kubelet[2506]: W0813 07:07:22.080046 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:22.080355 kubelet[2506]: E0813 07:07:22.080060 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Aug 13 07:07:22.156910 containerd[1456]: time="2025-08-13T07:07:22.154046220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:07:22.157276 containerd[1456]: time="2025-08-13T07:07:22.156881067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:07:22.157276 containerd[1456]: time="2025-08-13T07:07:22.157120516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:07:22.159560 containerd[1456]: time="2025-08-13T07:07:22.159293850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:07:22.194082 systemd[1]: Started cri-containerd-56cde0ecfc0b7c8e0fe410d16e29dac59d8bd67c5a458acc0778da578d6680e1.scope - libcontainer container 56cde0ecfc0b7c8e0fe410d16e29dac59d8bd67c5a458acc0778da578d6680e1.
Aug 13 07:07:22.255997 containerd[1456]: time="2025-08-13T07:07:22.255826749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4d7bw,Uid:9d0091f5-f574-4ca5-9ea1-44f9c2d2731b,Namespace:calico-system,Attempt:0,} returns sandbox id \"56cde0ecfc0b7c8e0fe410d16e29dac59d8bd67c5a458acc0778da578d6680e1\""
Aug 13 07:07:22.478177 systemd[1]: run-containerd-runc-k8s.io-6de5dcee49ce6988f000cd2e518c393a5a7a66e45d5216008f9848630489abd4-runc.JQGsFa.mount: Deactivated successfully.
Aug 13 07:07:23.535362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131451343.mount: Deactivated successfully.
Aug 13 07:07:23.549826 kubelet[2506]: E0813 07:07:23.549403 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2xth" podUID="94837213-7248-4886-ac34-73ab8173c672"
Aug 13 07:07:24.613437 containerd[1456]: time="2025-08-13T07:07:24.613367475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:24.614344 containerd[1456]: time="2025-08-13T07:07:24.614258826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 07:07:24.615160 containerd[1456]: time="2025-08-13T07:07:24.614974874Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:24.617499 containerd[1456]: time="2025-08-13T07:07:24.617190864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:24.617981 containerd[1456]: time="2025-08-13T07:07:24.617948847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.592083443s"
Aug 13 07:07:24.618054 containerd[1456]: time="2025-08-13T07:07:24.617985092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 07:07:24.619368 containerd[1456]: time="2025-08-13T07:07:24.619337739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 07:07:24.641359 containerd[1456]: time="2025-08-13T07:07:24.641268371Z" level=info msg="CreateContainer within sandbox \"6de5dcee49ce6988f000cd2e518c393a5a7a66e45d5216008f9848630489abd4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 07:07:24.690635 containerd[1456]: time="2025-08-13T07:07:24.689866850Z" level=info msg="CreateContainer within sandbox \"6de5dcee49ce6988f000cd2e518c393a5a7a66e45d5216008f9848630489abd4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cd9ced1cdd7e358eaeae6ccdfa8563fc520bea61d948052f51e17b33897c2b53\""
Aug 13 07:07:24.690848 containerd[1456]: time="2025-08-13T07:07:24.690681712Z" level=info msg="StartContainer for \"cd9ced1cdd7e358eaeae6ccdfa8563fc520bea61d948052f51e17b33897c2b53\""
\"cd9ced1cdd7e358eaeae6ccdfa8563fc520bea61d948052f51e17b33897c2b53\"" Aug 13 07:07:24.737432 systemd[1]: Started cri-containerd-cd9ced1cdd7e358eaeae6ccdfa8563fc520bea61d948052f51e17b33897c2b53.scope - libcontainer container cd9ced1cdd7e358eaeae6ccdfa8563fc520bea61d948052f51e17b33897c2b53. Aug 13 07:07:24.802369 containerd[1456]: time="2025-08-13T07:07:24.802249260Z" level=info msg="StartContainer for \"cd9ced1cdd7e358eaeae6ccdfa8563fc520bea61d948052f51e17b33897c2b53\" returns successfully" Aug 13 07:07:25.551300 kubelet[2506]: E0813 07:07:25.548083 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2xth" podUID="94837213-7248-4886-ac34-73ab8173c672" Aug 13 07:07:25.694568 kubelet[2506]: E0813 07:07:25.694527 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:25.710694 kubelet[2506]: I0813 07:07:25.707692 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b959987f4-jm4pf" podStartSLOduration=2.114323116 podStartE2EDuration="4.707671179s" podCreationTimestamp="2025-08-13 07:07:21 +0000 UTC" firstStartedPulling="2025-08-13 07:07:22.025492072 +0000 UTC m=+22.649753712" lastFinishedPulling="2025-08-13 07:07:24.618840136 +0000 UTC m=+25.243101775" observedRunningTime="2025-08-13 07:07:25.707235404 +0000 UTC m=+26.331497049" watchObservedRunningTime="2025-08-13 07:07:25.707671179 +0000 UTC m=+26.331932822" Aug 13 07:07:25.787827 kubelet[2506]: E0813 07:07:25.787585 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:25.787827 kubelet[2506]: W0813 07:07:25.787626 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:25.787827 kubelet[2506]: E0813 07:07:25.787657 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:07:25.788831 kubelet[2506]: E0813 07:07:25.788301 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:07:25.788831 kubelet[2506]: W0813 07:07:25.788331 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:07:25.788831 kubelet[2506]: E0813 07:07:25.788361 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Aug 13 07:07:26.010454 containerd[1456]: time="2025-08-13T07:07:26.010377947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:26.012074 containerd[1456]: time="2025-08-13T07:07:26.011889426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Aug 13 07:07:26.012846 containerd[1456]: time="2025-08-13T07:07:26.012807973Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:26.015208 containerd[1456]: time="2025-08-13T07:07:26.015169089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:26.016697 containerd[1456]: time="2025-08-13T07:07:26.016087074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.396716413s"
Aug 13 07:07:26.016697 containerd[1456]: time="2025-08-13T07:07:26.016127413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Aug 13 07:07:26.020682 containerd[1456]: time="2025-08-13T07:07:26.020645350Z" level=info msg="CreateContainer within sandbox \"56cde0ecfc0b7c8e0fe410d16e29dac59d8bd67c5a458acc0778da578d6680e1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 13 07:07:26.053487 containerd[1456]: time="2025-08-13T07:07:26.053437732Z" level=info msg="CreateContainer within sandbox \"56cde0ecfc0b7c8e0fe410d16e29dac59d8bd67c5a458acc0778da578d6680e1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a7e5bd8e01922d5ebeebf689744216a273d6d2de809e921ac49d9f4192f83403\""
Aug 13 07:07:26.054893 containerd[1456]: time="2025-08-13T07:07:26.054841475Z" level=info msg="StartContainer for \"a7e5bd8e01922d5ebeebf689744216a273d6d2de809e921ac49d9f4192f83403\""
Aug 13 07:07:26.099061 systemd[1]: run-containerd-runc-k8s.io-a7e5bd8e01922d5ebeebf689744216a273d6d2de809e921ac49d9f4192f83403-runc.yj686R.mount: Deactivated successfully.
Aug 13 07:07:26.113494 systemd[1]: Started cri-containerd-a7e5bd8e01922d5ebeebf689744216a273d6d2de809e921ac49d9f4192f83403.scope - libcontainer container a7e5bd8e01922d5ebeebf689744216a273d6d2de809e921ac49d9f4192f83403.
Aug 13 07:07:26.158706 containerd[1456]: time="2025-08-13T07:07:26.158653557Z" level=info msg="StartContainer for \"a7e5bd8e01922d5ebeebf689744216a273d6d2de809e921ac49d9f4192f83403\" returns successfully"
Aug 13 07:07:26.173455 systemd[1]: cri-containerd-a7e5bd8e01922d5ebeebf689744216a273d6d2de809e921ac49d9f4192f83403.scope: Deactivated successfully.
Aug 13 07:07:26.204597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7e5bd8e01922d5ebeebf689744216a273d6d2de809e921ac49d9f4192f83403-rootfs.mount: Deactivated successfully.
Aug 13 07:07:26.209290 containerd[1456]: time="2025-08-13T07:07:26.209214592Z" level=info msg="shim disconnected" id=a7e5bd8e01922d5ebeebf689744216a273d6d2de809e921ac49d9f4192f83403 namespace=k8s.io
Aug 13 07:07:26.209290 containerd[1456]: time="2025-08-13T07:07:26.209289410Z" level=warning msg="cleaning up after shim disconnected" id=a7e5bd8e01922d5ebeebf689744216a273d6d2de809e921ac49d9f4192f83403 namespace=k8s.io
Aug 13 07:07:26.209290 containerd[1456]: time="2025-08-13T07:07:26.209304704Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:07:26.708218 kubelet[2506]: I0813 07:07:26.704779 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 07:07:26.708218 kubelet[2506]: E0813 07:07:26.705740 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:07:26.710225 containerd[1456]: time="2025-08-13T07:07:26.710180256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Aug 13 07:07:27.547211 kubelet[2506]: E0813 07:07:27.546951 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2xth" podUID="94837213-7248-4886-ac34-73ab8173c672"
Aug 13 07:07:29.549687 kubelet[2506]: E0813 07:07:29.549512 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2xth" podUID="94837213-7248-4886-ac34-73ab8173c672"
Aug 13 07:07:30.734167 containerd[1456]: time="2025-08-13T07:07:30.733371157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:30.737210 containerd[1456]: time="2025-08-13T07:07:30.736917882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Aug 13 07:07:30.738166 containerd[1456]: time="2025-08-13T07:07:30.738044323Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:30.742960 containerd[1456]: time="2025-08-13T07:07:30.742911932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:07:30.745571 containerd[1456]: time="2025-08-13T07:07:30.745351723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.035097592s"
Aug 13 07:07:30.745571 containerd[1456]: time="2025-08-13T07:07:30.745438464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Aug 13 07:07:30.754264 containerd[1456]: time="2025-08-13T07:07:30.754202593Z" level=info msg="CreateContainer within sandbox \"56cde0ecfc0b7c8e0fe410d16e29dac59d8bd67c5a458acc0778da578d6680e1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 13 07:07:30.776104 containerd[1456]: time="2025-08-13T07:07:30.775012753Z" level=info msg="CreateContainer within sandbox \"56cde0ecfc0b7c8e0fe410d16e29dac59d8bd67c5a458acc0778da578d6680e1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"64cd12c443630a4666f54603ea724595ffcbeba6054dcfb024c06f24bb6ce9ab\""
Aug 13 07:07:30.777317 containerd[1456]: time="2025-08-13T07:07:30.777264467Z" level=info msg="StartContainer for \"64cd12c443630a4666f54603ea724595ffcbeba6054dcfb024c06f24bb6ce9ab\""
Aug 13 07:07:30.827439 systemd[1]: Started cri-containerd-64cd12c443630a4666f54603ea724595ffcbeba6054dcfb024c06f24bb6ce9ab.scope - libcontainer container 64cd12c443630a4666f54603ea724595ffcbeba6054dcfb024c06f24bb6ce9ab.
Aug 13 07:07:30.869120 containerd[1456]: time="2025-08-13T07:07:30.869054258Z" level=info msg="StartContainer for \"64cd12c443630a4666f54603ea724595ffcbeba6054dcfb024c06f24bb6ce9ab\" returns successfully"
Aug 13 07:07:31.506462 systemd[1]: cri-containerd-64cd12c443630a4666f54603ea724595ffcbeba6054dcfb024c06f24bb6ce9ab.scope: Deactivated successfully.
Aug 13 07:07:31.537167 kubelet[2506]: I0813 07:07:31.536918 2506 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 07:07:31.544566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64cd12c443630a4666f54603ea724595ffcbeba6054dcfb024c06f24bb6ce9ab-rootfs.mount: Deactivated successfully.
Aug 13 07:07:31.552590 containerd[1456]: time="2025-08-13T07:07:31.550744806Z" level=info msg="shim disconnected" id=64cd12c443630a4666f54603ea724595ffcbeba6054dcfb024c06f24bb6ce9ab namespace=k8s.io
Aug 13 07:07:31.552590 containerd[1456]: time="2025-08-13T07:07:31.550911326Z" level=warning msg="cleaning up after shim disconnected" id=64cd12c443630a4666f54603ea724595ffcbeba6054dcfb024c06f24bb6ce9ab namespace=k8s.io
Aug 13 07:07:31.552590 containerd[1456]: time="2025-08-13T07:07:31.550922447Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:07:31.559072 systemd[1]: Created slice kubepods-besteffort-pod94837213_7248_4886_ac34_73ab8173c672.slice - libcontainer container kubepods-besteffort-pod94837213_7248_4886_ac34_73ab8173c672.slice.
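The RunPodSandbox → CreateContainer → StartContainer sequence running through these containerd entries is the standard CRI lifecycle that kubelet drives over containerd's gRPC socket. A hedged sketch of issuing the same three calls with the cri-api Go client follows; the pod metadata is copied from the log, the socket path is containerd's usual endpoint, and most required config (and all error recovery) is elided, so this is an illustration of the protocol rather than how kubelet itself is wired.

```go
// Sketch of the CRI sandbox/container lifecycle seen above, using the
// published k8s.io/cri-api types. Assumes containerd's default CRI socket.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox — mirrors the calico-node-4d7bw entry above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "calico-node-4d7bw", Namespace: "calico-system",
			Uid: "9d0091f5-f574-4ca5-9ea1-44f9c2d2731b", Attempt: 0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer inside that sandbox, then 3. StartContainer.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "install-cni", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/cni:v3.30.2"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		panic(err)
	}
}
```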
Aug 13 07:07:31.567576 containerd[1456]: time="2025-08-13T07:07:31.566916502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2xth,Uid:94837213-7248-4886-ac34-73ab8173c672,Namespace:calico-system,Attempt:0,}"
Aug 13 07:07:31.635811 systemd[1]: Created slice kubepods-besteffort-pod2a94fe9f_df0a_43ab_ad0b_a3eba03e2144.slice - libcontainer container kubepods-besteffort-pod2a94fe9f_df0a_43ab_ad0b_a3eba03e2144.slice.
Aug 13 07:07:31.668204 systemd[1]: Created slice kubepods-besteffort-pod7c4ccec3_f907_4938_9a80_cb54e4ef0fc4.slice - libcontainer container kubepods-besteffort-pod7c4ccec3_f907_4938_9a80_cb54e4ef0fc4.slice.
Aug 13 07:07:31.676168 kubelet[2506]: I0813 07:07:31.674389 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-whisker-backend-key-pair\") pod \"whisker-6ff464d979-6n8l7\" (UID: \"6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929\") " pod="calico-system/whisker-6ff464d979-6n8l7"
Aug 13 07:07:31.676168 kubelet[2506]: I0813 07:07:31.674438 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bbc0eece-7b19-4b32-8aa8-4f52057a212b-calico-apiserver-certs\") pod \"calico-apiserver-6df6784b98-8v5cf\" (UID: \"bbc0eece-7b19-4b32-8aa8-4f52057a212b\") " pod="calico-apiserver/calico-apiserver-6df6784b98-8v5cf"
Aug 13 07:07:31.676168 kubelet[2506]: I0813 07:07:31.674460 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c4ccec3-f907-4938-9a80-cb54e4ef0fc4-calico-apiserver-certs\") pod \"calico-apiserver-6df6784b98-zfzpv\" (UID: \"7c4ccec3-f907-4938-9a80-cb54e4ef0fc4\") " pod="calico-apiserver/calico-apiserver-6df6784b98-zfzpv"
Aug 13 07:07:31.676168 kubelet[2506]: I0813 07:07:31.674478 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94eead8f-716f-4f57-a31e-047b0ab9c02f-config-volume\") pod \"coredns-674b8bbfcf-flgjr\" (UID: \"94eead8f-716f-4f57-a31e-047b0ab9c02f\") " pod="kube-system/coredns-674b8bbfcf-flgjr"
Aug 13 07:07:31.676168 kubelet[2506]: I0813 07:07:31.674504 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmkq7\" (UniqueName: \"kubernetes.io/projected/94eead8f-716f-4f57-a31e-047b0ab9c02f-kube-api-access-fmkq7\") pod \"coredns-674b8bbfcf-flgjr\" (UID: \"94eead8f-716f-4f57-a31e-047b0ab9c02f\") " pod="kube-system/coredns-674b8bbfcf-flgjr"
Aug 13 07:07:31.676781 kubelet[2506]: I0813 07:07:31.674524 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4cmr\" (UniqueName: \"kubernetes.io/projected/7c4ccec3-f907-4938-9a80-cb54e4ef0fc4-kube-api-access-s4cmr\") pod \"calico-apiserver-6df6784b98-zfzpv\" (UID: \"7c4ccec3-f907-4938-9a80-cb54e4ef0fc4\") " pod="calico-apiserver/calico-apiserver-6df6784b98-zfzpv"
Aug 13 07:07:31.676781 kubelet[2506]: I0813 07:07:31.674543 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-whisker-ca-bundle\") pod \"whisker-6ff464d979-6n8l7\" (UID: \"6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929\") " pod="calico-system/whisker-6ff464d979-6n8l7"
pod="calico-system/whisker-6ff464d979-6n8l7" Aug 13 07:07:31.676781 kubelet[2506]: I0813 07:07:31.674575 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a94fe9f-df0a-43ab-ad0b-a3eba03e2144-tigera-ca-bundle\") pod \"calico-kube-controllers-b8dbbdc94-vgwsd\" (UID: \"2a94fe9f-df0a-43ab-ad0b-a3eba03e2144\") " pod="calico-system/calico-kube-controllers-b8dbbdc94-vgwsd" Aug 13 07:07:31.676781 kubelet[2506]: I0813 07:07:31.674609 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkrpv\" (UniqueName: \"kubernetes.io/projected/22db522d-1126-4583-97ae-d9ff192443f7-kube-api-access-wkrpv\") pod \"coredns-674b8bbfcf-qxnq4\" (UID: \"22db522d-1126-4583-97ae-d9ff192443f7\") " pod="kube-system/coredns-674b8bbfcf-qxnq4" Aug 13 07:07:31.676781 kubelet[2506]: I0813 07:07:31.674659 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbqh\" (UniqueName: \"kubernetes.io/projected/bbc0eece-7b19-4b32-8aa8-4f52057a212b-kube-api-access-gpbqh\") pod \"calico-apiserver-6df6784b98-8v5cf\" (UID: \"bbc0eece-7b19-4b32-8aa8-4f52057a212b\") " pod="calico-apiserver/calico-apiserver-6df6784b98-8v5cf" Aug 13 07:07:31.677033 kubelet[2506]: I0813 07:07:31.674682 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mss8\" (UniqueName: \"kubernetes.io/projected/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-kube-api-access-4mss8\") pod \"whisker-6ff464d979-6n8l7\" (UID: \"6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929\") " pod="calico-system/whisker-6ff464d979-6n8l7" Aug 13 07:07:31.677033 kubelet[2506]: I0813 07:07:31.674699 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrqfq\" (UniqueName: \"kubernetes.io/projected/2a94fe9f-df0a-43ab-ad0b-a3eba03e2144-kube-api-access-qrqfq\") pod \"calico-kube-controllers-b8dbbdc94-vgwsd\" (UID: \"2a94fe9f-df0a-43ab-ad0b-a3eba03e2144\") " pod="calico-system/calico-kube-controllers-b8dbbdc94-vgwsd" Aug 13 07:07:31.677033 kubelet[2506]: I0813 07:07:31.674725 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22db522d-1126-4583-97ae-d9ff192443f7-config-volume\") pod \"coredns-674b8bbfcf-qxnq4\" (UID: \"22db522d-1126-4583-97ae-d9ff192443f7\") " pod="kube-system/coredns-674b8bbfcf-qxnq4" Aug 13 07:07:31.684272 systemd[1]: Created slice kubepods-besteffort-pod6e5ce7c1_4815_4ccb_bf3e_09ecfaf12929.slice - libcontainer container kubepods-besteffort-pod6e5ce7c1_4815_4ccb_bf3e_09ecfaf12929.slice. Aug 13 07:07:31.704601 systemd[1]: Created slice kubepods-besteffort-podbbc0eece_7b19_4b32_8aa8_4f52057a212b.slice - libcontainer container kubepods-besteffort-podbbc0eece_7b19_4b32_8aa8_4f52057a212b.slice. Aug 13 07:07:31.726724 systemd[1]: Created slice kubepods-burstable-pod94eead8f_716f_4f57_a31e_047b0ab9c02f.slice - libcontainer container kubepods-burstable-pod94eead8f_716f_4f57_a31e_047b0ab9c02f.slice. Aug 13 07:07:31.741539 systemd[1]: Created slice kubepods-burstable-pod22db522d_1126_4583_97ae_d9ff192443f7.slice - libcontainer container kubepods-burstable-pod22db522d_1126_4583_97ae_d9ff192443f7.slice. 
Aug 13 07:07:31.745313 systemd[1]: Created slice kubepods-besteffort-pod536d979e_0e84_4095_adcc_e89aae57b3e3.slice - libcontainer container kubepods-besteffort-pod536d979e_0e84_4095_adcc_e89aae57b3e3.slice. Aug 13 07:07:31.776790 kubelet[2506]: I0813 07:07:31.776168 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/536d979e-0e84-4095-adcc-e89aae57b3e3-goldmane-key-pair\") pod \"goldmane-768f4c5c69-67ggw\" (UID: \"536d979e-0e84-4095-adcc-e89aae57b3e3\") " pod="calico-system/goldmane-768f4c5c69-67ggw" Aug 13 07:07:31.776790 kubelet[2506]: I0813 07:07:31.776247 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nlr9\" (UniqueName: \"kubernetes.io/projected/536d979e-0e84-4095-adcc-e89aae57b3e3-kube-api-access-2nlr9\") pod \"goldmane-768f4c5c69-67ggw\" (UID: \"536d979e-0e84-4095-adcc-e89aae57b3e3\") " pod="calico-system/goldmane-768f4c5c69-67ggw" Aug 13 07:07:31.776790 kubelet[2506]: I0813 07:07:31.776409 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536d979e-0e84-4095-adcc-e89aae57b3e3-config\") pod \"goldmane-768f4c5c69-67ggw\" (UID: \"536d979e-0e84-4095-adcc-e89aae57b3e3\") " pod="calico-system/goldmane-768f4c5c69-67ggw" Aug 13 07:07:31.776790 kubelet[2506]: I0813 07:07:31.776443 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/536d979e-0e84-4095-adcc-e89aae57b3e3-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-67ggw\" (UID: \"536d979e-0e84-4095-adcc-e89aae57b3e3\") " pod="calico-system/goldmane-768f4c5c69-67ggw" Aug 13 07:07:31.856360 containerd[1456]: time="2025-08-13T07:07:31.856320782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:07:31.967612 containerd[1456]: time="2025-08-13T07:07:31.967546309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b8dbbdc94-vgwsd,Uid:2a94fe9f-df0a-43ab-ad0b-a3eba03e2144,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:31.992844 containerd[1456]: time="2025-08-13T07:07:31.992246370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df6784b98-zfzpv,Uid:7c4ccec3-f907-4938-9a80-cb54e4ef0fc4,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:07:32.023805 containerd[1456]: time="2025-08-13T07:07:32.023759253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df6784b98-8v5cf,Uid:bbc0eece-7b19-4b32-8aa8-4f52057a212b,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:07:32.024490 containerd[1456]: time="2025-08-13T07:07:32.024458785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ff464d979-6n8l7,Uid:6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:32.033177 kubelet[2506]: E0813 07:07:32.033052 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:32.039918 containerd[1456]: time="2025-08-13T07:07:32.039690617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-flgjr,Uid:94eead8f-716f-4f57-a31e-047b0ab9c02f,Namespace:kube-system,Attempt:0,}" Aug 13 07:07:32.059296 kubelet[2506]: E0813 07:07:32.058640 2506 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:32.068543 containerd[1456]: time="2025-08-13T07:07:32.068504299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-67ggw,Uid:536d979e-0e84-4095-adcc-e89aae57b3e3,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:32.070113 containerd[1456]: time="2025-08-13T07:07:32.068935986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qxnq4,Uid:22db522d-1126-4583-97ae-d9ff192443f7,Namespace:kube-system,Attempt:0,}" Aug 13 07:07:32.070950 containerd[1456]: time="2025-08-13T07:07:32.070736356Z" level=error msg="Failed to destroy network for sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.080784 containerd[1456]: time="2025-08-13T07:07:32.080628786Z" level=error msg="encountered an error cleaning up failed sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.081800 containerd[1456]: time="2025-08-13T07:07:32.081676563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2xth,Uid:94837213-7248-4886-ac34-73ab8173c672,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.092346 kubelet[2506]: E0813 07:07:32.092301 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.092636 kubelet[2506]: E0813 07:07:32.092616 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n2xth" Aug 13 07:07:32.093813 kubelet[2506]: E0813 07:07:32.093237 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n2xth" Aug 13 07:07:32.093964 kubelet[2506]: E0813 07:07:32.093908 2506 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n2xth_calico-system(94837213-7248-4886-ac34-73ab8173c672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n2xth_calico-system(94837213-7248-4886-ac34-73ab8173c672)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n2xth" podUID="94837213-7248-4886-ac34-73ab8173c672" Aug 13 07:07:32.342204 containerd[1456]: time="2025-08-13T07:07:32.342036871Z" level=error msg="Failed to destroy network for sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.346167 containerd[1456]: time="2025-08-13T07:07:32.346067206Z" level=error msg="encountered an error cleaning up failed sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.346413 containerd[1456]: time="2025-08-13T07:07:32.346222554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b8dbbdc94-vgwsd,Uid:2a94fe9f-df0a-43ab-ad0b-a3eba03e2144,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.348320 kubelet[2506]: E0813 07:07:32.347509 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.348320 kubelet[2506]: E0813 07:07:32.347571 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b8dbbdc94-vgwsd" Aug 13 07:07:32.348320 kubelet[2506]: E0813 07:07:32.347597 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b8dbbdc94-vgwsd" Aug 13 
07:07:32.348547 kubelet[2506]: E0813 07:07:32.348009 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b8dbbdc94-vgwsd_calico-system(2a94fe9f-df0a-43ab-ad0b-a3eba03e2144)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b8dbbdc94-vgwsd_calico-system(2a94fe9f-df0a-43ab-ad0b-a3eba03e2144)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b8dbbdc94-vgwsd" podUID="2a94fe9f-df0a-43ab-ad0b-a3eba03e2144" Aug 13 07:07:32.376572 containerd[1456]: time="2025-08-13T07:07:32.376485952Z" level=error msg="Failed to destroy network for sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.379497 containerd[1456]: time="2025-08-13T07:07:32.379420386Z" level=error msg="encountered an error cleaning up failed sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.379649 containerd[1456]: time="2025-08-13T07:07:32.379545127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df6784b98-zfzpv,Uid:7c4ccec3-f907-4938-9a80-cb54e4ef0fc4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.380158 kubelet[2506]: E0813 07:07:32.379973 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.380158 kubelet[2506]: E0813 07:07:32.380042 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df6784b98-zfzpv" Aug 13 07:07:32.380158 kubelet[2506]: E0813 07:07:32.380071 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df6784b98-zfzpv" Aug 13 07:07:32.380832 kubelet[2506]: E0813 07:07:32.380119 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6df6784b98-zfzpv_calico-apiserver(7c4ccec3-f907-4938-9a80-cb54e4ef0fc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6df6784b98-zfzpv_calico-apiserver(7c4ccec3-f907-4938-9a80-cb54e4ef0fc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df6784b98-zfzpv" podUID="7c4ccec3-f907-4938-9a80-cb54e4ef0fc4" Aug 13 07:07:32.405422 containerd[1456]: time="2025-08-13T07:07:32.405281823Z" level=error msg="Failed to destroy network for sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.405885 containerd[1456]: time="2025-08-13T07:07:32.405769836Z" level=error msg="encountered an error cleaning up failed sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.405885 containerd[1456]: time="2025-08-13T07:07:32.405852739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-flgjr,Uid:94eead8f-716f-4f57-a31e-047b0ab9c02f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.406484 kubelet[2506]: E0813 07:07:32.406284 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.406484 kubelet[2506]: E0813 07:07:32.406368 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-flgjr" Aug 13 07:07:32.406484 kubelet[2506]: E0813 07:07:32.406394 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-flgjr" Aug 13 07:07:32.407582 kubelet[2506]: E0813 07:07:32.407216 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-flgjr_kube-system(94eead8f-716f-4f57-a31e-047b0ab9c02f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-flgjr_kube-system(94eead8f-716f-4f57-a31e-047b0ab9c02f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-flgjr" podUID="94eead8f-716f-4f57-a31e-047b0ab9c02f" Aug 13 07:07:32.420462 containerd[1456]: time="2025-08-13T07:07:32.420395644Z" level=error msg="Failed to destroy network for sandbox \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.420989 containerd[1456]: time="2025-08-13T07:07:32.420834806Z" level=error msg="encountered an error cleaning up failed sandbox \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.420989 containerd[1456]: time="2025-08-13T07:07:32.420923923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-67ggw,Uid:536d979e-0e84-4095-adcc-e89aae57b3e3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.422096 kubelet[2506]: E0813 07:07:32.421249 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.422096 kubelet[2506]: E0813 07:07:32.421342 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-67ggw" Aug 13 07:07:32.422096 kubelet[2506]: E0813 07:07:32.421367 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-67ggw" Aug 13 07:07:32.422328 kubelet[2506]: E0813 07:07:32.421431 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-67ggw_calico-system(536d979e-0e84-4095-adcc-e89aae57b3e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-67ggw_calico-system(536d979e-0e84-4095-adcc-e89aae57b3e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-67ggw" podUID="536d979e-0e84-4095-adcc-e89aae57b3e3" Aug 13 07:07:32.446541 containerd[1456]: time="2025-08-13T07:07:32.446471248Z" level=error msg="Failed to destroy network for sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.446983 containerd[1456]: time="2025-08-13T07:07:32.446938103Z" level=error msg="encountered an error cleaning up failed sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.447069 containerd[1456]: time="2025-08-13T07:07:32.447038802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ff464d979-6n8l7,Uid:6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.448632 kubelet[2506]: E0813 07:07:32.447344 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.448632 kubelet[2506]: E0813 07:07:32.447504 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ff464d979-6n8l7" Aug 13 07:07:32.448632 kubelet[2506]: E0813 07:07:32.447535 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ff464d979-6n8l7" Aug 13 07:07:32.448816 kubelet[2506]: E0813 07:07:32.447588 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6ff464d979-6n8l7_calico-system(6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6ff464d979-6n8l7_calico-system(6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6ff464d979-6n8l7" podUID="6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929" Aug 13 07:07:32.467565 containerd[1456]: time="2025-08-13T07:07:32.467350440Z" level=error msg="Failed to destroy network for sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.472373 containerd[1456]: time="2025-08-13T07:07:32.472260070Z" level=error msg="encountered an error cleaning up failed sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.472517 containerd[1456]: time="2025-08-13T07:07:32.472421684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df6784b98-8v5cf,Uid:bbc0eece-7b19-4b32-8aa8-4f52057a212b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.474805 kubelet[2506]: E0813 07:07:32.473557 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.474805 kubelet[2506]: E0813 07:07:32.473628 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df6784b98-8v5cf" Aug 13 07:07:32.474805 kubelet[2506]: E0813 
07:07:32.473688 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df6784b98-8v5cf" Aug 13 07:07:32.475032 kubelet[2506]: E0813 07:07:32.473755 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6df6784b98-8v5cf_calico-apiserver(bbc0eece-7b19-4b32-8aa8-4f52057a212b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6df6784b98-8v5cf_calico-apiserver(bbc0eece-7b19-4b32-8aa8-4f52057a212b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df6784b98-8v5cf" podUID="bbc0eece-7b19-4b32-8aa8-4f52057a212b" Aug 13 07:07:32.484399 containerd[1456]: time="2025-08-13T07:07:32.484281156Z" level=error msg="Failed to destroy network for sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.485048 containerd[1456]: time="2025-08-13T07:07:32.485006384Z" level=error msg="encountered an error cleaning up failed sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.485167 containerd[1456]: time="2025-08-13T07:07:32.485114635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qxnq4,Uid:22db522d-1126-4583-97ae-d9ff192443f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.485871 kubelet[2506]: E0813 07:07:32.485467 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:32.485871 kubelet[2506]: E0813 07:07:32.485532 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qxnq4" Aug 13 07:07:32.485871 kubelet[2506]: E0813 07:07:32.485556 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qxnq4" Aug 13 07:07:32.486348 kubelet[2506]: E0813 07:07:32.485610 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qxnq4_kube-system(22db522d-1126-4583-97ae-d9ff192443f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qxnq4_kube-system(22db522d-1126-4583-97ae-d9ff192443f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qxnq4" podUID="22db522d-1126-4583-97ae-d9ff192443f7" Aug 13 07:07:32.779012 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e-shm.mount: Deactivated successfully. Aug 13 07:07:32.859182 kubelet[2506]: I0813 07:07:32.858371 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:07:32.861200 kubelet[2506]: I0813 07:07:32.861175 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:07:32.866028 containerd[1456]: time="2025-08-13T07:07:32.865907948Z" level=info msg="StopPodSandbox for \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\"" Aug 13 07:07:32.868797 containerd[1456]: time="2025-08-13T07:07:32.868222955Z" level=info msg="StopPodSandbox for \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\"" Aug 13 07:07:32.868797 containerd[1456]: time="2025-08-13T07:07:32.868460225Z" level=info msg="Ensure that sandbox 2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc in task-service has been cleanup successfully" Aug 13 07:07:32.878233 kubelet[2506]: I0813 07:07:32.878193 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:07:32.881512 containerd[1456]: time="2025-08-13T07:07:32.881455075Z" level=info msg="Ensure that sandbox f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c in task-service has been cleanup successfully" Aug 13 07:07:32.882477 containerd[1456]: time="2025-08-13T07:07:32.882280940Z" level=info msg="StopPodSandbox for \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\"" Aug 13 07:07:32.887181 kubelet[2506]: I0813 07:07:32.886542 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:07:32.888800 containerd[1456]: time="2025-08-13T07:07:32.887862635Z" level=info msg="StopPodSandbox for 
\"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\"" Aug 13 07:07:32.888800 containerd[1456]: time="2025-08-13T07:07:32.888096364Z" level=info msg="Ensure that sandbox 85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744 in task-service has been cleanup successfully" Aug 13 07:07:32.892913 containerd[1456]: time="2025-08-13T07:07:32.892829714Z" level=info msg="Ensure that sandbox 0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678 in task-service has been cleanup successfully" Aug 13 07:07:32.898057 kubelet[2506]: I0813 07:07:32.898022 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:07:32.900414 containerd[1456]: time="2025-08-13T07:07:32.900030657Z" level=info msg="StopPodSandbox for \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\"" Aug 13 07:07:32.902809 containerd[1456]: time="2025-08-13T07:07:32.902649868Z" level=info msg="Ensure that sandbox 7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e in task-service has been cleanup successfully" Aug 13 07:07:32.905851 kubelet[2506]: I0813 07:07:32.904749 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:07:32.909224 containerd[1456]: time="2025-08-13T07:07:32.909172645Z" level=info msg="StopPodSandbox for \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\"" Aug 13 07:07:32.914529 containerd[1456]: time="2025-08-13T07:07:32.914478462Z" level=info msg="Ensure that sandbox c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0 in task-service has been cleanup successfully" Aug 13 07:07:32.918373 kubelet[2506]: I0813 07:07:32.917091 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:07:32.926553 containerd[1456]: time="2025-08-13T07:07:32.926473013Z" level=info msg="StopPodSandbox for \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\"" Aug 13 07:07:32.929330 containerd[1456]: time="2025-08-13T07:07:32.929222419Z" level=info msg="Ensure that sandbox b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53 in task-service has been cleanup successfully" Aug 13 07:07:32.933922 kubelet[2506]: I0813 07:07:32.933866 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:07:32.937604 containerd[1456]: time="2025-08-13T07:07:32.937305444Z" level=info msg="StopPodSandbox for \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\"" Aug 13 07:07:32.940850 containerd[1456]: time="2025-08-13T07:07:32.940786675Z" level=info msg="Ensure that sandbox 44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a in task-service has been cleanup successfully" Aug 13 07:07:33.098359 containerd[1456]: time="2025-08-13T07:07:33.097539466Z" level=error msg="StopPodSandbox for \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\" failed" error="failed to destroy network for sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.098508 kubelet[2506]: 
E0813 07:07:33.097833 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:07:33.098508 kubelet[2506]: E0813 07:07:33.097915 2506 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744"} Aug 13 07:07:33.098508 kubelet[2506]: E0813 07:07:33.097999 2506 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c4ccec3-f907-4938-9a80-cb54e4ef0fc4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.098508 kubelet[2506]: E0813 07:07:33.098035 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c4ccec3-f907-4938-9a80-cb54e4ef0fc4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df6784b98-zfzpv" podUID="7c4ccec3-f907-4938-9a80-cb54e4ef0fc4" Aug 13 07:07:33.127541 containerd[1456]: time="2025-08-13T07:07:33.127462584Z" level=error msg="StopPodSandbox for \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\" failed" error="failed to destroy network for sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.128993 kubelet[2506]: E0813 07:07:33.128924 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:07:33.129254 kubelet[2506]: E0813 07:07:33.128992 2506 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53"} Aug 13 07:07:33.129254 kubelet[2506]: E0813 07:07:33.129040 2506 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2a94fe9f-df0a-43ab-ad0b-a3eba03e2144\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.129254 kubelet[2506]: E0813 07:07:33.129078 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2a94fe9f-df0a-43ab-ad0b-a3eba03e2144\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b8dbbdc94-vgwsd" podUID="2a94fe9f-df0a-43ab-ad0b-a3eba03e2144" Aug 13 07:07:33.130232 kubelet[2506]: E0813 07:07:33.129680 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:07:33.130232 kubelet[2506]: E0813 07:07:33.129733 2506 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a"} Aug 13 07:07:33.130232 kubelet[2506]: E0813 07:07:33.129789 2506 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bbc0eece-7b19-4b32-8aa8-4f52057a212b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.130232 kubelet[2506]: E0813 07:07:33.129820 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bbc0eece-7b19-4b32-8aa8-4f52057a212b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df6784b98-8v5cf" podUID="bbc0eece-7b19-4b32-8aa8-4f52057a212b" Aug 13 07:07:33.130637 containerd[1456]: time="2025-08-13T07:07:33.129264286Z" level=error msg="StopPodSandbox for \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\" failed" error="failed to destroy network for sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.130637 containerd[1456]: time="2025-08-13T07:07:33.129496876Z" level=error msg="StopPodSandbox for \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\" failed" error="failed to destroy network for sandbox 
\"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.130752 kubelet[2506]: E0813 07:07:33.129689 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:07:33.130752 kubelet[2506]: E0813 07:07:33.129906 2506 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc"} Aug 13 07:07:33.130752 kubelet[2506]: E0813 07:07:33.129941 2506 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"536d979e-0e84-4095-adcc-e89aae57b3e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.130752 kubelet[2506]: E0813 07:07:33.129968 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"536d979e-0e84-4095-adcc-e89aae57b3e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-67ggw" podUID="536d979e-0e84-4095-adcc-e89aae57b3e3" Aug 13 07:07:33.135969 containerd[1456]: time="2025-08-13T07:07:33.135861592Z" level=error msg="StopPodSandbox for \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\" failed" error="failed to destroy network for sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.136715 kubelet[2506]: E0813 07:07:33.136321 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:07:33.136715 kubelet[2506]: E0813 07:07:33.136572 2506 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0"} Aug 13 07:07:33.136715 kubelet[2506]: E0813 07:07:33.136627 2506 kuberuntime_manager.go:1161] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94eead8f-716f-4f57-a31e-047b0ab9c02f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.136715 kubelet[2506]: E0813 07:07:33.136666 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94eead8f-716f-4f57-a31e-047b0ab9c02f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-flgjr" podUID="94eead8f-716f-4f57-a31e-047b0ab9c02f" Aug 13 07:07:33.139405 containerd[1456]: time="2025-08-13T07:07:33.139210796Z" level=error msg="StopPodSandbox for \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\" failed" error="failed to destroy network for sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.139648 kubelet[2506]: E0813 07:07:33.139549 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:07:33.139648 kubelet[2506]: E0813 07:07:33.139614 2506 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c"} Aug 13 07:07:33.139803 kubelet[2506]: E0813 07:07:33.139660 2506 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22db522d-1126-4583-97ae-d9ff192443f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.139803 kubelet[2506]: E0813 07:07:33.139693 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22db522d-1126-4583-97ae-d9ff192443f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qxnq4" 
podUID="22db522d-1126-4583-97ae-d9ff192443f7" Aug 13 07:07:33.143206 containerd[1456]: time="2025-08-13T07:07:33.142451594Z" level=error msg="StopPodSandbox for \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\" failed" error="failed to destroy network for sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.143381 kubelet[2506]: E0813 07:07:33.142767 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:07:33.143381 kubelet[2506]: E0813 07:07:33.142824 2506 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678"} Aug 13 07:07:33.143381 kubelet[2506]: E0813 07:07:33.142857 2506 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.143381 kubelet[2506]: E0813 07:07:33.142880 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6ff464d979-6n8l7" podUID="6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929" Aug 13 07:07:33.146053 containerd[1456]: time="2025-08-13T07:07:33.145998674Z" level=error msg="StopPodSandbox for \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\" failed" error="failed to destroy network for sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:07:33.146581 kubelet[2506]: E0813 07:07:33.146334 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:07:33.146581 kubelet[2506]: E0813 
07:07:33.146390 2506 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e"} Aug 13 07:07:33.146581 kubelet[2506]: E0813 07:07:33.146435 2506 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94837213-7248-4886-ac34-73ab8173c672\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:07:33.146581 kubelet[2506]: E0813 07:07:33.146458 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94837213-7248-4886-ac34-73ab8173c672\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n2xth" podUID="94837213-7248-4886-ac34-73ab8173c672" Aug 13 07:07:39.378804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663688544.mount: Deactivated successfully. Aug 13 07:07:39.513498 containerd[1456]: time="2025-08-13T07:07:39.503563646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:07:39.514286 containerd[1456]: time="2025-08-13T07:07:39.514237042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:39.565244 containerd[1456]: time="2025-08-13T07:07:39.565192850Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:39.578824 containerd[1456]: time="2025-08-13T07:07:39.578555491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.708903129s" Aug 13 07:07:39.578824 containerd[1456]: time="2025-08-13T07:07:39.578633484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:07:39.589269 containerd[1456]: time="2025-08-13T07:07:39.589010127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:39.699487 containerd[1456]: time="2025-08-13T07:07:39.699345374Z" level=info msg="CreateContainer within sandbox \"56cde0ecfc0b7c8e0fe410d16e29dac59d8bd67c5a458acc0778da578d6680e1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:07:39.857747 containerd[1456]: time="2025-08-13T07:07:39.857680251Z" level=info msg="CreateContainer within sandbox 
\"56cde0ecfc0b7c8e0fe410d16e29dac59d8bd67c5a458acc0778da578d6680e1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d0d98e757dcc3b60d3ff494ccc49b72038054527af33588b2b220eb3c8bd5580\"" Aug 13 07:07:39.868901 containerd[1456]: time="2025-08-13T07:07:39.867540465Z" level=info msg="StartContainer for \"d0d98e757dcc3b60d3ff494ccc49b72038054527af33588b2b220eb3c8bd5580\"" Aug 13 07:07:40.005514 systemd[1]: Started cri-containerd-d0d98e757dcc3b60d3ff494ccc49b72038054527af33588b2b220eb3c8bd5580.scope - libcontainer container d0d98e757dcc3b60d3ff494ccc49b72038054527af33588b2b220eb3c8bd5580. Aug 13 07:07:40.070270 containerd[1456]: time="2025-08-13T07:07:40.070212179Z" level=info msg="StartContainer for \"d0d98e757dcc3b60d3ff494ccc49b72038054527af33588b2b220eb3c8bd5580\" returns successfully" Aug 13 07:07:40.223498 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:07:40.224767 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 07:07:40.474008 containerd[1456]: time="2025-08-13T07:07:40.473925531Z" level=info msg="StopPodSandbox for \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\"" Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.587 [INFO][3786] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.587 [INFO][3786] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" iface="eth0" netns="/var/run/netns/cni-72c476ae-7989-7d34-7ac1-bcff3c84a4e2" Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.588 [INFO][3786] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" iface="eth0" netns="/var/run/netns/cni-72c476ae-7989-7d34-7ac1-bcff3c84a4e2" Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.589 [INFO][3786] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" iface="eth0" netns="/var/run/netns/cni-72c476ae-7989-7d34-7ac1-bcff3c84a4e2" Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.589 [INFO][3786] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.589 [INFO][3786] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.779 [INFO][3795] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" HandleID="k8s-pod-network.0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--6ff464d979--6n8l7-eth0" Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.782 [INFO][3795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.783 [INFO][3795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.796 [WARNING][3795] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" HandleID="k8s-pod-network.0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--6ff464d979--6n8l7-eth0" Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.796 [INFO][3795] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" HandleID="k8s-pod-network.0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--6ff464d979--6n8l7-eth0" Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.800 [INFO][3795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:40.808892 containerd[1456]: 2025-08-13 07:07:40.803 [INFO][3786] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:07:40.808892 containerd[1456]: time="2025-08-13T07:07:40.808819242Z" level=info msg="TearDown network for sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\" successfully" Aug 13 07:07:40.811827 containerd[1456]: time="2025-08-13T07:07:40.809416600Z" level=info msg="StopPodSandbox for \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\" returns successfully" Aug 13 07:07:40.815455 systemd[1]: run-netns-cni\x2d72c476ae\x2d7989\x2d7d34\x2d7ac1\x2dbcff3c84a4e2.mount: Deactivated successfully. Aug 13 07:07:40.887779 kubelet[2506]: I0813 07:07:40.887274 2506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-whisker-ca-bundle\") pod \"6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929\" (UID: \"6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929\") " Aug 13 07:07:40.887779 kubelet[2506]: I0813 07:07:40.887399 2506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mss8\" (UniqueName: \"kubernetes.io/projected/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-kube-api-access-4mss8\") pod \"6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929\" (UID: \"6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929\") " Aug 13 07:07:40.887779 kubelet[2506]: I0813 07:07:40.887435 2506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-whisker-backend-key-pair\") pod \"6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929\" (UID: \"6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929\") " Aug 13 07:07:40.918263 systemd[1]: var-lib-kubelet-pods-6e5ce7c1\x2d4815\x2d4ccb\x2dbf3e\x2d09ecfaf12929-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4mss8.mount: Deactivated successfully. Aug 13 07:07:40.920257 kubelet[2506]: I0813 07:07:40.918497 2506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-kube-api-access-4mss8" (OuterVolumeSpecName: "kube-api-access-4mss8") pod "6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929" (UID: "6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929"). InnerVolumeSpecName "kube-api-access-4mss8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:07:40.921964 kubelet[2506]: I0813 07:07:40.918215 2506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929" (UID: "6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:07:40.926185 kubelet[2506]: I0813 07:07:40.925340 2506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929" (UID: "6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:07:40.926606 systemd[1]: var-lib-kubelet-pods-6e5ce7c1\x2d4815\x2d4ccb\x2dbf3e\x2d09ecfaf12929-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:07:40.992104 systemd[1]: Removed slice kubepods-besteffort-pod6e5ce7c1_4815_4ccb_bf3e_09ecfaf12929.slice - libcontainer container kubepods-besteffort-pod6e5ce7c1_4815_4ccb_bf3e_09ecfaf12929.slice. Aug 13 07:07:40.996964 kubelet[2506]: I0813 07:07:40.996846 2506 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-whisker-backend-key-pair\") on node \"ci-4081.3.5-5-1812e6c6f4\" DevicePath \"\"" Aug 13 07:07:40.996964 kubelet[2506]: I0813 07:07:40.996884 2506 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-whisker-ca-bundle\") on node \"ci-4081.3.5-5-1812e6c6f4\" DevicePath \"\"" Aug 13 07:07:40.996964 kubelet[2506]: I0813 07:07:40.996895 2506 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4mss8\" (UniqueName: \"kubernetes.io/projected/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929-kube-api-access-4mss8\") on node \"ci-4081.3.5-5-1812e6c6f4\" DevicePath \"\"" Aug 13 07:07:41.126159 kubelet[2506]: I0813 07:07:41.113991 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4d7bw" podStartSLOduration=2.772046812 podStartE2EDuration="20.103544733s" podCreationTimestamp="2025-08-13 07:07:21 +0000 UTC" firstStartedPulling="2025-08-13 07:07:22.261200214 +0000 UTC m=+22.885461902" lastFinishedPulling="2025-08-13 07:07:39.592698184 +0000 UTC m=+40.216959823" observedRunningTime="2025-08-13 07:07:41.055486224 +0000 UTC m=+41.679747873" watchObservedRunningTime="2025-08-13 07:07:41.103544733 +0000 UTC m=+41.727806381" Aug 13 07:07:41.232939 systemd[1]: Created slice kubepods-besteffort-pod06066f34_ec44_4702_bc2a_1e72c07b9b45.slice - libcontainer container kubepods-besteffort-pod06066f34_ec44_4702_bc2a_1e72c07b9b45.slice. 
Aug 13 07:07:41.303235 kubelet[2506]: I0813 07:07:41.303083 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06066f34-ec44-4702-bc2a-1e72c07b9b45-whisker-backend-key-pair\") pod \"whisker-54b9957c77-7bhd5\" (UID: \"06066f34-ec44-4702-bc2a-1e72c07b9b45\") " pod="calico-system/whisker-54b9957c77-7bhd5" Aug 13 07:07:41.303235 kubelet[2506]: I0813 07:07:41.303166 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06066f34-ec44-4702-bc2a-1e72c07b9b45-whisker-ca-bundle\") pod \"whisker-54b9957c77-7bhd5\" (UID: \"06066f34-ec44-4702-bc2a-1e72c07b9b45\") " pod="calico-system/whisker-54b9957c77-7bhd5" Aug 13 07:07:41.303235 kubelet[2506]: I0813 07:07:41.303199 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9btwc\" (UniqueName: \"kubernetes.io/projected/06066f34-ec44-4702-bc2a-1e72c07b9b45-kube-api-access-9btwc\") pod \"whisker-54b9957c77-7bhd5\" (UID: \"06066f34-ec44-4702-bc2a-1e72c07b9b45\") " pod="calico-system/whisker-54b9957c77-7bhd5" Aug 13 07:07:41.539832 containerd[1456]: time="2025-08-13T07:07:41.539622661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b9957c77-7bhd5,Uid:06066f34-ec44-4702-bc2a-1e72c07b9b45,Namespace:calico-system,Attempt:0,}" Aug 13 07:07:41.550011 kubelet[2506]: I0813 07:07:41.549953 2506 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929" path="/var/lib/kubelet/pods/6e5ce7c1-4815-4ccb-bf3e-09ecfaf12929/volumes" Aug 13 07:07:41.734248 systemd-networkd[1361]: califf119eb9ac8: Link UP Aug 13 07:07:41.734492 systemd-networkd[1361]: califf119eb9ac8: Gained carrier Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.604 [INFO][3839] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.620 [INFO][3839] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0 whisker-54b9957c77- calico-system 06066f34-ec44-4702-bc2a-1e72c07b9b45 974 0 2025-08-13 07:07:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54b9957c77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-5-1812e6c6f4 whisker-54b9957c77-7bhd5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califf119eb9ac8 [] [] }} ContainerID="aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" Namespace="calico-system" Pod="whisker-54b9957c77-7bhd5" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.620 [INFO][3839] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" Namespace="calico-system" Pod="whisker-54b9957c77-7bhd5" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.659 [INFO][3850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" 
HandleID="k8s-pod-network.aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.660 [INFO][3850] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" HandleID="k8s-pod-network.aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb250), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-5-1812e6c6f4", "pod":"whisker-54b9957c77-7bhd5", "timestamp":"2025-08-13 07:07:41.659912991 +0000 UTC"}, Hostname:"ci-4081.3.5-5-1812e6c6f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.660 [INFO][3850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.660 [INFO][3850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.660 [INFO][3850] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-5-1812e6c6f4' Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.670 [INFO][3850] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.680 [INFO][3850] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.686 [INFO][3850] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.690 [INFO][3850] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.695 [INFO][3850] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.695 [INFO][3850] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.698 [INFO][3850] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895 Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.704 [INFO][3850] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.716 [INFO][3850] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.193/26] block=192.168.16.192/26 handle="k8s-pod-network.aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.717 
[INFO][3850] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.193/26] handle="k8s-pod-network.aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.717 [INFO][3850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:41.761237 containerd[1456]: 2025-08-13 07:07:41.717 [INFO][3850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.193/26] IPv6=[] ContainerID="aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" HandleID="k8s-pod-network.aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0" Aug 13 07:07:41.763890 containerd[1456]: 2025-08-13 07:07:41.721 [INFO][3839] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" Namespace="calico-system" Pod="whisker-54b9957c77-7bhd5" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0", GenerateName:"whisker-54b9957c77-", Namespace:"calico-system", SelfLink:"", UID:"06066f34-ec44-4702-bc2a-1e72c07b9b45", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54b9957c77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"", Pod:"whisker-54b9957c77-7bhd5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califf119eb9ac8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:41.763890 containerd[1456]: 2025-08-13 07:07:41.721 [INFO][3839] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.193/32] ContainerID="aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" Namespace="calico-system" Pod="whisker-54b9957c77-7bhd5" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0" Aug 13 07:07:41.763890 containerd[1456]: 2025-08-13 07:07:41.721 [INFO][3839] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf119eb9ac8 ContainerID="aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" Namespace="calico-system" Pod="whisker-54b9957c77-7bhd5" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0" Aug 13 07:07:41.763890 containerd[1456]: 2025-08-13 07:07:41.735 [INFO][3839] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" Namespace="calico-system" Pod="whisker-54b9957c77-7bhd5" 
WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0" Aug 13 07:07:41.763890 containerd[1456]: 2025-08-13 07:07:41.735 [INFO][3839] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" Namespace="calico-system" Pod="whisker-54b9957c77-7bhd5" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0", GenerateName:"whisker-54b9957c77-", Namespace:"calico-system", SelfLink:"", UID:"06066f34-ec44-4702-bc2a-1e72c07b9b45", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54b9957c77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895", Pod:"whisker-54b9957c77-7bhd5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califf119eb9ac8", MAC:"9e:03:78:a8:ba:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:41.763890 containerd[1456]: 2025-08-13 07:07:41.754 [INFO][3839] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895" Namespace="calico-system" Pod="whisker-54b9957c77-7bhd5" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--54b9957c77--7bhd5-eth0" Aug 13 07:07:41.793714 containerd[1456]: time="2025-08-13T07:07:41.793485390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:41.793714 containerd[1456]: time="2025-08-13T07:07:41.793566076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:41.793714 containerd[1456]: time="2025-08-13T07:07:41.793580671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:41.795926 containerd[1456]: time="2025-08-13T07:07:41.795664461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:41.817461 systemd[1]: Started cri-containerd-aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895.scope - libcontainer container aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895. 
Aug 13 07:07:41.896516 containerd[1456]: time="2025-08-13T07:07:41.896399324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b9957c77-7bhd5,Uid:06066f34-ec44-4702-bc2a-1e72c07b9b45,Namespace:calico-system,Attempt:0,} returns sandbox id \"aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895\"" Aug 13 07:07:41.900435 containerd[1456]: time="2025-08-13T07:07:41.899923051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:07:43.490887 containerd[1456]: time="2025-08-13T07:07:43.490818453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:43.493166 containerd[1456]: time="2025-08-13T07:07:43.492682775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:07:43.494262 containerd[1456]: time="2025-08-13T07:07:43.494007458Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:43.500055 containerd[1456]: time="2025-08-13T07:07:43.499872354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:43.502838 containerd[1456]: time="2025-08-13T07:07:43.502071786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.602110046s" Aug 13 07:07:43.503364 containerd[1456]: time="2025-08-13T07:07:43.503197737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:07:43.511147 containerd[1456]: time="2025-08-13T07:07:43.510830470Z" level=info msg="CreateContainer within sandbox \"aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:07:43.529995 containerd[1456]: time="2025-08-13T07:07:43.529576267Z" level=info msg="CreateContainer within sandbox \"aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"50914cd3dfe19b1261512c3c2b999530faffd6f4256e5219ecdafea523c48b97\"" Aug 13 07:07:43.533038 containerd[1456]: time="2025-08-13T07:07:43.532881405Z" level=info msg="StartContainer for \"50914cd3dfe19b1261512c3c2b999530faffd6f4256e5219ecdafea523c48b97\"" Aug 13 07:07:43.602383 systemd[1]: run-containerd-runc-k8s.io-50914cd3dfe19b1261512c3c2b999530faffd6f4256e5219ecdafea523c48b97-runc.d1VRsK.mount: Deactivated successfully. Aug 13 07:07:43.611450 systemd[1]: Started cri-containerd-50914cd3dfe19b1261512c3c2b999530faffd6f4256e5219ecdafea523c48b97.scope - libcontainer container 50914cd3dfe19b1261512c3c2b999530faffd6f4256e5219ecdafea523c48b97. 
Aug 13 07:07:43.695181 containerd[1456]: time="2025-08-13T07:07:43.692211457Z" level=info msg="StartContainer for \"50914cd3dfe19b1261512c3c2b999530faffd6f4256e5219ecdafea523c48b97\" returns successfully" Aug 13 07:07:43.705216 containerd[1456]: time="2025-08-13T07:07:43.705124009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:07:43.732032 systemd-networkd[1361]: califf119eb9ac8: Gained IPv6LL Aug 13 07:07:44.548883 containerd[1456]: time="2025-08-13T07:07:44.548813758Z" level=info msg="StopPodSandbox for \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\"" Aug 13 07:07:44.557343 containerd[1456]: time="2025-08-13T07:07:44.557285261Z" level=info msg="StopPodSandbox for \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\"" Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.647 [INFO][4106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.648 [INFO][4106] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" iface="eth0" netns="/var/run/netns/cni-38508a9a-8b46-4842-4bc1-a408190a772a" Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.648 [INFO][4106] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" iface="eth0" netns="/var/run/netns/cni-38508a9a-8b46-4842-4bc1-a408190a772a" Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.649 [INFO][4106] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" iface="eth0" netns="/var/run/netns/cni-38508a9a-8b46-4842-4bc1-a408190a772a" Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.649 [INFO][4106] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.650 [INFO][4106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.714 [INFO][4122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" HandleID="k8s-pod-network.7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.714 [INFO][4122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.714 [INFO][4122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.728 [WARNING][4122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" HandleID="k8s-pod-network.7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.728 [INFO][4122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" HandleID="k8s-pod-network.7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.731 [INFO][4122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:44.742574 containerd[1456]: 2025-08-13 07:07:44.737 [INFO][4106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:07:44.743180 containerd[1456]: time="2025-08-13T07:07:44.743065357Z" level=info msg="TearDown network for sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\" successfully" Aug 13 07:07:44.743180 containerd[1456]: time="2025-08-13T07:07:44.743099322Z" level=info msg="StopPodSandbox for \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\" returns successfully" Aug 13 07:07:44.746788 containerd[1456]: time="2025-08-13T07:07:44.746367294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2xth,Uid:94837213-7248-4886-ac34-73ab8173c672,Namespace:calico-system,Attempt:1,}" Aug 13 07:07:44.752993 systemd[1]: run-netns-cni\x2d38508a9a\x2d8b46\x2d4842\x2d4bc1\x2da408190a772a.mount: Deactivated successfully. Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.672 [INFO][4105] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.672 [INFO][4105] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" iface="eth0" netns="/var/run/netns/cni-d50790e1-a2ea-fef9-6bf0-6552dcc36362" Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.673 [INFO][4105] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" iface="eth0" netns="/var/run/netns/cni-d50790e1-a2ea-fef9-6bf0-6552dcc36362" Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.674 [INFO][4105] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" iface="eth0" netns="/var/run/netns/cni-d50790e1-a2ea-fef9-6bf0-6552dcc36362" Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.674 [INFO][4105] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.674 [INFO][4105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.747 [INFO][4131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" HandleID="k8s-pod-network.85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.748 [INFO][4131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.748 [INFO][4131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.760 [WARNING][4131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" HandleID="k8s-pod-network.85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.760 [INFO][4131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" HandleID="k8s-pod-network.85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.765 [INFO][4131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:44.792831 containerd[1456]: 2025-08-13 07:07:44.776 [INFO][4105] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:07:44.794918 containerd[1456]: time="2025-08-13T07:07:44.793003890Z" level=info msg="TearDown network for sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\" successfully" Aug 13 07:07:44.794918 containerd[1456]: time="2025-08-13T07:07:44.793030947Z" level=info msg="StopPodSandbox for \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\" returns successfully" Aug 13 07:07:44.800711 systemd[1]: run-netns-cni\x2dd50790e1\x2da2ea\x2dfef9\x2d6bf0\x2d6552dcc36362.mount: Deactivated successfully. 
Aug 13 07:07:44.801374 containerd[1456]: time="2025-08-13T07:07:44.801104817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df6784b98-zfzpv,Uid:7c4ccec3-f907-4938-9a80-cb54e4ef0fc4,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:07:45.022384 systemd-networkd[1361]: cali43276127c1a: Link UP Aug 13 07:07:45.026468 systemd-networkd[1361]: cali43276127c1a: Gained carrier Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.820 [INFO][4144] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.844 [INFO][4144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0 csi-node-driver- calico-system 94837213-7248-4886-ac34-73ab8173c672 991 0 2025-08-13 07:07:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-5-1812e6c6f4 csi-node-driver-n2xth eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali43276127c1a [] [] }} ContainerID="7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" Namespace="calico-system" Pod="csi-node-driver-n2xth" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.844 [INFO][4144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" Namespace="calico-system" Pod="csi-node-driver-n2xth" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.928 [INFO][4167] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" HandleID="k8s-pod-network.7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.929 [INFO][4167] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" HandleID="k8s-pod-network.7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-5-1812e6c6f4", "pod":"csi-node-driver-n2xth", "timestamp":"2025-08-13 07:07:44.928924003 +0000 UTC"}, Hostname:"ci-4081.3.5-5-1812e6c6f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.929 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.930 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
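Every IPAM mutation in these entries is bracketed by "About to acquire host-wide IPAM lock.", "Acquired host-wide IPAM lock.", and "Released host-wide IPAM lock.", so concurrent CNI invocations on the node serialize their address bookkeeping. One way to get that effect is an exclusive flock on a well-known file; this sketch only illustrates the bracketing (the lock path is invented, and Calico's own mechanism may differ):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// withHostLock serializes a critical section across every process on
// the host by holding an exclusive flock on a shared file.
func withHostLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()

	fmt.Println("About to acquire host-wide IPAM lock.")
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	fmt.Println("Acquired host-wide IPAM lock.")
	defer func() {
		syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
		fmt.Println("Released host-wide IPAM lock.")
	}()
	return fn()
}

func main() {
	_ = withHostLock("/tmp/ipam.lock", func() error {
		// assign or release addresses here
		return nil
	})
}
```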
Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.930 [INFO][4167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-5-1812e6c6f4' Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.941 [INFO][4167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.951 [INFO][4167] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.962 [INFO][4167] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.967 [INFO][4167] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.980 [INFO][4167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.980 [INFO][4167] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.983 [INFO][4167] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:44.991 [INFO][4167] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:45.001 [INFO][4167] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.194/26] block=192.168.16.192/26 handle="k8s-pod-network.7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:45.001 [INFO][4167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.194/26] handle="k8s-pod-network.7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:45.001 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
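The HandleID threaded through this walk, k8s-pod-network.&lt;sandbox ID&gt;, pairs the CNI network name with the pod sandbox's container ID, which is what lets the teardown paths above release addresses "using handleID" without knowing which IPs a sandbox held. The naming, as inferred from these entries:

```go
package main

import "fmt"

// handleID reproduces the naming visible in these entries: the CNI
// network name joined to the pod sandbox ID. Releasing "using handleID"
// then frees whatever that sandbox was assigned.
func handleID(network, sandboxID string) string {
	return network + "." + sandboxID
}

func main() {
	fmt.Println(handleID("k8s-pod-network",
		"7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f"))
	// k8s-pod-network.7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f
}
```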
Aug 13 07:07:45.053169 containerd[1456]: 2025-08-13 07:07:45.001 [INFO][4167] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.194/26] IPv6=[] ContainerID="7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" HandleID="k8s-pod-network.7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:07:45.055753 containerd[1456]: 2025-08-13 07:07:45.010 [INFO][4144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" Namespace="calico-system" Pod="csi-node-driver-n2xth" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94837213-7248-4886-ac34-73ab8173c672", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"", Pod:"csi-node-driver-n2xth", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43276127c1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:45.055753 containerd[1456]: 2025-08-13 07:07:45.010 [INFO][4144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.194/32] ContainerID="7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" Namespace="calico-system" Pod="csi-node-driver-n2xth" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:07:45.055753 containerd[1456]: 2025-08-13 07:07:45.010 [INFO][4144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43276127c1a ContainerID="7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" Namespace="calico-system" Pod="csi-node-driver-n2xth" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:07:45.055753 containerd[1456]: 2025-08-13 07:07:45.023 [INFO][4144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" Namespace="calico-system" Pod="csi-node-driver-n2xth" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:07:45.055753 containerd[1456]: 2025-08-13 07:07:45.023 [INFO][4144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" Namespace="calico-system" Pod="csi-node-driver-n2xth" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94837213-7248-4886-ac34-73ab8173c672", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f", Pod:"csi-node-driver-n2xth", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43276127c1a", MAC:"72:5a:e5:e3:ac:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:45.055753 containerd[1456]: 2025-08-13 07:07:45.049 [INFO][4144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f" Namespace="calico-system" Pod="csi-node-driver-n2xth" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:07:45.118852 containerd[1456]: time="2025-08-13T07:07:45.117706222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:45.118852 containerd[1456]: time="2025-08-13T07:07:45.117811693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:45.118852 containerd[1456]: time="2025-08-13T07:07:45.117827777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:45.121190 containerd[1456]: time="2025-08-13T07:07:45.119609601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:45.133547 systemd-networkd[1361]: califb0be3de289: Link UP Aug 13 07:07:45.134657 systemd-networkd[1361]: califb0be3de289: Gained carrier Aug 13 07:07:45.164418 systemd[1]: Started cri-containerd-7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f.scope - libcontainer container 7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f. 
Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:44.897 [INFO][4156] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:44.921 [INFO][4156] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0 calico-apiserver-6df6784b98- calico-apiserver 7c4ccec3-f907-4938-9a80-cb54e4ef0fc4 992 0 2025-08-13 07:07:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6df6784b98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-5-1812e6c6f4 calico-apiserver-6df6784b98-zfzpv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb0be3de289 [] [] }} ContainerID="6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-zfzpv" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:44.921 [INFO][4156] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-zfzpv" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:44.992 [INFO][4178] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" HandleID="k8s-pod-network.6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:44.993 [INFO][4178] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" HandleID="k8s-pod-network.6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-5-1812e6c6f4", "pod":"calico-apiserver-6df6784b98-zfzpv", "timestamp":"2025-08-13 07:07:44.992873124 +0000 UTC"}, Hostname:"ci-4081.3.5-5-1812e6c6f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:44.993 [INFO][4178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.002 [INFO][4178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
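The assignArgs=ipam.AutoAssignArgs{...} lines are Go's %#v rendering of the request struct, which is why the handle prints as a bare pointer, (*string)(0xc0002d5080), and the attribute map carries its full type name. A trimmed reproduction of the shape, with field names copied from the dump and the rest of the real struct omitted:

```go
package main

import "fmt"

// AutoAssignArgs mirrors a few of the fields visible in the log's
// assignArgs dump; the real Calico struct has more.
type AutoAssignArgs struct {
	Num4     int
	Num6     int
	HandleID *string
	Attrs    map[string]string
}

func main() {
	h := "k8s-pod-network.6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921"
	args := AutoAssignArgs{
		Num4:     1,
		HandleID: &h,
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"pod":       "calico-apiserver-6df6784b98-zfzpv",
		},
	}
	// %#v produces the pointer-address and typed-map rendering seen in
	// the journal: HandleID:(*string)(0x...), Attrs:map[string]string{...}
	fmt.Printf("assignArgs=%#v\n", args)
}
```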
Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.003 [INFO][4178] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-5-1812e6c6f4' Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.060 [INFO][4178] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.076 [INFO][4178] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.087 [INFO][4178] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.092 [INFO][4178] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.096 [INFO][4178] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.097 [INFO][4178] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.099 [INFO][4178] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921 Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.108 [INFO][4178] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.120 [INFO][4178] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.195/26] block=192.168.16.192/26 handle="k8s-pod-network.6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.120 [INFO][4178] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.195/26] handle="k8s-pod-network.6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.120 [INFO][4178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
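With 192.168.16.195 claimed, three sandboxes on this node hold consecutive addresses from the same affinity block. A /26 spans 64 addresses, so the block has ample headroom before this host would need to claim a second one (MaxBlocksPerHost:0 in the dumps appears to mean no cap). Checking the arithmetic:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.16.192/26")
	total := 1 << (32 - p.Bits()) // 64 addresses per block

	// The three sandboxes above received consecutive addresses.
	used := []string{"192.168.16.193", "192.168.16.194", "192.168.16.195"}
	for _, s := range used {
		a := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", a, p, p.Contains(a))
	}
	fmt.Printf("block capacity %d, %d used so far\n", total, len(used))
}
```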
Aug 13 07:07:45.166450 containerd[1456]: 2025-08-13 07:07:45.120 [INFO][4178] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.195/26] IPv6=[] ContainerID="6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" HandleID="k8s-pod-network.6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:07:45.167920 containerd[1456]: 2025-08-13 07:07:45.127 [INFO][4156] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-zfzpv" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0", GenerateName:"calico-apiserver-6df6784b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c4ccec3-f907-4938-9a80-cb54e4ef0fc4", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df6784b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"", Pod:"calico-apiserver-6df6784b98-zfzpv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb0be3de289", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:45.167920 containerd[1456]: 2025-08-13 07:07:45.127 [INFO][4156] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.195/32] ContainerID="6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-zfzpv" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:07:45.167920 containerd[1456]: 2025-08-13 07:07:45.127 [INFO][4156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb0be3de289 ContainerID="6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-zfzpv" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:07:45.167920 containerd[1456]: 2025-08-13 07:07:45.142 [INFO][4156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-zfzpv" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:07:45.167920 containerd[1456]: 2025-08-13 07:07:45.143 
[INFO][4156] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-zfzpv" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0", GenerateName:"calico-apiserver-6df6784b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c4ccec3-f907-4938-9a80-cb54e4ef0fc4", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df6784b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921", Pod:"calico-apiserver-6df6784b98-zfzpv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb0be3de289", MAC:"0a:6e:a4:ae:1a:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:45.167920 containerd[1456]: 2025-08-13 07:07:45.159 [INFO][4156] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-zfzpv" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:07:45.221817 containerd[1456]: time="2025-08-13T07:07:45.220403214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2xth,Uid:94837213-7248-4886-ac34-73ab8173c672,Namespace:calico-system,Attempt:1,} returns sandbox id \"7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f\"" Aug 13 07:07:45.247282 containerd[1456]: time="2025-08-13T07:07:45.247119981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:45.247773 containerd[1456]: time="2025-08-13T07:07:45.247527462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:45.247773 containerd[1456]: time="2025-08-13T07:07:45.247547228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:45.247773 containerd[1456]: time="2025-08-13T07:07:45.247674349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:45.275571 systemd[1]: Started cri-containerd-6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921.scope - libcontainer container 6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921. Aug 13 07:07:45.338659 containerd[1456]: time="2025-08-13T07:07:45.338536661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df6784b98-zfzpv,Uid:7c4ccec3-f907-4938-9a80-cb54e4ef0fc4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921\"" Aug 13 07:07:45.548959 containerd[1456]: time="2025-08-13T07:07:45.548541959Z" level=info msg="StopPodSandbox for \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\"" Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.623 [INFO][4287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.623 [INFO][4287] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" iface="eth0" netns="/var/run/netns/cni-4b290d58-e2f1-4013-efb0-8711536704cd" Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.624 [INFO][4287] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" iface="eth0" netns="/var/run/netns/cni-4b290d58-e2f1-4013-efb0-8711536704cd" Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.624 [INFO][4287] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" iface="eth0" netns="/var/run/netns/cni-4b290d58-e2f1-4013-efb0-8711536704cd" Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.624 [INFO][4287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.624 [INFO][4287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.665 [INFO][4295] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" HandleID="k8s-pod-network.2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.666 [INFO][4295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.666 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.677 [WARNING][4295] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" HandleID="k8s-pod-network.2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.677 [INFO][4295] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" HandleID="k8s-pod-network.2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.680 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:45.687271 containerd[1456]: 2025-08-13 07:07:45.683 [INFO][4287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:07:45.688059 containerd[1456]: time="2025-08-13T07:07:45.687426179Z" level=info msg="TearDown network for sandbox \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\" successfully" Aug 13 07:07:45.688059 containerd[1456]: time="2025-08-13T07:07:45.687473471Z" level=info msg="StopPodSandbox for \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\" returns successfully" Aug 13 07:07:45.688908 containerd[1456]: time="2025-08-13T07:07:45.688731824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-67ggw,Uid:536d979e-0e84-4095-adcc-e89aae57b3e3,Namespace:calico-system,Attempt:1,}" Aug 13 07:07:45.756094 systemd[1]: run-netns-cni\x2d4b290d58\x2de2f1\x2d4013\x2defb0\x2d8711536704cd.mount: Deactivated successfully. Aug 13 07:07:45.956043 systemd-networkd[1361]: calie13d9b1bfb9: Link UP Aug 13 07:07:45.956345 systemd-networkd[1361]: calie13d9b1bfb9: Gained carrier Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.770 [INFO][4307] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.796 [INFO][4307] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0 goldmane-768f4c5c69- calico-system 536d979e-0e84-4095-adcc-e89aae57b3e3 1004 0 2025-08-13 07:07:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-5-1812e6c6f4 goldmane-768f4c5c69-67ggw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie13d9b1bfb9 [] [] }} ContainerID="fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" Namespace="calico-system" Pod="goldmane-768f4c5c69-67ggw" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.796 [INFO][4307] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" Namespace="calico-system" Pod="goldmane-768f4c5c69-67ggw" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.882 [INFO][4324] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" HandleID="k8s-pod-network.fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.883 [INFO][4324] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" HandleID="k8s-pod-network.fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-5-1812e6c6f4", "pod":"goldmane-768f4c5c69-67ggw", "timestamp":"2025-08-13 07:07:45.882721426 +0000 UTC"}, Hostname:"ci-4081.3.5-5-1812e6c6f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.883 [INFO][4324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.883 [INFO][4324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.883 [INFO][4324] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-5-1812e6c6f4' Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.894 [INFO][4324] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.901 [INFO][4324] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.911 [INFO][4324] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.917 [INFO][4324] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.922 [INFO][4324] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.922 [INFO][4324] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.927 [INFO][4324] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.935 [INFO][4324] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.943 [INFO][4324] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.196/26] block=192.168.16.192/26 handle="k8s-pod-network.fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" 
host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.943 [INFO][4324] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.196/26] handle="k8s-pod-network.fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.943 [INFO][4324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:45.984468 containerd[1456]: 2025-08-13 07:07:45.943 [INFO][4324] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.196/26] IPv6=[] ContainerID="fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" HandleID="k8s-pod-network.fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:07:45.985635 containerd[1456]: 2025-08-13 07:07:45.947 [INFO][4307] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" Namespace="calico-system" Pod="goldmane-768f4c5c69-67ggw" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"536d979e-0e84-4095-adcc-e89aae57b3e3", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"", Pod:"goldmane-768f4c5c69-67ggw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie13d9b1bfb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:45.985635 containerd[1456]: 2025-08-13 07:07:45.947 [INFO][4307] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.196/32] ContainerID="fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" Namespace="calico-system" Pod="goldmane-768f4c5c69-67ggw" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:07:45.985635 containerd[1456]: 2025-08-13 07:07:45.947 [INFO][4307] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie13d9b1bfb9 ContainerID="fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" Namespace="calico-system" Pod="goldmane-768f4c5c69-67ggw" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:07:45.985635 containerd[1456]: 2025-08-13 07:07:45.950 [INFO][4307] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" Namespace="calico-system" Pod="goldmane-768f4c5c69-67ggw" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:07:45.985635 containerd[1456]: 2025-08-13 07:07:45.951 [INFO][4307] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" Namespace="calico-system" Pod="goldmane-768f4c5c69-67ggw" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"536d979e-0e84-4095-adcc-e89aae57b3e3", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa", Pod:"goldmane-768f4c5c69-67ggw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie13d9b1bfb9", MAC:"92:2b:7c:5e:9f:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:45.985635 containerd[1456]: 2025-08-13 07:07:45.976 [INFO][4307] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa" Namespace="calico-system" Pod="goldmane-768f4c5c69-67ggw" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:07:46.077205 containerd[1456]: time="2025-08-13T07:07:46.076193262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:46.077205 containerd[1456]: time="2025-08-13T07:07:46.076284511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:46.077205 containerd[1456]: time="2025-08-13T07:07:46.076306297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:46.077205 containerd[1456]: time="2025-08-13T07:07:46.076460360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:46.134105 systemd[1]: run-containerd-runc-k8s.io-fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa-runc.fj75xp.mount: Deactivated successfully. 
Aug 13 07:07:46.143396 systemd[1]: Started cri-containerd-fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa.scope - libcontainer container fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa. Aug 13 07:07:46.254252 containerd[1456]: time="2025-08-13T07:07:46.252512768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-67ggw,Uid:536d979e-0e84-4095-adcc-e89aae57b3e3,Namespace:calico-system,Attempt:1,} returns sandbox id \"fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa\"" Aug 13 07:07:46.548621 containerd[1456]: time="2025-08-13T07:07:46.548264469Z" level=info msg="StopPodSandbox for \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\"" Aug 13 07:07:46.549763 containerd[1456]: time="2025-08-13T07:07:46.549367368Z" level=info msg="StopPodSandbox for \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\"" Aug 13 07:07:46.736326 systemd-networkd[1361]: califb0be3de289: Gained IPv6LL Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.664 [INFO][4411] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.664 [INFO][4411] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" iface="eth0" netns="/var/run/netns/cni-e615d712-ebee-bccd-8800-56206ce5761c" Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.664 [INFO][4411] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" iface="eth0" netns="/var/run/netns/cni-e615d712-ebee-bccd-8800-56206ce5761c" Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.668 [INFO][4411] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" iface="eth0" netns="/var/run/netns/cni-e615d712-ebee-bccd-8800-56206ce5761c" Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.668 [INFO][4411] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.668 [INFO][4411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.740 [INFO][4423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" HandleID="k8s-pod-network.f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.740 [INFO][4423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.740 [INFO][4423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.752 [WARNING][4423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" HandleID="k8s-pod-network.f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.752 [INFO][4423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" HandleID="k8s-pod-network.f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.755 [INFO][4423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:46.765005 containerd[1456]: 2025-08-13 07:07:46.760 [INFO][4411] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:07:46.767325 containerd[1456]: time="2025-08-13T07:07:46.767269337Z" level=info msg="TearDown network for sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\" successfully" Aug 13 07:07:46.767325 containerd[1456]: time="2025-08-13T07:07:46.767324650Z" level=info msg="StopPodSandbox for \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\" returns successfully" Aug 13 07:07:46.770183 systemd[1]: run-netns-cni\x2de615d712\x2debee\x2dbccd\x2d8800\x2d56206ce5761c.mount: Deactivated successfully. Aug 13 07:07:46.771465 kubelet[2506]: E0813 07:07:46.771066 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:46.775388 containerd[1456]: time="2025-08-13T07:07:46.775235006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qxnq4,Uid:22db522d-1126-4583-97ae-d9ff192443f7,Namespace:kube-system,Attempt:1,}" Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.675 [INFO][4410] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.677 [INFO][4410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" iface="eth0" netns="/var/run/netns/cni-b807209d-06ca-c7a6-4d91-ba0b74a3b1f6" Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.679 [INFO][4410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" iface="eth0" netns="/var/run/netns/cni-b807209d-06ca-c7a6-4d91-ba0b74a3b1f6" Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.679 [INFO][4410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" iface="eth0" netns="/var/run/netns/cni-b807209d-06ca-c7a6-4d91-ba0b74a3b1f6" Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.679 [INFO][4410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.679 [INFO][4410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.783 [INFO][4428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" HandleID="k8s-pod-network.44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.783 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.783 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.812 [WARNING][4428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" HandleID="k8s-pod-network.44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.812 [INFO][4428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" HandleID="k8s-pod-network.44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.815 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:46.831708 containerd[1456]: 2025-08-13 07:07:46.821 [INFO][4410] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:07:46.837190 systemd[1]: run-netns-cni\x2db807209d\x2d06ca\x2dc7a6\x2d4d91\x2dba0b74a3b1f6.mount: Deactivated successfully. 
Aug 13 07:07:46.839404 containerd[1456]: time="2025-08-13T07:07:46.839334941Z" level=info msg="TearDown network for sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\" successfully" Aug 13 07:07:46.839404 containerd[1456]: time="2025-08-13T07:07:46.839397161Z" level=info msg="StopPodSandbox for \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\" returns successfully" Aug 13 07:07:46.841825 containerd[1456]: time="2025-08-13T07:07:46.841772079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df6784b98-8v5cf,Uid:bbc0eece-7b19-4b32-8aa8-4f52057a212b,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:07:46.934164 containerd[1456]: time="2025-08-13T07:07:46.934051645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:46.937801 containerd[1456]: time="2025-08-13T07:07:46.937701089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:07:46.939964 containerd[1456]: time="2025-08-13T07:07:46.938641563Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:46.944922 containerd[1456]: time="2025-08-13T07:07:46.944882063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:46.947844 containerd[1456]: time="2025-08-13T07:07:46.947785478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.242567249s" Aug 13 07:07:46.948233 containerd[1456]: time="2025-08-13T07:07:46.948206884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:07:46.955228 containerd[1456]: time="2025-08-13T07:07:46.954298848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:07:46.958699 containerd[1456]: time="2025-08-13T07:07:46.958646140Z" level=info msg="CreateContainer within sandbox \"aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:07:46.986068 containerd[1456]: time="2025-08-13T07:07:46.985925401Z" level=info msg="CreateContainer within sandbox \"aea3d1b309b9297f924560ebd0d76e2e129fafb5f86c4c0595bd328736f7d895\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"35f364d3d57bf247173ab589c8191122d723dc1022771c64f7bfcd8815028620\"" Aug 13 07:07:46.990154 containerd[1456]: time="2025-08-13T07:07:46.989811476Z" level=info msg="StartContainer for \"35f364d3d57bf247173ab589c8191122d723dc1022771c64f7bfcd8815028620\"" Aug 13 07:07:47.057311 systemd-networkd[1361]: cali43276127c1a: Gained IPv6LL Aug 13 07:07:47.100372 systemd[1]: Started cri-containerd-35f364d3d57bf247173ab589c8191122d723dc1022771c64f7bfcd8815028620.scope - libcontainer container 
35f364d3d57bf247173ab589c8191122d723dc1022771c64f7bfcd8815028620. Aug 13 07:07:47.120861 systemd-networkd[1361]: calic72bcdefd72: Link UP Aug 13 07:07:47.122970 systemd-networkd[1361]: calic72bcdefd72: Gained carrier Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:46.903 [INFO][4436] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:46.936 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0 coredns-674b8bbfcf- kube-system 22db522d-1126-4583-97ae-d9ff192443f7 1012 0 2025-08-13 07:07:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-5-1812e6c6f4 coredns-674b8bbfcf-qxnq4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic72bcdefd72 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" Namespace="kube-system" Pod="coredns-674b8bbfcf-qxnq4" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:46.938 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" Namespace="kube-system" Pod="coredns-674b8bbfcf-qxnq4" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.024 [INFO][4472] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" HandleID="k8s-pod-network.6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.025 [INFO][4472] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" HandleID="k8s-pod-network.6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000302bc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-5-1812e6c6f4", "pod":"coredns-674b8bbfcf-qxnq4", "timestamp":"2025-08-13 07:07:47.024866924 +0000 UTC"}, Hostname:"ci-4081.3.5-5-1812e6c6f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.025 [INFO][4472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.025 [INFO][4472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
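Interleaved with the sandbox setup, the whisker-backend pull reported a little earlier completed: 33,083,307 bytes in 3.242567249s, roughly 10 MB/s from ghcr.io. The arithmetic:

package main

import "fmt"

func main() {
	const bytes = 33083307      // image size reported in the Pulled record above
	const seconds = 3.242567249 // pull duration reported in the same record
	fmt.Printf("%.1f MB/s\n", bytes/seconds/1e6) // ≈ 10.2 MB/s
}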
Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.025 [INFO][4472] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-5-1812e6c6f4' Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.039 [INFO][4472] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.048 [INFO][4472] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.063 [INFO][4472] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.071 [INFO][4472] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.077 [INFO][4472] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.078 [INFO][4472] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.082 [INFO][4472] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.092 [INFO][4472] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.108 [INFO][4472] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.197/26] block=192.168.16.192/26 handle="k8s-pod-network.6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.108 [INFO][4472] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.197/26] handle="k8s-pod-network.6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.108 [INFO][4472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
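One readability trap in the coredns endpoint records just below: the Go rendering prints WorkloadEndpointPort numbers in hex, so Port:0x35 is DNS on 53 (UDP and TCP) and Port:0x23c1 is the standard CoreDNS metrics port 9153:

package main

import "fmt"

func main() {
	// Ports exactly as they appear in the endpoint dump, in hex.
	ports := map[string]int{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Println(name, p) // dns 53, dns-tcp 53, metrics 9153
	}
}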
Aug 13 07:07:47.148017 containerd[1456]: 2025-08-13 07:07:47.108 [INFO][4472] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.197/26] IPv6=[] ContainerID="6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" HandleID="k8s-pod-network.6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:07:47.150099 containerd[1456]: 2025-08-13 07:07:47.113 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" Namespace="kube-system" Pod="coredns-674b8bbfcf-qxnq4" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22db522d-1126-4583-97ae-d9ff192443f7", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"", Pod:"coredns-674b8bbfcf-qxnq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic72bcdefd72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:47.150099 containerd[1456]: 2025-08-13 07:07:47.113 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.197/32] ContainerID="6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" Namespace="kube-system" Pod="coredns-674b8bbfcf-qxnq4" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:07:47.150099 containerd[1456]: 2025-08-13 07:07:47.113 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic72bcdefd72 ContainerID="6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" Namespace="kube-system" Pod="coredns-674b8bbfcf-qxnq4" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:07:47.150099 containerd[1456]: 2025-08-13 07:07:47.121 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-qxnq4" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:07:47.150099 containerd[1456]: 2025-08-13 07:07:47.122 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" Namespace="kube-system" Pod="coredns-674b8bbfcf-qxnq4" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22db522d-1126-4583-97ae-d9ff192443f7", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b", Pod:"coredns-674b8bbfcf-qxnq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic72bcdefd72", MAC:"76:ea:1c:bd:c6:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:47.150099 containerd[1456]: 2025-08-13 07:07:47.133 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b" Namespace="kube-system" Pod="coredns-674b8bbfcf-qxnq4" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:07:47.203522 systemd-networkd[1361]: cali94a741be5d5: Link UP Aug 13 07:07:47.203678 systemd-networkd[1361]: cali94a741be5d5: Gained carrier Aug 13 07:07:47.238072 containerd[1456]: time="2025-08-13T07:07:47.237640521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:47.238072 containerd[1456]: time="2025-08-13T07:07:47.237796091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:47.238072 containerd[1456]: time="2025-08-13T07:07:47.237807177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:47.241310 containerd[1456]: time="2025-08-13T07:07:47.239661396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:46.940 [INFO][4447] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:46.971 [INFO][4447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0 calico-apiserver-6df6784b98- calico-apiserver bbc0eece-7b19-4b32-8aa8-4f52057a212b 1013 0 2025-08-13 07:07:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6df6784b98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-5-1812e6c6f4 calico-apiserver-6df6784b98-8v5cf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali94a741be5d5 [] [] }} ContainerID="7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-8v5cf" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:46.971 [INFO][4447] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-8v5cf" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.052 [INFO][4479] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" HandleID="k8s-pod-network.7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.054 [INFO][4479] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" HandleID="k8s-pod-network.7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5b90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-5-1812e6c6f4", "pod":"calico-apiserver-6df6784b98-8v5cf", "timestamp":"2025-08-13 07:07:47.052015022 +0000 UTC"}, Hostname:"ci-4081.3.5-5-1812e6c6f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.054 [INFO][4479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.108 [INFO][4479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.108 [INFO][4479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-5-1812e6c6f4' Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.140 [INFO][4479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.157 [INFO][4479] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.165 [INFO][4479] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.168 [INFO][4479] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.173 [INFO][4479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.173 [INFO][4479] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.175 [INFO][4479] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.181 [INFO][4479] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.191 [INFO][4479] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.198/26] block=192.168.16.192/26 handle="k8s-pod-network.7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.191 [INFO][4479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.198/26] handle="k8s-pod-network.7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.191 [INFO][4479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
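Two ADDs ran concurrently here: [4472] wiring up coredns and [4479] the second apiserver replica. [4479] logged "About to acquire host-wide IPAM lock" at 07:07:47.054 and acquired it at 07:07:47.108, the same instant [4472] released it, so the lock serialized the two assignments for about 54 ms. Because each CNI invocation is a separate short-lived process, the lock has to work across processes; a file lock is the usual mechanism, sketched here with a made-up lock path:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/sys/unix"
)

// withHostWideLock sketches a cross-process exclusive lock around an IPAM
// operation. The lock path is made up for the sketch; the point is that
// flock() blocks the second CNI process until the first one releases,
// which is exactly the 54 ms wait visible in the records above.
func withHostWideLock(path string, fn func()) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		return err
	}
	defer unix.Flock(int(f.Fd()), unix.LOCK_UN)
	fn()
	return nil
}

func main() {
	_ = withHostWideLock("/tmp/ipam.lock", func() {
		fmt.Println("assigning from block 192.168.16.192/26")
		time.Sleep(10 * time.Millisecond)
	})
}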
Aug 13 07:07:47.246645 containerd[1456]: 2025-08-13 07:07:47.191 [INFO][4479] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.198/26] IPv6=[] ContainerID="7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" HandleID="k8s-pod-network.7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:07:47.250713 containerd[1456]: 2025-08-13 07:07:47.197 [INFO][4447] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-8v5cf" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0", GenerateName:"calico-apiserver-6df6784b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbc0eece-7b19-4b32-8aa8-4f52057a212b", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df6784b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"", Pod:"calico-apiserver-6df6784b98-8v5cf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94a741be5d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:47.250713 containerd[1456]: 2025-08-13 07:07:47.197 [INFO][4447] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.198/32] ContainerID="7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-8v5cf" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:07:47.250713 containerd[1456]: 2025-08-13 07:07:47.197 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94a741be5d5 ContainerID="7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-8v5cf" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:07:47.250713 containerd[1456]: 2025-08-13 07:07:47.203 [INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-8v5cf" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:07:47.250713 containerd[1456]: 2025-08-13 07:07:47.209 
[INFO][4447] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-8v5cf" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0", GenerateName:"calico-apiserver-6df6784b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbc0eece-7b19-4b32-8aa8-4f52057a212b", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df6784b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c", Pod:"calico-apiserver-6df6784b98-8v5cf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94a741be5d5", MAC:"2e:f4:79:2d:25:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:47.250713 containerd[1456]: 2025-08-13 07:07:47.229 [INFO][4447] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c" Namespace="calico-apiserver" Pod="calico-apiserver-6df6784b98-8v5cf" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:07:47.316147 containerd[1456]: time="2025-08-13T07:07:47.316094150Z" level=info msg="StartContainer for \"35f364d3d57bf247173ab589c8191122d723dc1022771c64f7bfcd8815028620\" returns successfully" Aug 13 07:07:47.329412 systemd[1]: Started cri-containerd-6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b.scope - libcontainer container 6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b. Aug 13 07:07:47.358948 containerd[1456]: time="2025-08-13T07:07:47.357710007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:47.359521 containerd[1456]: time="2025-08-13T07:07:47.357802526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:47.359521 containerd[1456]: time="2025-08-13T07:07:47.359487164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:47.361408 containerd[1456]: time="2025-08-13T07:07:47.361315839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:47.395568 systemd[1]: Started cri-containerd-7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c.scope - libcontainer container 7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c. Aug 13 07:07:47.419572 containerd[1456]: time="2025-08-13T07:07:47.418792451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qxnq4,Uid:22db522d-1126-4583-97ae-d9ff192443f7,Namespace:kube-system,Attempt:1,} returns sandbox id \"6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b\"" Aug 13 07:07:47.421608 kubelet[2506]: E0813 07:07:47.421027 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:47.431602 containerd[1456]: time="2025-08-13T07:07:47.430374956Z" level=info msg="CreateContainer within sandbox \"6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:07:47.440392 systemd-networkd[1361]: calie13d9b1bfb9: Gained IPv6LL Aug 13 07:07:47.459083 containerd[1456]: time="2025-08-13T07:07:47.459039391Z" level=info msg="CreateContainer within sandbox \"6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0935e3b0ca1039de4af1652a89176569512874bd63e1ba0c68946931789b15e5\"" Aug 13 07:07:47.462071 containerd[1456]: time="2025-08-13T07:07:47.462024982Z" level=info msg="StartContainer for \"0935e3b0ca1039de4af1652a89176569512874bd63e1ba0c68946931789b15e5\"" Aug 13 07:07:47.540354 systemd[1]: Started cri-containerd-0935e3b0ca1039de4af1652a89176569512874bd63e1ba0c68946931789b15e5.scope - libcontainer container 0935e3b0ca1039de4af1652a89176569512874bd63e1ba0c68946931789b15e5. Aug 13 07:07:47.551247 containerd[1456]: time="2025-08-13T07:07:47.549801983Z" level=info msg="StopPodSandbox for \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\"" Aug 13 07:07:47.558528 containerd[1456]: time="2025-08-13T07:07:47.557582507Z" level=info msg="StopPodSandbox for \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\"" Aug 13 07:07:47.610707 containerd[1456]: time="2025-08-13T07:07:47.610086472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df6784b98-8v5cf,Uid:bbc0eece-7b19-4b32-8aa8-4f52057a212b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c\"" Aug 13 07:07:47.674727 containerd[1456]: time="2025-08-13T07:07:47.674371663Z" level=info msg="StartContainer for \"0935e3b0ca1039de4af1652a89176569512874bd63e1ba0c68946931789b15e5\" returns successfully" Aug 13 07:07:47.794562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32921107.mount: Deactivated successfully. Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.741 [INFO][4679] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.744 [INFO][4679] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" iface="eth0" netns="/var/run/netns/cni-21f76587-85fa-deeb-55b2-16803bf4b00f" Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.745 [INFO][4679] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" iface="eth0" netns="/var/run/netns/cni-21f76587-85fa-deeb-55b2-16803bf4b00f" Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.746 [INFO][4679] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" iface="eth0" netns="/var/run/netns/cni-21f76587-85fa-deeb-55b2-16803bf4b00f" Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.746 [INFO][4679] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.746 [INFO][4679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.834 [INFO][4705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" HandleID="k8s-pod-network.c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.834 [INFO][4705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.834 [INFO][4705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.846 [WARNING][4705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" HandleID="k8s-pod-network.c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.846 [INFO][4705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" HandleID="k8s-pod-network.c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.848 [INFO][4705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:47.855721 containerd[1456]: 2025-08-13 07:07:47.852 [INFO][4679] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:07:47.857772 containerd[1456]: time="2025-08-13T07:07:47.857316118Z" level=info msg="TearDown network for sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\" successfully" Aug 13 07:07:47.857772 containerd[1456]: time="2025-08-13T07:07:47.857358179Z" level=info msg="StopPodSandbox for \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\" returns successfully" Aug 13 07:07:47.858475 kubelet[2506]: E0813 07:07:47.858285 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:47.861066 containerd[1456]: time="2025-08-13T07:07:47.860421238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-flgjr,Uid:94eead8f-716f-4f57-a31e-047b0ab9c02f,Namespace:kube-system,Attempt:1,}" Aug 13 07:07:47.867502 systemd[1]: run-netns-cni\x2d21f76587\x2d85fa\x2ddeeb\x2d55b2\x2d16803bf4b00f.mount: Deactivated successfully. Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.745 [INFO][4676] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.746 [INFO][4676] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" iface="eth0" netns="/var/run/netns/cni-ea5fc0b6-3b5d-0e50-3c2a-30cfd87bbecb" Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.747 [INFO][4676] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" iface="eth0" netns="/var/run/netns/cni-ea5fc0b6-3b5d-0e50-3c2a-30cfd87bbecb" Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.747 [INFO][4676] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" iface="eth0" netns="/var/run/netns/cni-ea5fc0b6-3b5d-0e50-3c2a-30cfd87bbecb" Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.747 [INFO][4676] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.747 [INFO][4676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.845 [INFO][4706] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" HandleID="k8s-pod-network.b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.845 [INFO][4706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.848 [INFO][4706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.857 [WARNING][4706] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" HandleID="k8s-pod-network.b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.857 [INFO][4706] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" HandleID="k8s-pod-network.b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.871 [INFO][4706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:47.888076 containerd[1456]: 2025-08-13 07:07:47.881 [INFO][4676] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:07:47.890418 containerd[1456]: time="2025-08-13T07:07:47.889564049Z" level=info msg="TearDown network for sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\" successfully" Aug 13 07:07:47.890418 containerd[1456]: time="2025-08-13T07:07:47.889610908Z" level=info msg="StopPodSandbox for \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\" returns successfully" Aug 13 07:07:47.892860 containerd[1456]: time="2025-08-13T07:07:47.892822457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b8dbbdc94-vgwsd,Uid:2a94fe9f-df0a-43ab-ad0b-a3eba03e2144,Namespace:calico-system,Attempt:1,}" Aug 13 07:07:47.897916 systemd[1]: run-netns-cni\x2dea5fc0b6\x2d3b5d\x2d0e50\x2d3c2a\x2d30cfd87bbecb.mount: Deactivated successfully. 
Aug 13 07:07:48.100782 kubelet[2506]: E0813 07:07:48.100518 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:48.143253 kubelet[2506]: I0813 07:07:48.141946 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qxnq4" podStartSLOduration=42.141920175 podStartE2EDuration="42.141920175s" podCreationTimestamp="2025-08-13 07:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:48.131964309 +0000 UTC m=+48.756225957" watchObservedRunningTime="2025-08-13 07:07:48.141920175 +0000 UTC m=+48.766181815" Aug 13 07:07:48.177797 systemd-networkd[1361]: calid08d6f0cd29: Link UP Aug 13 07:07:48.185209 systemd-networkd[1361]: calid08d6f0cd29: Gained carrier Aug 13 07:07:48.206686 kubelet[2506]: I0813 07:07:48.206626 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54b9957c77-7bhd5" podStartSLOduration=2.152837524 podStartE2EDuration="7.206605329s" podCreationTimestamp="2025-08-13 07:07:41 +0000 UTC" firstStartedPulling="2025-08-13 07:07:41.899607713 +0000 UTC m=+42.523869340" lastFinishedPulling="2025-08-13 07:07:46.953375518 +0000 UTC m=+47.577637145" observedRunningTime="2025-08-13 07:07:48.201284688 +0000 UTC m=+48.825546365" watchObservedRunningTime="2025-08-13 07:07:48.206605329 +0000 UTC m=+48.830866974" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:47.983 [INFO][4721] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:47.998 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0 calico-kube-controllers-b8dbbdc94- calico-system 2a94fe9f-df0a-43ab-ad0b-a3eba03e2144 1034 0 2025-08-13 07:07:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b8dbbdc94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-5-1812e6c6f4 calico-kube-controllers-b8dbbdc94-vgwsd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid08d6f0cd29 [] [] }} ContainerID="34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" Namespace="calico-system" Pod="calico-kube-controllers-b8dbbdc94-vgwsd" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:47.999 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" Namespace="calico-system" Pod="calico-kube-controllers-b8dbbdc94-vgwsd" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.062 [INFO][4745] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" HandleID="k8s-pod-network.34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" 
Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.062 [INFO][4745] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" HandleID="k8s-pod-network.34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-5-1812e6c6f4", "pod":"calico-kube-controllers-b8dbbdc94-vgwsd", "timestamp":"2025-08-13 07:07:48.062612095 +0000 UTC"}, Hostname:"ci-4081.3.5-5-1812e6c6f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.062 [INFO][4745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.062 [INFO][4745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.062 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-5-1812e6c6f4' Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.075 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.081 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.092 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.096 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.107 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.110 [INFO][4745] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.119 [INFO][4745] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.132 [INFO][4745] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.158 [INFO][4745] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.199/26] block=192.168.16.192/26 handle="k8s-pod-network.34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.158 [INFO][4745] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.16.199/26] handle="k8s-pod-network.34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.158 [INFO][4745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:07:48.223828 containerd[1456]: 2025-08-13 07:07:48.159 [INFO][4745] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.199/26] IPv6=[] ContainerID="34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" HandleID="k8s-pod-network.34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:07:48.225434 containerd[1456]: 2025-08-13 07:07:48.167 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" Namespace="calico-system" Pod="calico-kube-controllers-b8dbbdc94-vgwsd" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0", GenerateName:"calico-kube-controllers-b8dbbdc94-", Namespace:"calico-system", SelfLink:"", UID:"2a94fe9f-df0a-43ab-ad0b-a3eba03e2144", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b8dbbdc94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"", Pod:"calico-kube-controllers-b8dbbdc94-vgwsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid08d6f0cd29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:48.225434 containerd[1456]: 2025-08-13 07:07:48.167 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.199/32] ContainerID="34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" Namespace="calico-system" Pod="calico-kube-controllers-b8dbbdc94-vgwsd" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:07:48.225434 containerd[1456]: 2025-08-13 07:07:48.167 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid08d6f0cd29 ContainerID="34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" Namespace="calico-system" Pod="calico-kube-controllers-b8dbbdc94-vgwsd" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:07:48.225434 containerd[1456]: 2025-08-13 07:07:48.187 [INFO][4721] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" Namespace="calico-system" Pod="calico-kube-controllers-b8dbbdc94-vgwsd" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:07:48.225434 containerd[1456]: 2025-08-13 07:07:48.187 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" Namespace="calico-system" Pod="calico-kube-controllers-b8dbbdc94-vgwsd" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0", GenerateName:"calico-kube-controllers-b8dbbdc94-", Namespace:"calico-system", SelfLink:"", UID:"2a94fe9f-df0a-43ab-ad0b-a3eba03e2144", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b8dbbdc94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b", Pod:"calico-kube-controllers-b8dbbdc94-vgwsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid08d6f0cd29", MAC:"fe:bc:c4:ec:42:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:48.225434 containerd[1456]: 2025-08-13 07:07:48.215 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b" Namespace="calico-system" Pod="calico-kube-controllers-b8dbbdc94-vgwsd" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:07:48.281007 containerd[1456]: time="2025-08-13T07:07:48.279763001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:48.281007 containerd[1456]: time="2025-08-13T07:07:48.279878372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:48.281007 containerd[1456]: time="2025-08-13T07:07:48.279891942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:48.286378 containerd[1456]: time="2025-08-13T07:07:48.284298016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:48.329513 systemd[1]: Started cri-containerd-34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b.scope - libcontainer container 34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b. Aug 13 07:07:48.340093 systemd-networkd[1361]: calic72bcdefd72: Gained IPv6LL Aug 13 07:07:48.341387 systemd-networkd[1361]: calibbdc5ce19f0: Link UP Aug 13 07:07:48.346797 systemd-networkd[1361]: calibbdc5ce19f0: Gained carrier Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.023 [INFO][4720] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.048 [INFO][4720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0 coredns-674b8bbfcf- kube-system 94eead8f-716f-4f57-a31e-047b0ab9c02f 1033 0 2025-08-13 07:07:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-5-1812e6c6f4 coredns-674b8bbfcf-flgjr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibbdc5ce19f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" Namespace="kube-system" Pod="coredns-674b8bbfcf-flgjr" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.049 [INFO][4720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" Namespace="kube-system" Pod="coredns-674b8bbfcf-flgjr" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.135 [INFO][4753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" HandleID="k8s-pod-network.888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.136 [INFO][4753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" HandleID="k8s-pod-network.888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f110), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-5-1812e6c6f4", "pod":"coredns-674b8bbfcf-flgjr", "timestamp":"2025-08-13 07:07:48.134897201 +0000 UTC"}, Hostname:"ci-4081.3.5-5-1812e6c6f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.136 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.158 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.159 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-5-1812e6c6f4' Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.215 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.253 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.267 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.270 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.277 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.277 [INFO][4753] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.281 [INFO][4753] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733 Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.296 [INFO][4753] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.320 [INFO][4753] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.200/26] block=192.168.16.192/26 handle="k8s-pod-network.888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.320 [INFO][4753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.200/26] handle="k8s-pod-network.888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" host="ci-4081.3.5-5-1812e6c6f4" Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.322 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:07:48.398207 containerd[1456]: 2025-08-13 07:07:48.322 [INFO][4753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.200/26] IPv6=[] ContainerID="888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" HandleID="k8s-pod-network.888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:07:48.400734 containerd[1456]: 2025-08-13 07:07:48.331 [INFO][4720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" Namespace="kube-system" Pod="coredns-674b8bbfcf-flgjr" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"94eead8f-716f-4f57-a31e-047b0ab9c02f", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"", Pod:"coredns-674b8bbfcf-flgjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbdc5ce19f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:48.400734 containerd[1456]: 2025-08-13 07:07:48.331 [INFO][4720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.200/32] ContainerID="888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" Namespace="kube-system" Pod="coredns-674b8bbfcf-flgjr" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:07:48.400734 containerd[1456]: 2025-08-13 07:07:48.331 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbdc5ce19f0 ContainerID="888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" Namespace="kube-system" Pod="coredns-674b8bbfcf-flgjr" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:07:48.400734 containerd[1456]: 2025-08-13 07:07:48.357 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-flgjr" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:07:48.400734 containerd[1456]: 2025-08-13 07:07:48.358 [INFO][4720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" Namespace="kube-system" Pod="coredns-674b8bbfcf-flgjr" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"94eead8f-716f-4f57-a31e-047b0ab9c02f", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733", Pod:"coredns-674b8bbfcf-flgjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbdc5ce19f0", MAC:"66:d6:32:1c:94:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:07:48.400734 containerd[1456]: 2025-08-13 07:07:48.389 [INFO][4720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733" Namespace="kube-system" Pod="coredns-674b8bbfcf-flgjr" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:07:48.401828 systemd-networkd[1361]: cali94a741be5d5: Gained IPv6LL Aug 13 07:07:48.460668 containerd[1456]: time="2025-08-13T07:07:48.460554180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:07:48.460942 containerd[1456]: time="2025-08-13T07:07:48.460643940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:07:48.460942 containerd[1456]: time="2025-08-13T07:07:48.460684105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:48.460942 containerd[1456]: time="2025-08-13T07:07:48.460857238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:07:48.500880 systemd[1]: Started cri-containerd-888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733.scope - libcontainer container 888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733. Aug 13 07:07:48.540962 containerd[1456]: time="2025-08-13T07:07:48.540244166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b8dbbdc94-vgwsd,Uid:2a94fe9f-df0a-43ab-ad0b-a3eba03e2144,Namespace:calico-system,Attempt:1,} returns sandbox id \"34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b\"" Aug 13 07:07:48.646112 containerd[1456]: time="2025-08-13T07:07:48.645845525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-flgjr,Uid:94eead8f-716f-4f57-a31e-047b0ab9c02f,Namespace:kube-system,Attempt:1,} returns sandbox id \"888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733\"" Aug 13 07:07:48.649497 kubelet[2506]: E0813 07:07:48.648994 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:48.661958 containerd[1456]: time="2025-08-13T07:07:48.660633311Z" level=info msg="CreateContainer within sandbox \"888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:07:48.691622 containerd[1456]: time="2025-08-13T07:07:48.691568295Z" level=info msg="CreateContainer within sandbox \"888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5cac30e3c83d377d908dfa6e42667ab14ef505e0c2dc4f1d97f95bd2920334fd\"" Aug 13 07:07:48.693479 containerd[1456]: time="2025-08-13T07:07:48.693438322Z" level=info msg="StartContainer for \"5cac30e3c83d377d908dfa6e42667ab14ef505e0c2dc4f1d97f95bd2920334fd\"" Aug 13 07:07:48.775485 systemd[1]: Started cri-containerd-5cac30e3c83d377d908dfa6e42667ab14ef505e0c2dc4f1d97f95bd2920334fd.scope - libcontainer container 5cac30e3c83d377d908dfa6e42667ab14ef505e0c2dc4f1d97f95bd2920334fd. 
Aug 13 07:07:48.867753 containerd[1456]: time="2025-08-13T07:07:48.867650335Z" level=info msg="StartContainer for \"5cac30e3c83d377d908dfa6e42667ab14ef505e0c2dc4f1d97f95bd2920334fd\" returns successfully" Aug 13 07:07:48.959458 containerd[1456]: time="2025-08-13T07:07:48.959252678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:48.964459 containerd[1456]: time="2025-08-13T07:07:48.964376596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:07:48.965732 containerd[1456]: time="2025-08-13T07:07:48.965677536Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:48.971906 containerd[1456]: time="2025-08-13T07:07:48.971841784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:48.975298 containerd[1456]: time="2025-08-13T07:07:48.975217480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.01980223s" Aug 13 07:07:48.975298 containerd[1456]: time="2025-08-13T07:07:48.975282094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:07:48.978611 containerd[1456]: time="2025-08-13T07:07:48.977924168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:07:48.983060 containerd[1456]: time="2025-08-13T07:07:48.982873582Z" level=info msg="CreateContainer within sandbox \"7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:07:49.031298 containerd[1456]: time="2025-08-13T07:07:49.029622814Z" level=info msg="CreateContainer within sandbox \"7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"28e2d7246c39cde105e3ffd6e160b591de57218945483e1e44feaa779c34b483\"" Aug 13 07:07:49.032167 containerd[1456]: time="2025-08-13T07:07:49.031895621Z" level=info msg="StartContainer for \"28e2d7246c39cde105e3ffd6e160b591de57218945483e1e44feaa779c34b483\"" Aug 13 07:07:49.100494 systemd[1]: Started cri-containerd-28e2d7246c39cde105e3ffd6e160b591de57218945483e1e44feaa779c34b483.scope - libcontainer container 28e2d7246c39cde105e3ffd6e160b591de57218945483e1e44feaa779c34b483. 
Aug 13 07:07:49.147284 kubelet[2506]: E0813 07:07:49.146944 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:49.160380 kubelet[2506]: E0813 07:07:49.160082 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:49.211592 containerd[1456]: time="2025-08-13T07:07:49.211288224Z" level=info msg="StartContainer for \"28e2d7246c39cde105e3ffd6e160b591de57218945483e1e44feaa779c34b483\" returns successfully" Aug 13 07:07:49.424568 systemd-networkd[1361]: calid08d6f0cd29: Gained IPv6LL Aug 13 07:07:50.000505 systemd-networkd[1361]: calibbdc5ce19f0: Gained IPv6LL Aug 13 07:07:50.165228 kubelet[2506]: E0813 07:07:50.164724 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:50.167162 kubelet[2506]: E0813 07:07:50.166906 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:50.774877 kubelet[2506]: I0813 07:07:50.774068 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:07:50.775992 kubelet[2506]: E0813 07:07:50.775831 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:50.840740 kubelet[2506]: I0813 07:07:50.840610 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-flgjr" podStartSLOduration=44.840476935 podStartE2EDuration="44.840476935s" podCreationTimestamp="2025-08-13 07:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:07:49.193915012 +0000 UTC m=+49.818176661" watchObservedRunningTime="2025-08-13 07:07:50.840476935 +0000 UTC m=+51.464738586" Aug 13 07:07:51.167337 kubelet[2506]: E0813 07:07:51.167305 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:07:52.678338 kernel: bpftool[5045]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:07:53.388827 systemd-networkd[1361]: vxlan.calico: Link UP Aug 13 07:07:53.388838 systemd-networkd[1361]: vxlan.calico: Gained carrier Aug 13 07:07:53.489644 containerd[1456]: time="2025-08-13T07:07:53.488472908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:53.490997 containerd[1456]: time="2025-08-13T07:07:53.490907725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:07:53.493121 containerd[1456]: time="2025-08-13T07:07:53.493069389Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:53.500217 containerd[1456]: 
time="2025-08-13T07:07:53.500005709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:53.505323 containerd[1456]: time="2025-08-13T07:07:53.504411884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.526444385s" Aug 13 07:07:53.505323 containerd[1456]: time="2025-08-13T07:07:53.504463165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:07:53.508316 containerd[1456]: time="2025-08-13T07:07:53.507766739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:07:53.513897 containerd[1456]: time="2025-08-13T07:07:53.513861509Z" level=info msg="CreateContainer within sandbox \"6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:07:53.548730 containerd[1456]: time="2025-08-13T07:07:53.548218995Z" level=info msg="CreateContainer within sandbox \"6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a3aadf4657a21a088a6af0da79203fcbdb172378cd1b219404c5caa1f41e3d6e\"" Aug 13 07:07:53.550826 containerd[1456]: time="2025-08-13T07:07:53.549948627Z" level=info msg="StartContainer for \"a3aadf4657a21a088a6af0da79203fcbdb172378cd1b219404c5caa1f41e3d6e\"" Aug 13 07:07:53.666400 systemd[1]: Started cri-containerd-a3aadf4657a21a088a6af0da79203fcbdb172378cd1b219404c5caa1f41e3d6e.scope - libcontainer container a3aadf4657a21a088a6af0da79203fcbdb172378cd1b219404c5caa1f41e3d6e. Aug 13 07:07:53.815038 containerd[1456]: time="2025-08-13T07:07:53.814975387Z" level=info msg="StartContainer for \"a3aadf4657a21a088a6af0da79203fcbdb172378cd1b219404c5caa1f41e3d6e\" returns successfully" Aug 13 07:07:54.663705 systemd[1]: Started sshd@7-64.227.105.235:22-139.178.89.65:59944.service - OpenSSH per-connection server daemon (139.178.89.65:59944). Aug 13 07:07:54.817201 sshd[5173]: Accepted publickey for core from 139.178.89.65 port 59944 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:07:54.818992 sshd[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:54.826954 systemd-logind[1448]: New session 8 of user core. Aug 13 07:07:54.838451 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:07:55.188442 kubelet[2506]: I0813 07:07:55.187715 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:07:55.250570 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL Aug 13 07:07:55.702281 sshd[5173]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:55.712532 systemd[1]: sshd@7-64.227.105.235:22-139.178.89.65:59944.service: Deactivated successfully. Aug 13 07:07:55.716899 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:07:55.719057 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. 
Aug 13 07:07:55.720806 systemd-logind[1448]: Removed session 8. Aug 13 07:07:57.733566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2442613719.mount: Deactivated successfully. Aug 13 07:07:58.447847 containerd[1456]: time="2025-08-13T07:07:58.447786610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:58.450670 containerd[1456]: time="2025-08-13T07:07:58.449928787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:07:58.455440 containerd[1456]: time="2025-08-13T07:07:58.455367874Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:58.493626 containerd[1456]: time="2025-08-13T07:07:58.493510034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:58.496529 containerd[1456]: time="2025-08-13T07:07:58.496442867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.986428407s" Aug 13 07:07:58.496529 containerd[1456]: time="2025-08-13T07:07:58.496517705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:07:58.500102 containerd[1456]: time="2025-08-13T07:07:58.498386471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:07:58.514810 containerd[1456]: time="2025-08-13T07:07:58.514660933Z" level=info msg="CreateContainer within sandbox \"fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:07:58.732600 containerd[1456]: time="2025-08-13T07:07:58.731493143Z" level=info msg="CreateContainer within sandbox \"fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8c1b3f85309d4b36b773b2fe69988b4c1bc841134a97af4186707687e7d5116b\"" Aug 13 07:07:58.734232 containerd[1456]: time="2025-08-13T07:07:58.733647827Z" level=info msg="StartContainer for \"8c1b3f85309d4b36b773b2fe69988b4c1bc841134a97af4186707687e7d5116b\"" Aug 13 07:07:58.963415 systemd[1]: Started cri-containerd-8c1b3f85309d4b36b773b2fe69988b4c1bc841134a97af4186707687e7d5116b.scope - libcontainer container 8c1b3f85309d4b36b773b2fe69988b4c1bc841134a97af4186707687e7d5116b. 
Aug 13 07:07:58.977752 containerd[1456]: time="2025-08-13T07:07:58.977692069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:07:58.979692 containerd[1456]: time="2025-08-13T07:07:58.979595137Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:07:58.981339 containerd[1456]: time="2025-08-13T07:07:58.980432682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 481.992946ms" Aug 13 07:07:58.981339 containerd[1456]: time="2025-08-13T07:07:58.980481249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:07:58.987633 containerd[1456]: time="2025-08-13T07:07:58.986455345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:07:58.990045 containerd[1456]: time="2025-08-13T07:07:58.990004771Z" level=info msg="CreateContainer within sandbox \"7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:07:59.008178 containerd[1456]: time="2025-08-13T07:07:59.008101604Z" level=info msg="CreateContainer within sandbox \"7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"163c1b721ca42ae799d4916edc7db731586f900fd98b3050dc2bb927fd873c66\"" Aug 13 07:07:59.009336 containerd[1456]: time="2025-08-13T07:07:59.009297947Z" level=info msg="StartContainer for \"163c1b721ca42ae799d4916edc7db731586f900fd98b3050dc2bb927fd873c66\"" Aug 13 07:07:59.080700 containerd[1456]: time="2025-08-13T07:07:59.080660372Z" level=info msg="StartContainer for \"8c1b3f85309d4b36b773b2fe69988b4c1bc841134a97af4186707687e7d5116b\" returns successfully" Aug 13 07:07:59.092303 systemd[1]: Started cri-containerd-163c1b721ca42ae799d4916edc7db731586f900fd98b3050dc2bb927fd873c66.scope - libcontainer container 163c1b721ca42ae799d4916edc7db731586f900fd98b3050dc2bb927fd873c66. 
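The pull entries above show both paths of the image service: cold pulls emit ImageCreate and take seconds (48,810,696 bytes in 4.53s for apiserver, 66,352,154 bytes in 4.99s for goldmane), while the repeat apiserver pull resolves in 481.99ms with an ImageUpdate and only 77 bytes read, consistent with re-resolving the ref rather than transferring layers already in the content store. A minimal check-then-pull sketch against the containerd 1.x Go client (assuming the default socket path and the kubelet's k8s.io namespace):

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Assumed defaults: standard containerd socket, kubelet's image namespace.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        ref := "ghcr.io/flatcar/calico/apiserver:v3.30.2"
        img, err := client.GetImage(ctx, ref)
        if errdefs.IsNotFound(err) {
            // Cold path: full transfer, like the multi-second ImageCreate pulls.
            img, err = client.Pull(ctx, ref, containerd.WithPullUnpack)
        }
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("image %s @ %s", img.Name(), img.Target().Digest)
    }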
Aug 13 07:07:59.258619 containerd[1456]: time="2025-08-13T07:07:59.258472260Z" level=info msg="StartContainer for \"163c1b721ca42ae799d4916edc7db731586f900fd98b3050dc2bb927fd873c66\" returns successfully" Aug 13 07:07:59.287160 kubelet[2506]: I0813 07:07:59.276874 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6df6784b98-zfzpv" podStartSLOduration=33.104334982 podStartE2EDuration="41.271719792s" podCreationTimestamp="2025-08-13 07:07:18 +0000 UTC" firstStartedPulling="2025-08-13 07:07:45.340044436 +0000 UTC m=+45.964306063" lastFinishedPulling="2025-08-13 07:07:53.507429195 +0000 UTC m=+54.131690873" observedRunningTime="2025-08-13 07:07:54.203629961 +0000 UTC m=+54.827891609" watchObservedRunningTime="2025-08-13 07:07:59.271719792 +0000 UTC m=+59.895981440" Aug 13 07:07:59.975862 containerd[1456]: time="2025-08-13T07:07:59.974962318Z" level=info msg="StopPodSandbox for \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\"" Aug 13 07:08:00.500733 systemd[1]: run-containerd-runc-k8s.io-8c1b3f85309d4b36b773b2fe69988b4c1bc841134a97af4186707687e7d5116b-runc.mtibg2.mount: Deactivated successfully. Aug 13 07:08:00.551946 kubelet[2506]: I0813 07:08:00.551871 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6df6784b98-8v5cf" podStartSLOduration=31.187482476 podStartE2EDuration="42.551830518s" podCreationTimestamp="2025-08-13 07:07:18 +0000 UTC" firstStartedPulling="2025-08-13 07:07:47.618103347 +0000 UTC m=+48.242364988" lastFinishedPulling="2025-08-13 07:07:58.9824514 +0000 UTC m=+59.606713030" observedRunningTime="2025-08-13 07:08:00.528291069 +0000 UTC m=+61.152552721" watchObservedRunningTime="2025-08-13 07:08:00.551830518 +0000 UTC m=+61.176092179" Aug 13 07:08:00.554936 kubelet[2506]: I0813 07:08:00.553718 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-67ggw" podStartSLOduration=27.316731422 podStartE2EDuration="39.553691567s" podCreationTimestamp="2025-08-13 07:07:21 +0000 UTC" firstStartedPulling="2025-08-13 07:07:46.260736916 +0000 UTC m=+46.884998556" lastFinishedPulling="2025-08-13 07:07:58.497697061 +0000 UTC m=+59.121958701" observedRunningTime="2025-08-13 07:07:59.286622227 +0000 UTC m=+59.910883875" watchObservedRunningTime="2025-08-13 07:08:00.553691567 +0000 UTC m=+61.177953222" Aug 13 07:08:00.807582 systemd[1]: Started sshd@8-64.227.105.235:22-139.178.89.65:51424.service - OpenSSH per-connection server daemon (139.178.89.65:51424). Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:00.545 [WARNING][5322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0", GenerateName:"calico-apiserver-6df6784b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c4ccec3-f907-4938-9a80-cb54e4ef0fc4", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df6784b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921", Pod:"calico-apiserver-6df6784b98-zfzpv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb0be3de289", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:00.551 [INFO][5322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:00.551 [INFO][5322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" iface="eth0" netns="" Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:00.551 [INFO][5322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:00.551 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:01.012 [INFO][5345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" HandleID="k8s-pod-network.85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:01.017 [INFO][5345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:01.017 [INFO][5345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:01.049 [WARNING][5345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" HandleID="k8s-pod-network.85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:01.049 [INFO][5345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" HandleID="k8s-pod-network.85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:01.060 [INFO][5345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:01.075255 containerd[1456]: 2025-08-13 07:08:01.071 [INFO][5322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:08:01.077310 containerd[1456]: time="2025-08-13T07:08:01.075107009Z" level=info msg="TearDown network for sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\" successfully" Aug 13 07:08:01.077310 containerd[1456]: time="2025-08-13T07:08:01.075880814Z" level=info msg="StopPodSandbox for \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\" returns successfully" Aug 13 07:08:01.107830 sshd[5356]: Accepted publickey for core from 139.178.89.65 port 51424 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:01.118734 sshd[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:01.140515 systemd-logind[1448]: New session 9 of user core. Aug 13 07:08:01.148103 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:08:01.420283 containerd[1456]: time="2025-08-13T07:08:01.420211264Z" level=info msg="RemovePodSandbox for \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\"" Aug 13 07:08:01.429426 containerd[1456]: time="2025-08-13T07:08:01.429102416Z" level=info msg="Forcibly stopping sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\"" Aug 13 07:08:01.469028 kubelet[2506]: I0813 07:08:01.456843 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.734 [WARNING][5378] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0", GenerateName:"calico-apiserver-6df6784b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c4ccec3-f907-4938-9a80-cb54e4ef0fc4", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df6784b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"6eec21c468500d9163e69eb1a29b716a1fdf4ba2fc873baad444b4a635827921", Pod:"calico-apiserver-6df6784b98-zfzpv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb0be3de289", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.735 [INFO][5378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.735 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" iface="eth0" netns="" Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.735 [INFO][5378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.735 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.859 [INFO][5386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" HandleID="k8s-pod-network.85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.861 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.861 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.872 [WARNING][5386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" HandleID="k8s-pod-network.85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.872 [INFO][5386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" HandleID="k8s-pod-network.85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--zfzpv-eth0" Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.877 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:01.898924 containerd[1456]: 2025-08-13 07:08:01.893 [INFO][5378] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744" Aug 13 07:08:01.902124 containerd[1456]: time="2025-08-13T07:08:01.899934304Z" level=info msg="TearDown network for sandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\" successfully" Aug 13 07:08:01.930853 containerd[1456]: time="2025-08-13T07:08:01.928147220Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:01.963942 containerd[1456]: time="2025-08-13T07:08:01.963088500Z" level=info msg="RemovePodSandbox \"85fbfa1080056efbf8b31b01e3bf31efa119acfaae2526717bf81561f051c744\" returns successfully" Aug 13 07:08:01.989913 containerd[1456]: time="2025-08-13T07:08:01.989859626Z" level=info msg="StopPodSandbox for \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\"" Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.154 [WARNING][5401] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"94eead8f-716f-4f57-a31e-047b0ab9c02f", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733", Pod:"coredns-674b8bbfcf-flgjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbdc5ce19f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.156 [INFO][5401] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.157 [INFO][5401] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" iface="eth0" netns="" Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.157 [INFO][5401] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.157 [INFO][5401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.288 [INFO][5408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" HandleID="k8s-pod-network.c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.289 [INFO][5408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.289 [INFO][5408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.324 [WARNING][5408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" HandleID="k8s-pod-network.c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.324 [INFO][5408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" HandleID="k8s-pod-network.c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.329 [INFO][5408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:02.358977 containerd[1456]: 2025-08-13 07:08:02.342 [INFO][5401] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:08:02.358977 containerd[1456]: time="2025-08-13T07:08:02.358323119Z" level=info msg="TearDown network for sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\" successfully" Aug 13 07:08:02.358977 containerd[1456]: time="2025-08-13T07:08:02.358362070Z" level=info msg="StopPodSandbox for \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\" returns successfully" Aug 13 07:08:02.373500 containerd[1456]: time="2025-08-13T07:08:02.360820783Z" level=info msg="RemovePodSandbox for \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\"" Aug 13 07:08:02.373500 containerd[1456]: time="2025-08-13T07:08:02.360876368Z" level=info msg="Forcibly stopping sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\"" Aug 13 07:08:02.646072 sshd[5356]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:02.654392 systemd[1]: sshd@8-64.227.105.235:22-139.178.89.65:51424.service: Deactivated successfully. Aug 13 07:08:02.661708 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:08:02.671626 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:08:02.674747 systemd-logind[1448]: Removed session 9. Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.549 [WARNING][5423] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"94eead8f-716f-4f57-a31e-047b0ab9c02f", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"888e423bda0889015ae6cc96b03c80d581253de38ca9d5ccd3694be0b050a733", Pod:"coredns-674b8bbfcf-flgjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbdc5ce19f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.562 [INFO][5423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.563 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" iface="eth0" netns="" Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.563 [INFO][5423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.563 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.687 [INFO][5431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" HandleID="k8s-pod-network.c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.687 [INFO][5431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.688 [INFO][5431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.698 [WARNING][5431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" HandleID="k8s-pod-network.c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.698 [INFO][5431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" HandleID="k8s-pod-network.c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--flgjr-eth0" Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.704 [INFO][5431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:02.723169 containerd[1456]: 2025-08-13 07:08:02.715 [INFO][5423] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0" Aug 13 07:08:02.723169 containerd[1456]: time="2025-08-13T07:08:02.718980311Z" level=info msg="TearDown network for sandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\" successfully" Aug 13 07:08:02.769329 containerd[1456]: time="2025-08-13T07:08:02.769179817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:02.769897 containerd[1456]: time="2025-08-13T07:08:02.769632214Z" level=info msg="RemovePodSandbox \"c4f02629939bacf1e1eae22e92b1582bbf812a628095967ceaa859db375b29e0\" returns successfully" Aug 13 07:08:02.781159 containerd[1456]: time="2025-08-13T07:08:02.781015175Z" level=info msg="StopPodSandbox for \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\"" Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.885 [WARNING][5447] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0", GenerateName:"calico-kube-controllers-b8dbbdc94-", Namespace:"calico-system", SelfLink:"", UID:"2a94fe9f-df0a-43ab-ad0b-a3eba03e2144", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b8dbbdc94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b", Pod:"calico-kube-controllers-b8dbbdc94-vgwsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid08d6f0cd29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.887 [INFO][5447] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.887 [INFO][5447] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" iface="eth0" netns="" Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.887 [INFO][5447] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.887 [INFO][5447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.938 [INFO][5454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" HandleID="k8s-pod-network.b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.939 [INFO][5454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.939 [INFO][5454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.953 [WARNING][5454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" HandleID="k8s-pod-network.b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.953 [INFO][5454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" HandleID="k8s-pod-network.b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.956 [INFO][5454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:02.963028 containerd[1456]: 2025-08-13 07:08:02.959 [INFO][5447] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:08:02.963028 containerd[1456]: time="2025-08-13T07:08:02.962661921Z" level=info msg="TearDown network for sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\" successfully" Aug 13 07:08:02.963028 containerd[1456]: time="2025-08-13T07:08:02.962699829Z" level=info msg="StopPodSandbox for \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\" returns successfully" Aug 13 07:08:03.008675 containerd[1456]: time="2025-08-13T07:08:03.008223241Z" level=info msg="RemovePodSandbox for \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\"" Aug 13 07:08:03.008675 containerd[1456]: time="2025-08-13T07:08:03.008280994Z" level=info msg="Forcibly stopping sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\"" Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.067 [WARNING][5468] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0", GenerateName:"calico-kube-controllers-b8dbbdc94-", Namespace:"calico-system", SelfLink:"", UID:"2a94fe9f-df0a-43ab-ad0b-a3eba03e2144", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b8dbbdc94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b", Pod:"calico-kube-controllers-b8dbbdc94-vgwsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid08d6f0cd29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.067 [INFO][5468] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.068 [INFO][5468] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" iface="eth0" netns="" Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.068 [INFO][5468] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.068 [INFO][5468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.114 [INFO][5476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" HandleID="k8s-pod-network.b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.115 [INFO][5476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.115 [INFO][5476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.125 [WARNING][5476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" HandleID="k8s-pod-network.b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.125 [INFO][5476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" HandleID="k8s-pod-network.b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--kube--controllers--b8dbbdc94--vgwsd-eth0" Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.128 [INFO][5476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:03.136235 containerd[1456]: 2025-08-13 07:08:03.132 [INFO][5468] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53" Aug 13 07:08:03.138110 containerd[1456]: time="2025-08-13T07:08:03.136526664Z" level=info msg="TearDown network for sandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\" successfully" Aug 13 07:08:03.143560 containerd[1456]: time="2025-08-13T07:08:03.143456684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:03.144321 containerd[1456]: time="2025-08-13T07:08:03.143874626Z" level=info msg="RemovePodSandbox \"b72c7061371c0a37e2942cfce40b2a569eade4e264977e08f738be4210a07f53\" returns successfully" Aug 13 07:08:03.145848 containerd[1456]: time="2025-08-13T07:08:03.144956884Z" level=info msg="StopPodSandbox for \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\"" Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.203 [WARNING][5490] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22db522d-1126-4583-97ae-d9ff192443f7", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b", Pod:"coredns-674b8bbfcf-qxnq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic72bcdefd72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.204 [INFO][5490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.204 [INFO][5490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" iface="eth0" netns="" Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.204 [INFO][5490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.204 [INFO][5490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.276 [INFO][5497] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" HandleID="k8s-pod-network.f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.276 [INFO][5497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.276 [INFO][5497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.287 [WARNING][5497] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" HandleID="k8s-pod-network.f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.287 [INFO][5497] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" HandleID="k8s-pod-network.f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.294 [INFO][5497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:03.310846 containerd[1456]: 2025-08-13 07:08:03.301 [INFO][5490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:08:03.310846 containerd[1456]: time="2025-08-13T07:08:03.310570690Z" level=info msg="TearDown network for sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\" successfully" Aug 13 07:08:03.310846 containerd[1456]: time="2025-08-13T07:08:03.310607945Z" level=info msg="StopPodSandbox for \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\" returns successfully" Aug 13 07:08:03.315317 containerd[1456]: time="2025-08-13T07:08:03.315267577Z" level=info msg="RemovePodSandbox for \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\"" Aug 13 07:08:03.315962 containerd[1456]: time="2025-08-13T07:08:03.315402150Z" level=info msg="Forcibly stopping sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\"" Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.489 [WARNING][5515] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22db522d-1126-4583-97ae-d9ff192443f7", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"6c37366b9f8d0227509b9d12377f40658fd8ef2ed0b30c21c401205b1281829b", Pod:"coredns-674b8bbfcf-qxnq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic72bcdefd72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.489 [INFO][5515] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.489 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" iface="eth0" netns="" Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.489 [INFO][5515] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.489 [INFO][5515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.596 [INFO][5522] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" HandleID="k8s-pod-network.f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.596 [INFO][5522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.597 [INFO][5522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.612 [WARNING][5522] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" HandleID="k8s-pod-network.f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.612 [INFO][5522] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" HandleID="k8s-pod-network.f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-coredns--674b8bbfcf--qxnq4-eth0" Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.616 [INFO][5522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:03.624156 containerd[1456]: 2025-08-13 07:08:03.619 [INFO][5515] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c" Aug 13 07:08:03.628431 containerd[1456]: time="2025-08-13T07:08:03.624219350Z" level=info msg="TearDown network for sandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\" successfully" Aug 13 07:08:03.628431 containerd[1456]: time="2025-08-13T07:08:03.628411902Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:03.628551 containerd[1456]: time="2025-08-13T07:08:03.628509368Z" level=info msg="RemovePodSandbox \"f9e6c3649fde0c0dbc44a1e80afa9a4d8067acf5790e9af66c28cbd4c099119c\" returns successfully" Aug 13 07:08:03.630567 containerd[1456]: time="2025-08-13T07:08:03.629173698Z" level=info msg="StopPodSandbox for \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\"" Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.732 [WARNING][5536] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--6ff464d979--6n8l7-eth0" Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.734 [INFO][5536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.734 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" iface="eth0" netns="" Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.734 [INFO][5536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.734 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.834 [INFO][5543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" HandleID="k8s-pod-network.0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--6ff464d979--6n8l7-eth0" Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.838 [INFO][5543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.838 [INFO][5543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.857 [WARNING][5543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" HandleID="k8s-pod-network.0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--6ff464d979--6n8l7-eth0" Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.857 [INFO][5543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" HandleID="k8s-pod-network.0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--6ff464d979--6n8l7-eth0" Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.862 [INFO][5543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:03.885346 containerd[1456]: 2025-08-13 07:08:03.868 [INFO][5536] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:08:03.885346 containerd[1456]: time="2025-08-13T07:08:03.884980940Z" level=info msg="TearDown network for sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\" successfully" Aug 13 07:08:03.885346 containerd[1456]: time="2025-08-13T07:08:03.885036812Z" level=info msg="StopPodSandbox for \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\" returns successfully" Aug 13 07:08:03.905791 containerd[1456]: time="2025-08-13T07:08:03.905610454Z" level=info msg="RemovePodSandbox for \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\"" Aug 13 07:08:03.905791 containerd[1456]: time="2025-08-13T07:08:03.905675573Z" level=info msg="Forcibly stopping sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\"" Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.064 [WARNING][5558] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" WorkloadEndpoint="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--6ff464d979--6n8l7-eth0" Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.065 [INFO][5558] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.065 [INFO][5558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" iface="eth0" netns="" Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.065 [INFO][5558] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.065 [INFO][5558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.140 [INFO][5565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" HandleID="k8s-pod-network.0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--6ff464d979--6n8l7-eth0" Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.140 [INFO][5565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.140 [INFO][5565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.162 [WARNING][5565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" HandleID="k8s-pod-network.0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--6ff464d979--6n8l7-eth0" Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.162 [INFO][5565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" HandleID="k8s-pod-network.0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-whisker--6ff464d979--6n8l7-eth0" Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.167 [INFO][5565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:04.182083 containerd[1456]: 2025-08-13 07:08:04.173 [INFO][5558] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678" Aug 13 07:08:04.182083 containerd[1456]: time="2025-08-13T07:08:04.180118523Z" level=info msg="TearDown network for sandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\" successfully" Aug 13 07:08:04.185329 containerd[1456]: time="2025-08-13T07:08:04.184098840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:04.185329 containerd[1456]: time="2025-08-13T07:08:04.184327643Z" level=info msg="RemovePodSandbox \"0c830f15fe9973e8454cba23b9ddb044972f577fc92db1adcb519400d4d79678\" returns successfully" Aug 13 07:08:04.189085 containerd[1456]: time="2025-08-13T07:08:04.188828412Z" level=info msg="StopPodSandbox for \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\"" Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.332 [WARNING][5579] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"536d979e-0e84-4095-adcc-e89aae57b3e3", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa", Pod:"goldmane-768f4c5c69-67ggw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie13d9b1bfb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.332 [INFO][5579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.332 [INFO][5579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" iface="eth0" netns="" Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.332 [INFO][5579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.332 [INFO][5579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.484 [INFO][5586] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" HandleID="k8s-pod-network.2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.489 [INFO][5586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.489 [INFO][5586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.517 [WARNING][5586] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" HandleID="k8s-pod-network.2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.519 [INFO][5586] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" HandleID="k8s-pod-network.2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.527 [INFO][5586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:04.548064 containerd[1456]: 2025-08-13 07:08:04.536 [INFO][5579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:08:04.552175 containerd[1456]: time="2025-08-13T07:08:04.549351826Z" level=info msg="TearDown network for sandbox \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\" successfully" Aug 13 07:08:04.552500 containerd[1456]: time="2025-08-13T07:08:04.552261946Z" level=info msg="StopPodSandbox for \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\" returns successfully" Aug 13 07:08:04.554483 containerd[1456]: time="2025-08-13T07:08:04.553594336Z" level=info msg="RemovePodSandbox for \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\"" Aug 13 07:08:04.554483 containerd[1456]: time="2025-08-13T07:08:04.553649768Z" level=info msg="Forcibly stopping sandbox \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\"" Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.736 [WARNING][5601] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"536d979e-0e84-4095-adcc-e89aae57b3e3", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"fd8b9c483709a231114725b92bc937cc7a8ad912f8bac295de9a9b290ffe9bfa", Pod:"goldmane-768f4c5c69-67ggw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie13d9b1bfb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.736 [INFO][5601] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.736 [INFO][5601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" iface="eth0" netns="" Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.736 [INFO][5601] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.736 [INFO][5601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.854 [INFO][5612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" HandleID="k8s-pod-network.2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.856 [INFO][5612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.856 [INFO][5612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.870 [WARNING][5612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" HandleID="k8s-pod-network.2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.871 [INFO][5612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" HandleID="k8s-pod-network.2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-goldmane--768f4c5c69--67ggw-eth0" Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.875 [INFO][5612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:04.885518 containerd[1456]: 2025-08-13 07:08:04.880 [INFO][5601] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc" Aug 13 07:08:04.888994 containerd[1456]: time="2025-08-13T07:08:04.886400806Z" level=info msg="TearDown network for sandbox \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\" successfully" Aug 13 07:08:04.891570 containerd[1456]: time="2025-08-13T07:08:04.891382047Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:04.891570 containerd[1456]: time="2025-08-13T07:08:04.891490000Z" level=info msg="RemovePodSandbox \"2fc574b8f625f6e1218df557556801a495b177f6b9a9190c6a5b9c7860a2b8dc\" returns successfully" Aug 13 07:08:04.892320 containerd[1456]: time="2025-08-13T07:08:04.892028266Z" level=info msg="StopPodSandbox for \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\"" Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:04.970 [WARNING][5628] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94837213-7248-4886-ac34-73ab8173c672", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f", Pod:"csi-node-driver-n2xth", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43276127c1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:04.970 [INFO][5628] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:04.970 [INFO][5628] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" iface="eth0" netns="" Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:04.971 [INFO][5628] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:04.971 [INFO][5628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:05.047 [INFO][5636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" HandleID="k8s-pod-network.7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:05.048 [INFO][5636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:05.048 [INFO][5636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:05.060 [WARNING][5636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" HandleID="k8s-pod-network.7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:05.060 [INFO][5636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" HandleID="k8s-pod-network.7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:05.062 [INFO][5636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:05.073114 containerd[1456]: 2025-08-13 07:08:05.065 [INFO][5628] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:08:05.073114 containerd[1456]: time="2025-08-13T07:08:05.072936464Z" level=info msg="TearDown network for sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\" successfully" Aug 13 07:08:05.073114 containerd[1456]: time="2025-08-13T07:08:05.072971310Z" level=info msg="StopPodSandbox for \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\" returns successfully" Aug 13 07:08:05.074125 containerd[1456]: time="2025-08-13T07:08:05.073816447Z" level=info msg="RemovePodSandbox for \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\"" Aug 13 07:08:05.074125 containerd[1456]: time="2025-08-13T07:08:05.073851471Z" level=info msg="Forcibly stopping sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\"" Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.162 [WARNING][5651] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94837213-7248-4886-ac34-73ab8173c672", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f", Pod:"csi-node-driver-n2xth", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43276127c1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.163 [INFO][5651] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.163 [INFO][5651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" iface="eth0" netns="" Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.163 [INFO][5651] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.163 [INFO][5651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.272 [INFO][5659] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" HandleID="k8s-pod-network.7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.273 [INFO][5659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.273 [INFO][5659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.283 [WARNING][5659] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" HandleID="k8s-pod-network.7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.284 [INFO][5659] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" HandleID="k8s-pod-network.7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-csi--node--driver--n2xth-eth0" Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.286 [INFO][5659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:05.294782 containerd[1456]: 2025-08-13 07:08:05.291 [INFO][5651] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e" Aug 13 07:08:05.294782 containerd[1456]: time="2025-08-13T07:08:05.294728807Z" level=info msg="TearDown network for sandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\" successfully" Aug 13 07:08:05.300987 containerd[1456]: time="2025-08-13T07:08:05.300767294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:05.300987 containerd[1456]: time="2025-08-13T07:08:05.300873158Z" level=info msg="RemovePodSandbox \"7310fabde3c99120f57e7e17dbbedc2086b4fe35207ddf6bcedc1fc463d0c81e\" returns successfully" Aug 13 07:08:05.302718 containerd[1456]: time="2025-08-13T07:08:05.302251997Z" level=info msg="StopPodSandbox for \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\"" Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.386 [WARNING][5673] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0", GenerateName:"calico-apiserver-6df6784b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbc0eece-7b19-4b32-8aa8-4f52057a212b", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df6784b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c", Pod:"calico-apiserver-6df6784b98-8v5cf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94a741be5d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.387 [INFO][5673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.387 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" iface="eth0" netns="" Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.387 [INFO][5673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.387 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.460 [INFO][5680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" HandleID="k8s-pod-network.44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.460 [INFO][5680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.460 [INFO][5680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.471 [WARNING][5680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" HandleID="k8s-pod-network.44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.471 [INFO][5680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" HandleID="k8s-pod-network.44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.476 [INFO][5680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:05.494921 containerd[1456]: 2025-08-13 07:08:05.488 [INFO][5673] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:08:05.497413 containerd[1456]: time="2025-08-13T07:08:05.496312047Z" level=info msg="TearDown network for sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\" successfully" Aug 13 07:08:05.497413 containerd[1456]: time="2025-08-13T07:08:05.496354623Z" level=info msg="StopPodSandbox for \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\" returns successfully" Aug 13 07:08:05.528002 containerd[1456]: time="2025-08-13T07:08:05.527501228Z" level=info msg="RemovePodSandbox for \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\"" Aug 13 07:08:05.528002 containerd[1456]: time="2025-08-13T07:08:05.527562746Z" level=info msg="Forcibly stopping sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\"" Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.670 [WARNING][5694] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0", GenerateName:"calico-apiserver-6df6784b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbc0eece-7b19-4b32-8aa8-4f52057a212b", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df6784b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-5-1812e6c6f4", ContainerID:"7db0dc82d931515607cd68b8867a5930f5ea4a967d28178fc5f6d0fa84e64f9c", Pod:"calico-apiserver-6df6784b98-8v5cf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94a741be5d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.670 [INFO][5694] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.670 [INFO][5694] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" iface="eth0" netns="" Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.670 [INFO][5694] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.670 [INFO][5694] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.746 [INFO][5702] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" HandleID="k8s-pod-network.44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.747 [INFO][5702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.748 [INFO][5702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.766 [WARNING][5702] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" HandleID="k8s-pod-network.44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.773 [INFO][5702] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" HandleID="k8s-pod-network.44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Workload="ci--4081.3.5--5--1812e6c6f4-k8s-calico--apiserver--6df6784b98--8v5cf-eth0" Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.777 [INFO][5702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:08:05.788526 containerd[1456]: 2025-08-13 07:08:05.782 [INFO][5694] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a" Aug 13 07:08:05.789872 containerd[1456]: time="2025-08-13T07:08:05.789451477Z" level=info msg="TearDown network for sandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\" successfully" Aug 13 07:08:05.797938 containerd[1456]: time="2025-08-13T07:08:05.797866401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:08:05.818243 containerd[1456]: time="2025-08-13T07:08:05.818169114Z" level=info msg="RemovePodSandbox \"44e19f375f6508e6b600a7f85e4351adca187f3fdd24c7a910b3274a06aa619a\" returns successfully" Aug 13 07:08:06.001385 containerd[1456]: time="2025-08-13T07:08:05.993850683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:08:06.004402 containerd[1456]: time="2025-08-13T07:08:06.004273897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:06.037677 containerd[1456]: time="2025-08-13T07:08:06.037617298Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:06.050379 containerd[1456]: time="2025-08-13T07:08:06.049411751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 7.05213181s" Aug 13 07:08:06.050379 containerd[1456]: time="2025-08-13T07:08:06.049492185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:08:06.057424 containerd[1456]: time="2025-08-13T07:08:06.057371298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:06.080654 containerd[1456]: time="2025-08-13T07:08:06.080025078Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:08:06.440273 containerd[1456]: time="2025-08-13T07:08:06.440109967Z" level=info msg="CreateContainer within sandbox \"34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:08:06.663192 containerd[1456]: time="2025-08-13T07:08:06.662759181Z" level=info msg="CreateContainer within sandbox \"34637c227b09cc60b32bc6158acfc2867d07e4d9448ad02e524e622f5f24553b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7414f284b28d5a703b21d04e6a08df65327198dd7ed6a4c51c9134ee8cc04674\"" Aug 13 07:08:06.678609 containerd[1456]: time="2025-08-13T07:08:06.677500190Z" level=info msg="StartContainer for \"7414f284b28d5a703b21d04e6a08df65327198dd7ed6a4c51c9134ee8cc04674\"" Aug 13 07:08:07.051663 systemd[1]: Started cri-containerd-7414f284b28d5a703b21d04e6a08df65327198dd7ed6a4c51c9134ee8cc04674.scope - libcontainer container 7414f284b28d5a703b21d04e6a08df65327198dd7ed6a4c51c9134ee8cc04674. Aug 13 07:08:07.152153 containerd[1456]: time="2025-08-13T07:08:07.151774642Z" level=info msg="StartContainer for \"7414f284b28d5a703b21d04e6a08df65327198dd7ed6a4c51c9134ee8cc04674\" returns successfully" Aug 13 07:08:07.666232 systemd[1]: Started sshd@9-64.227.105.235:22-139.178.89.65:51432.service - OpenSSH per-connection server daemon (139.178.89.65:51432). Aug 13 07:08:07.797833 sshd[5760]: Accepted publickey for core from 139.178.89.65 port 51432 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:07.800173 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:07.808327 systemd-logind[1448]: New session 10 of user core. Aug 13 07:08:07.812640 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:08:07.971796 kubelet[2506]: I0813 07:08:07.958695 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b8dbbdc94-vgwsd" podStartSLOduration=28.413989375 podStartE2EDuration="45.93381582s" podCreationTimestamp="2025-08-13 07:07:22 +0000 UTC" firstStartedPulling="2025-08-13 07:07:48.543900463 +0000 UTC m=+49.168162089" lastFinishedPulling="2025-08-13 07:08:06.063726893 +0000 UTC m=+66.687988534" observedRunningTime="2025-08-13 07:08:07.925928414 +0000 UTC m=+68.550190065" watchObservedRunningTime="2025-08-13 07:08:07.93381582 +0000 UTC m=+68.558077468" Aug 13 07:08:08.659543 sshd[5760]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:08.674524 systemd[1]: sshd@9-64.227.105.235:22-139.178.89.65:51432.service: Deactivated successfully. Aug 13 07:08:08.678058 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:08:08.681872 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:08:08.686934 systemd[1]: Started sshd@10-64.227.105.235:22-139.178.89.65:51440.service - OpenSSH per-connection server daemon (139.178.89.65:51440). Aug 13 07:08:08.690108 systemd-logind[1448]: Removed session 10. Aug 13 07:08:08.780972 sshd[5774]: Accepted publickey for core from 139.178.89.65 port 51440 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:08.783504 sshd[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:08.793680 systemd-logind[1448]: New session 11 of user core. Aug 13 07:08:08.801677 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 13 07:08:08.905035 systemd[1]: run-containerd-runc-k8s.io-7414f284b28d5a703b21d04e6a08df65327198dd7ed6a4c51c9134ee8cc04674-runc.3NeZWw.mount: Deactivated successfully. Aug 13 07:08:09.171422 sshd[5774]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:09.189718 systemd[1]: sshd@10-64.227.105.235:22-139.178.89.65:51440.service: Deactivated successfully. Aug 13 07:08:09.197809 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:08:09.201046 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:08:09.217917 systemd[1]: Started sshd@11-64.227.105.235:22-139.178.89.65:48312.service - OpenSSH per-connection server daemon (139.178.89.65:48312). Aug 13 07:08:09.220580 systemd-logind[1448]: Removed session 11. Aug 13 07:08:09.318470 sshd[5805]: Accepted publickey for core from 139.178.89.65 port 48312 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:09.320295 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:09.331667 systemd-logind[1448]: New session 12 of user core. Aug 13 07:08:09.339494 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:08:09.497511 kubelet[2506]: I0813 07:08:09.486813 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:08:09.556407 sshd[5805]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:09.565782 systemd[1]: sshd@11-64.227.105.235:22-139.178.89.65:48312.service: Deactivated successfully. Aug 13 07:08:09.570342 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:08:09.572468 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:08:09.574936 systemd-logind[1448]: Removed session 12. 
Aug 13 07:08:10.656143 containerd[1456]: time="2025-08-13T07:08:10.656058328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:10.657345 containerd[1456]: time="2025-08-13T07:08:10.657156051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:08:10.658055 containerd[1456]: time="2025-08-13T07:08:10.657974185Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:10.660269 containerd[1456]: time="2025-08-13T07:08:10.660189259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:10.661038 containerd[1456]: time="2025-08-13T07:08:10.661001213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 4.580924485s" Aug 13 07:08:10.661123 containerd[1456]: time="2025-08-13T07:08:10.661041911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:08:10.678816 containerd[1456]: time="2025-08-13T07:08:10.678605799Z" level=info msg="CreateContainer within sandbox \"7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:08:10.714269 containerd[1456]: time="2025-08-13T07:08:10.714097383Z" level=info msg="CreateContainer within sandbox \"7be5df643d4bb79e836a4ded4586c9739517876c318ee4d0eedb181c02437b5f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b5e2e8e5bc4e37592957be9baf19550e70ca7626a6986952b52ad7633e298369\"" Aug 13 07:08:10.716325 containerd[1456]: time="2025-08-13T07:08:10.715093815Z" level=info msg="StartContainer for \"b5e2e8e5bc4e37592957be9baf19550e70ca7626a6986952b52ad7633e298369\"" Aug 13 07:08:10.759606 systemd[1]: Started cri-containerd-b5e2e8e5bc4e37592957be9baf19550e70ca7626a6986952b52ad7633e298369.scope - libcontainer container b5e2e8e5bc4e37592957be9baf19550e70ca7626a6986952b52ad7633e298369. 
Aug 13 07:08:10.804204 containerd[1456]: time="2025-08-13T07:08:10.804152680Z" level=info msg="StartContainer for \"b5e2e8e5bc4e37592957be9baf19550e70ca7626a6986952b52ad7633e298369\" returns successfully" Aug 13 07:08:11.792391 kubelet[2506]: I0813 07:08:11.790611 2506 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:08:11.794143 kubelet[2506]: I0813 07:08:11.794023 2506 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:08:14.575642 systemd[1]: Started sshd@12-64.227.105.235:22-139.178.89.65:48322.service - OpenSSH per-connection server daemon (139.178.89.65:48322). Aug 13 07:08:14.732205 sshd[5895]: Accepted publickey for core from 139.178.89.65 port 48322 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:14.739449 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:14.750721 systemd-logind[1448]: New session 13 of user core. Aug 13 07:08:14.754381 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:08:15.263626 sshd[5895]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:15.268303 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:08:15.269219 systemd[1]: sshd@12-64.227.105.235:22-139.178.89.65:48322.service: Deactivated successfully. Aug 13 07:08:15.272580 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:08:15.275453 systemd-logind[1448]: Removed session 13. Aug 13 07:08:17.548385 kubelet[2506]: E0813 07:08:17.548187 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:18.547105 kubelet[2506]: E0813 07:08:18.547051 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:20.282628 systemd[1]: Started sshd@13-64.227.105.235:22-139.178.89.65:55036.service - OpenSSH per-connection server daemon (139.178.89.65:55036). Aug 13 07:08:20.343098 sshd[5909]: Accepted publickey for core from 139.178.89.65 port 55036 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:20.345550 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:20.351309 systemd-logind[1448]: New session 14 of user core. Aug 13 07:08:20.358380 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:08:20.626086 sshd[5909]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:20.632490 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:08:20.633464 systemd[1]: sshd@13-64.227.105.235:22-139.178.89.65:55036.service: Deactivated successfully. Aug 13 07:08:20.635940 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:08:20.638710 systemd-logind[1448]: Removed session 14. 
Aug 13 07:08:20.766469 kubelet[2506]: I0813 07:08:20.766414 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:08:20.806068 kubelet[2506]: I0813 07:08:20.804651 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-n2xth" podStartSLOduration=34.366716316 podStartE2EDuration="59.801664542s" podCreationTimestamp="2025-08-13 07:07:21 +0000 UTC" firstStartedPulling="2025-08-13 07:07:45.227693734 +0000 UTC m=+45.851955373" lastFinishedPulling="2025-08-13 07:08:10.662641972 +0000 UTC m=+71.286903599" observedRunningTime="2025-08-13 07:08:10.862340797 +0000 UTC m=+71.486602442" watchObservedRunningTime="2025-08-13 07:08:20.801664542 +0000 UTC m=+81.425926218" Aug 13 07:08:25.651535 systemd[1]: Started sshd@14-64.227.105.235:22-139.178.89.65:55042.service - OpenSSH per-connection server daemon (139.178.89.65:55042). Aug 13 07:08:25.775710 sshd[5924]: Accepted publickey for core from 139.178.89.65 port 55042 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:25.778256 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:25.785340 systemd-logind[1448]: New session 15 of user core. Aug 13 07:08:25.791467 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:08:26.293344 sshd[5924]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:26.301497 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:08:26.301585 systemd[1]: sshd@14-64.227.105.235:22-139.178.89.65:55042.service: Deactivated successfully. Aug 13 07:08:26.305512 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:08:26.310664 systemd-logind[1448]: Removed session 15. Aug 13 07:08:28.546822 kubelet[2506]: E0813 07:08:28.546753 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:30.546946 kubelet[2506]: E0813 07:08:30.546896 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:31.311799 systemd[1]: Started sshd@15-64.227.105.235:22-139.178.89.65:59412.service - OpenSSH per-connection server daemon (139.178.89.65:59412). Aug 13 07:08:31.436242 sshd[5976]: Accepted publickey for core from 139.178.89.65 port 59412 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:31.439698 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:31.447683 systemd-logind[1448]: New session 16 of user core. Aug 13 07:08:31.452433 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:08:31.949424 sshd[5976]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:31.971251 systemd[1]: sshd@15-64.227.105.235:22-139.178.89.65:59412.service: Deactivated successfully. Aug 13 07:08:31.974832 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:08:31.977243 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:08:31.982573 systemd[1]: Started sshd@16-64.227.105.235:22-139.178.89.65:59424.service - OpenSSH per-connection server daemon (139.178.89.65:59424). Aug 13 07:08:31.986578 systemd-logind[1448]: Removed session 16. 
Aug 13 07:08:32.048344 sshd[5989]: Accepted publickey for core from 139.178.89.65 port 59424 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:32.050463 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:32.057034 systemd-logind[1448]: New session 17 of user core. Aug 13 07:08:32.062474 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:08:32.400494 sshd[5989]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:32.410885 systemd[1]: sshd@16-64.227.105.235:22-139.178.89.65:59424.service: Deactivated successfully. Aug 13 07:08:32.413640 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:08:32.415979 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:08:32.421653 systemd[1]: Started sshd@17-64.227.105.235:22-139.178.89.65:59426.service - OpenSSH per-connection server daemon (139.178.89.65:59426). Aug 13 07:08:32.423093 systemd-logind[1448]: Removed session 17. Aug 13 07:08:32.511616 sshd[6000]: Accepted publickey for core from 139.178.89.65 port 59426 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:32.514253 sshd[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:32.520353 systemd-logind[1448]: New session 18 of user core. Aug 13 07:08:32.529467 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:08:33.290671 sshd[6000]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:33.316515 systemd[1]: Started sshd@18-64.227.105.235:22-139.178.89.65:59436.service - OpenSSH per-connection server daemon (139.178.89.65:59436). Aug 13 07:08:33.318633 systemd[1]: sshd@17-64.227.105.235:22-139.178.89.65:59426.service: Deactivated successfully. Aug 13 07:08:33.325728 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:08:33.331106 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:08:33.334723 systemd-logind[1448]: Removed session 18. Aug 13 07:08:33.419355 sshd[6013]: Accepted publickey for core from 139.178.89.65 port 59436 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:33.422892 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:33.430597 systemd-logind[1448]: New session 19 of user core. Aug 13 07:08:33.436399 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:08:34.150833 sshd[6013]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:34.166532 systemd[1]: Started sshd@19-64.227.105.235:22-139.178.89.65:59450.service - OpenSSH per-connection server daemon (139.178.89.65:59450). Aug 13 07:08:34.167222 systemd[1]: sshd@18-64.227.105.235:22-139.178.89.65:59436.service: Deactivated successfully. Aug 13 07:08:34.170722 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:08:34.174803 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:08:34.186644 systemd-logind[1448]: Removed session 19. Aug 13 07:08:34.262230 sshd[6028]: Accepted publickey for core from 139.178.89.65 port 59450 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:34.264958 sshd[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:34.270682 systemd-logind[1448]: New session 20 of user core. Aug 13 07:08:34.277446 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 13 07:08:34.474920 sshd[6028]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:34.481879 systemd[1]: sshd@19-64.227.105.235:22-139.178.89.65:59450.service: Deactivated successfully. Aug 13 07:08:34.485192 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:08:34.486816 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:08:34.488816 systemd-logind[1448]: Removed session 20. Aug 13 07:08:39.498595 systemd[1]: Started sshd@20-64.227.105.235:22-139.178.89.65:40556.service - OpenSSH per-connection server daemon (139.178.89.65:40556). Aug 13 07:08:39.631757 sshd[6073]: Accepted publickey for core from 139.178.89.65 port 40556 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:39.640370 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:39.647166 systemd-logind[1448]: New session 21 of user core. Aug 13 07:08:39.652406 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:08:40.324580 sshd[6073]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:40.330049 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:08:40.330995 systemd[1]: sshd@20-64.227.105.235:22-139.178.89.65:40556.service: Deactivated successfully. Aug 13 07:08:40.333643 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:08:40.335858 systemd-logind[1448]: Removed session 21. Aug 13 07:08:45.345756 systemd[1]: Started sshd@21-64.227.105.235:22-139.178.89.65:40562.service - OpenSSH per-connection server daemon (139.178.89.65:40562). Aug 13 07:08:45.431738 sshd[6109]: Accepted publickey for core from 139.178.89.65 port 40562 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:45.435252 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:45.441779 systemd-logind[1448]: New session 22 of user core. Aug 13 07:08:45.446392 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:08:46.016769 sshd[6109]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:46.031838 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:08:46.032333 systemd[1]: sshd@21-64.227.105.235:22-139.178.89.65:40562.service: Deactivated successfully. Aug 13 07:08:46.035216 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:08:46.038306 systemd-logind[1448]: Removed session 22. Aug 13 07:08:51.040215 systemd[1]: Started sshd@22-64.227.105.235:22-139.178.89.65:55378.service - OpenSSH per-connection server daemon (139.178.89.65:55378). Aug 13 07:08:51.181189 sshd[6124]: Accepted publickey for core from 139.178.89.65 port 55378 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:08:51.182822 sshd[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:08:51.192483 systemd-logind[1448]: New session 23 of user core. Aug 13 07:08:51.202461 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:08:51.602827 sshd[6124]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:51.611413 systemd[1]: sshd@22-64.227.105.235:22-139.178.89.65:55378.service: Deactivated successfully. Aug 13 07:08:51.615982 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:08:51.618312 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:08:51.620207 systemd-logind[1448]: Removed session 23.