Jul 6 23:51:32.943864 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:51:32.943894 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:51:32.943908 kernel: BIOS-provided physical RAM map:
Jul 6 23:51:32.943915 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 6 23:51:32.943921 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 6 23:51:32.943928 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 6 23:51:32.943936 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jul 6 23:51:32.943943 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jul 6 23:51:32.943950 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:51:32.943960 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 6 23:51:32.943967 kernel: NX (Execute Disable) protection: active
Jul 6 23:51:32.943974 kernel: APIC: Static calls initialized
Jul 6 23:51:32.943985 kernel: SMBIOS 2.8 present.
Jul 6 23:51:32.943992 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jul 6 23:51:32.944001 kernel: Hypervisor detected: KVM
Jul 6 23:51:32.944013 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:51:32.944024 kernel: kvm-clock: using sched offset of 2892548829 cycles
Jul 6 23:51:32.944032 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:51:32.944040 kernel: tsc: Detected 2494.146 MHz processor
Jul 6 23:51:32.944049 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:51:32.944057 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:51:32.945163 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jul 6 23:51:32.945177 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 6 23:51:32.945186 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:51:32.945201 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:51:32.945209 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jul 6 23:51:32.945217 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:51:32.945225 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:51:32.945233 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:51:32.945242 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 6 23:51:32.945250 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:51:32.945258 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:51:32.945266 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:51:32.945277 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:51:32.945285 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jul 6 23:51:32.945293 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jul 6 23:51:32.945301 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 6 23:51:32.945309 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jul 6 23:51:32.945317 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jul 6 23:51:32.945325 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jul 6 23:51:32.945342 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jul 6 23:51:32.945350 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 6 23:51:32.945358 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 6 23:51:32.945367 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 6 23:51:32.945375 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 6 23:51:32.945389 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jul 6 23:51:32.945398 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jul 6 23:51:32.945410 kernel: Zone ranges:
Jul 6 23:51:32.945419 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:51:32.945427 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jul 6 23:51:32.945436 kernel: Normal empty
Jul 6 23:51:32.945444 kernel: Movable zone start for each node
Jul 6 23:51:32.945452 kernel: Early memory node ranges
Jul 6 23:51:32.945461 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 6 23:51:32.945469 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jul 6 23:51:32.945477 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jul 6 23:51:32.945489 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:51:32.945497 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 6 23:51:32.945508 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jul 6 23:51:32.945517 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:51:32.945525 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:51:32.945533 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:51:32.945541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:51:32.945550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:51:32.945558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:51:32.945570 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:51:32.945578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:51:32.945586 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:51:32.945595 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:51:32.945603 kernel: TSC deadline timer available
Jul 6 23:51:32.945611 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:51:32.945619 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:51:32.945628 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 6 23:51:32.945638 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:51:32.945647 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:51:32.945659 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:51:32.945667 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:51:32.945676 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:51:32.945684 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:51:32.945692 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 6 23:51:32.945702 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:51:32.945711 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:51:32.945723 kernel: random: crng init done
Jul 6 23:51:32.945732 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:51:32.945740 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:51:32.945748 kernel: Fallback order for Node 0: 0
Jul 6 23:51:32.945756 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jul 6 23:51:32.945765 kernel: Policy zone: DMA32
Jul 6 23:51:32.945773 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:51:32.945782 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 125148K reserved, 0K cma-reserved)
Jul 6 23:51:32.945790 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:51:32.945803 kernel: Kernel/User page tables isolation: enabled
Jul 6 23:51:32.945811 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:51:32.945819 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:51:32.945828 kernel: Dynamic Preempt: voluntary
Jul 6 23:51:32.945836 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:51:32.945845 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:51:32.945854 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:51:32.945862 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:51:32.945870 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:51:32.945879 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:51:32.945891 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:51:32.945899 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:51:32.945908 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 6 23:51:32.945916 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:51:32.945926 kernel: Console: colour VGA+ 80x25
Jul 6 23:51:32.945935 kernel: printk: console [tty0] enabled
Jul 6 23:51:32.945943 kernel: printk: console [ttyS0] enabled
Jul 6 23:51:32.945951 kernel: ACPI: Core revision 20230628
Jul 6 23:51:32.945960 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:51:32.945972 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:51:32.945980 kernel: x2apic enabled
Jul 6 23:51:32.945988 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:51:32.945997 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:51:32.946005 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns
Jul 6 23:51:32.946014 kernel: Calibrating delay loop (skipped) preset value.. 4988.29 BogoMIPS (lpj=2494146)
Jul 6 23:51:32.946022 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 6 23:51:32.946030 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 6 23:51:32.946053 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:51:32.946075 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:51:32.946084 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:51:32.946097 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 6 23:51:32.946105 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:51:32.946114 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:51:32.946122 kernel: MDS: Mitigation: Clear CPU buffers
Jul 6 23:51:32.946131 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:51:32.946140 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 6 23:51:32.946155 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:51:32.946164 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:51:32.946173 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:51:32.946182 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:51:32.946191 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 6 23:51:32.946200 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:51:32.946208 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:51:32.946218 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:51:32.946230 kernel: landlock: Up and running.
Jul 6 23:51:32.946239 kernel: SELinux: Initializing.
Jul 6 23:51:32.946248 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:51:32.946257 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:51:32.946267 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jul 6 23:51:32.946275 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:51:32.946284 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:51:32.946294 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:51:32.946302 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jul 6 23:51:32.946316 kernel: signal: max sigframe size: 1776
Jul 6 23:51:32.946325 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:51:32.946334 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:51:32.946343 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 6 23:51:32.946352 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:51:32.946360 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:51:32.946369 kernel: .... node #0, CPUs: #1
Jul 6 23:51:32.946378 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:51:32.946389 kernel: smpboot: Max logical packages: 1
Jul 6 23:51:32.946402 kernel: smpboot: Total of 2 processors activated (9976.58 BogoMIPS)
Jul 6 23:51:32.946411 kernel: devtmpfs: initialized
Jul 6 23:51:32.946420 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:51:32.946429 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:51:32.946438 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:51:32.946447 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:51:32.946456 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:51:32.946465 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:51:32.946474 kernel: audit: type=2000 audit(1751845891.965:1): state=initialized audit_enabled=0 res=1
Jul 6 23:51:32.946486 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:51:32.946495 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:51:32.946504 kernel: cpuidle: using governor menu
Jul 6 23:51:32.946513 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:51:32.946522 kernel: dca service started, version 1.12.1
Jul 6 23:51:32.946530 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:51:32.946539 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:51:32.946548 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:51:32.946557 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:51:32.946570 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:51:32.946579 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:51:32.946588 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:51:32.946596 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:51:32.946605 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:51:32.946614 kernel: ACPI: Interpreter enabled
Jul 6 23:51:32.946623 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:51:32.946632 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:51:32.946640 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:51:32.946654 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:51:32.946662 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 6 23:51:32.946671 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:51:32.946898 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:51:32.947008 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 6 23:51:32.947119 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 6 23:51:32.947131 kernel: acpiphp: Slot [3] registered
Jul 6 23:51:32.947146 kernel: acpiphp: Slot [4] registered
Jul 6 23:51:32.947155 kernel: acpiphp: Slot [5] registered
Jul 6 23:51:32.947164 kernel: acpiphp: Slot [6] registered
Jul 6 23:51:32.947173 kernel: acpiphp: Slot [7] registered
Jul 6 23:51:32.947182 kernel: acpiphp: Slot [8] registered
Jul 6 23:51:32.947191 kernel: acpiphp: Slot [9] registered
Jul 6 23:51:32.947200 kernel: acpiphp: Slot [10] registered
Jul 6 23:51:32.947209 kernel: acpiphp: Slot [11] registered
Jul 6 23:51:32.947218 kernel: acpiphp: Slot [12] registered
Jul 6 23:51:32.947227 kernel: acpiphp: Slot [13] registered
Jul 6 23:51:32.947240 kernel: acpiphp: Slot [14] registered
Jul 6 23:51:32.947249 kernel: acpiphp: Slot [15] registered
Jul 6 23:51:32.947257 kernel: acpiphp: Slot [16] registered
Jul 6 23:51:32.947266 kernel: acpiphp: Slot [17] registered
Jul 6 23:51:32.947275 kernel: acpiphp: Slot [18] registered
Jul 6 23:51:32.947285 kernel: acpiphp: Slot [19] registered
Jul 6 23:51:32.947293 kernel: acpiphp: Slot [20] registered
Jul 6 23:51:32.947302 kernel: acpiphp: Slot [21] registered
Jul 6 23:51:32.947311 kernel: acpiphp: Slot [22] registered
Jul 6 23:51:32.947323 kernel: acpiphp: Slot [23] registered
Jul 6 23:51:32.947332 kernel: acpiphp: Slot [24] registered
Jul 6 23:51:32.947341 kernel: acpiphp: Slot [25] registered
Jul 6 23:51:32.947349 kernel: acpiphp: Slot [26] registered
Jul 6 23:51:32.947358 kernel: acpiphp: Slot [27] registered
Jul 6 23:51:32.947366 kernel: acpiphp: Slot [28] registered
Jul 6 23:51:32.947375 kernel: acpiphp: Slot [29] registered
Jul 6 23:51:32.947384 kernel: acpiphp: Slot [30] registered
Jul 6 23:51:32.947392 kernel: acpiphp: Slot [31] registered
Jul 6 23:51:32.947405 kernel: PCI host bridge to bus 0000:00
Jul 6 23:51:32.947541 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:51:32.947648 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:51:32.947735 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:51:32.947821 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 6 23:51:32.947909 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 6 23:51:32.947994 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:51:32.948160 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 6 23:51:32.948274 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 6 23:51:32.948387 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 6 23:51:32.948488 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jul 6 23:51:32.948587 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 6 23:51:32.948685 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 6 23:51:32.948783 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 6 23:51:32.948888 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 6 23:51:32.949002 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jul 6 23:51:32.949191 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jul 6 23:51:32.949306 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 6 23:51:32.949404 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 6 23:51:32.949501 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 6 23:51:32.949615 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 6 23:51:32.949713 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 6 23:51:32.949812 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 6 23:51:32.949908 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jul 6 23:51:32.950003 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jul 6 23:51:32.950111 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:51:32.950228 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:51:32.950334 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jul 6 23:51:32.950476 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jul 6 23:51:32.950575 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 6 23:51:32.950693 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:51:32.950792 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jul 6 23:51:32.950890 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jul 6 23:51:32.950987 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 6 23:51:32.951155 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jul 6 23:51:32.951307 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jul 6 23:51:32.951403 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jul 6 23:51:32.951537 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 6 23:51:32.951713 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:51:32.951829 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jul 6 23:51:32.951938 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jul 6 23:51:32.952034 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 6 23:51:32.952193 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:51:32.952294 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jul 6 23:51:32.952403 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jul 6 23:51:32.952547 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jul 6 23:51:32.952705 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jul 6 23:51:32.952836 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jul 6 23:51:32.952937 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jul 6 23:51:32.952949 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:51:32.952959 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:51:32.952969 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:51:32.952978 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:51:32.952987 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 6 23:51:32.952996 kernel: iommu: Default domain type: Translated
Jul 6 23:51:32.953010 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:51:32.953020 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:51:32.953029 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:51:32.953038 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 6 23:51:32.953047 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jul 6 23:51:32.953168 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 6 23:51:32.953268 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 6 23:51:32.953364 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:51:32.953381 kernel: vgaarb: loaded
Jul 6 23:51:32.953390 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:51:32.953400 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:51:32.953408 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:51:32.953418 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:51:32.953427 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:51:32.953436 kernel: pnp: PnP ACPI init
Jul 6 23:51:32.953445 kernel: pnp: PnP ACPI: found 4 devices
Jul 6 23:51:32.953454 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:51:32.953467 kernel: NET: Registered PF_INET protocol family
Jul 6 23:51:32.953476 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:51:32.953485 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 6 23:51:32.953494 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:51:32.953503 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:51:32.953512 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 6 23:51:32.953522 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 6 23:51:32.953530 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:51:32.953540 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:51:32.953553 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:51:32.953562 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:51:32.953666 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:51:32.953755 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:51:32.953850 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:51:32.953939 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 6 23:51:32.954026 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 6 23:51:32.954294 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 6 23:51:32.954431 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 6 23:51:32.954445 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 6 23:51:32.954565 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 29281 usecs
Jul 6 23:51:32.954579 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:51:32.954588 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 6 23:51:32.954599 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns
Jul 6 23:51:32.954608 kernel: Initialise system trusted keyrings
Jul 6 23:51:32.954617 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 6 23:51:32.954631 kernel: Key type asymmetric registered
Jul 6 23:51:32.954640 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:51:32.954650 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:51:32.954659 kernel: io scheduler mq-deadline registered
Jul 6 23:51:32.954668 kernel: io scheduler kyber registered
Jul 6 23:51:32.954677 kernel: io scheduler bfq registered
Jul 6 23:51:32.954687 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:51:32.954696 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 6 23:51:32.954705 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 6 23:51:32.954714 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 6 23:51:32.954727 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:51:32.954736 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:51:32.954745 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:51:32.954754 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:51:32.954763 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:51:32.954884 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 6 23:51:32.954897 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:51:32.954986 kernel: rtc_cmos 00:03: registered as rtc0
Jul 6 23:51:32.955092 kernel: rtc_cmos 00:03: setting system clock to 2025-07-06T23:51:32 UTC (1751845892)
Jul 6 23:51:32.955182 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jul 6 23:51:32.955193 kernel: intel_pstate: CPU model not supported
Jul 6 23:51:32.955203 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:51:32.955211 kernel: Segment Routing with IPv6
Jul 6 23:51:32.955220 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:51:32.955229 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:51:32.955238 kernel: Key type dns_resolver registered
Jul 6 23:51:32.955251 kernel: IPI shorthand broadcast: enabled
Jul 6 23:51:32.955260 kernel: sched_clock: Marking stable (927003295, 110637112)->(1135648728, -98008321)
Jul 6 23:51:32.955269 kernel: registered taskstats version 1
Jul 6 23:51:32.955278 kernel: Loading compiled-in X.509 certificates
Jul 6 23:51:32.955288 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:51:32.955297 kernel: Key type .fscrypt registered
Jul 6 23:51:32.955306 kernel: Key type fscrypt-provisioning registered
Jul 6 23:51:32.955314 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:51:32.955323 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:51:32.955336 kernel: ima: No architecture policies found
Jul 6 23:51:32.955345 kernel: clk: Disabling unused clocks
Jul 6 23:51:32.955354 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:51:32.955363 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:51:32.955372 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:51:32.955405 kernel: Run /init as init process
Jul 6 23:51:32.955418 kernel: with arguments:
Jul 6 23:51:32.955428 kernel: /init
Jul 6 23:51:32.955441 kernel: with environment:
Jul 6 23:51:32.955454 kernel: HOME=/
Jul 6 23:51:32.955463 kernel: TERM=linux
Jul 6 23:51:32.955472 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:51:32.955484 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:51:32.955497 systemd[1]: Detected virtualization kvm.
Jul 6 23:51:32.955507 systemd[1]: Detected architecture x86-64.
Jul 6 23:51:32.955516 systemd[1]: Running in initrd.
Jul 6 23:51:32.955542 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:51:32.955556 systemd[1]: Hostname set to .
Jul 6 23:51:32.955566 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:51:32.955576 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:51:32.955586 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:51:32.955596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:51:32.955607 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:51:32.955617 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:51:32.955627 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:51:32.955641 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:51:32.955652 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:51:32.955663 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:51:32.955673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:51:32.955683 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:51:32.955693 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:51:32.955707 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:51:32.955717 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:51:32.955727 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:51:32.955740 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:51:32.955750 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:51:32.955760 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:51:32.955774 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:51:32.955784 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:51:32.955794 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:51:32.955804 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:51:32.955814 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:51:32.955824 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:51:32.955834 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:51:32.955844 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:51:32.955858 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:51:32.955868 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:51:32.955879 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:51:32.955888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:51:32.955898 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:51:32.955909 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:51:32.955919 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:51:32.955969 systemd-journald[183]: Collecting audit messages is disabled.
Jul 6 23:51:32.955994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:51:32.956011 systemd-journald[183]: Journal started
Jul 6 23:51:32.956034 systemd-journald[183]: Runtime Journal (/run/log/journal/4b206f58e28946e99873062da26bc099) is 4.9M, max 39.3M, 34.4M free.
Jul 6 23:51:32.954571 systemd-modules-load[184]: Inserted module 'overlay'
Jul 6 23:51:32.982089 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:51:32.982120 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:51:32.988109 kernel: Bridge firewalling registered
Jul 6 23:51:32.985625 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:51:32.986693 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jul 6 23:51:32.987791 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:51:32.999358 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:51:33.001204 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:51:33.004345 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:51:33.005116 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:51:33.009837 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:51:33.029816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:51:33.034688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:51:33.046316 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:51:33.047683 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:51:33.048847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:51:33.057380 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:51:33.074914 dracut-cmdline[216]: dracut-dracut-053
Jul 6 23:51:33.079090 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:51:33.094329 systemd-resolved[219]: Positive Trust Anchors:
Jul 6 23:51:33.094348 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:51:33.094386 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:51:33.097537 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jul 6 23:51:33.098886 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:51:33.099912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:51:33.170113 kernel: SCSI subsystem initialized
Jul 6 23:51:33.180106 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:51:33.191110 kernel: iscsi: registered transport (tcp)
Jul 6 23:51:33.213106 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:51:33.213217 kernel: QLogic iSCSI HBA Driver
Jul 6 23:51:33.261514 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:51:33.269383 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:51:33.297184 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:51:33.297287 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:51:33.299147 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:51:33.343122 kernel: raid6: avx2x4 gen() 24670 MB/s
Jul 6 23:51:33.359152 kernel: raid6: avx2x2 gen() 25035 MB/s
Jul 6 23:51:33.376360 kernel: raid6: avx2x1 gen() 21804 MB/s
Jul 6 23:51:33.376481 kernel: raid6: using algorithm avx2x2 gen() 25035 MB/s
Jul 6 23:51:33.394376 kernel: raid6: .... xor() 20615 MB/s, rmw enabled
Jul 6 23:51:33.394508 kernel: raid6: using avx2x2 recovery algorithm
Jul 6 23:51:33.418114 kernel: xor: automatically using best checksumming function avx
Jul 6 23:51:33.584225 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:51:33.597251 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:51:33.603309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:51:33.620022 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jul 6 23:51:33.625088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:51:33.634510 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:51:33.651860 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jul 6 23:51:33.699021 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:51:33.706507 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:51:33.823130 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:51:33.830305 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:51:33.861891 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:51:33.864002 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:51:33.866014 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:51:33.866928 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:51:33.876654 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:51:33.895103 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jul 6 23:51:33.906160 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 6 23:51:33.906796 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:51:33.924102 kernel: scsi host0: Virtio SCSI HBA
Jul 6 23:51:33.927300 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:51:33.935441 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:51:33.935779 kernel: GPT:9289727 != 125829119
Jul 6 23:51:33.935839 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:51:33.936338 kernel: GPT:9289727 != 125829119
Jul 6 23:51:33.937140 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:51:33.938573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:51:33.974450 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:51:33.975288 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:51:33.977819 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:51:33.978446 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:51:33.978614 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:51:33.985647 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jul 6 23:51:33.985880 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jul 6 23:51:33.979127 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:51:33.994997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:51:33.999121 kernel: ACPI: bus type USB registered
Jul 6 23:51:33.999198 kernel: usbcore: registered new interface driver usbfs
Jul 6 23:51:33.999221 kernel: usbcore: registered new interface driver hub
Jul 6 23:51:34.004110 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:51:34.004202 kernel: libata version 3.00 loaded.
Jul 6 23:51:34.007115 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:51:34.013519 kernel: usbcore: registered new device driver usb
Jul 6 23:51:34.022103 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 6 23:51:34.053113 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (451)
Jul 6 23:51:34.065120 kernel: scsi host1: ata_piix
Jul 6 23:51:34.069121 kernel: scsi host2: ata_piix
Jul 6 23:51:34.069385 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jul 6 23:51:34.069411 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jul 6 23:51:34.079283 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:51:34.102749 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (445)
Jul 6 23:51:34.103924 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:51:34.109581 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:51:34.119315 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:51:34.122980 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:51:34.123642 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:51:34.130443 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:51:34.132253 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:51:34.140651 disk-uuid[531]: Primary Header is updated.
Jul 6 23:51:34.140651 disk-uuid[531]: Secondary Entries is updated.
Jul 6 23:51:34.140651 disk-uuid[531]: Secondary Header is updated.
Jul 6 23:51:34.155097 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:51:34.162103 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:51:34.176231 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:51:34.178480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:51:34.271480 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jul 6 23:51:34.271833 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jul 6 23:51:34.271986 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jul 6 23:51:34.273351 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jul 6 23:51:34.273590 kernel: hub 1-0:1.0: USB hub found
Jul 6 23:51:34.274255 kernel: hub 1-0:1.0: 2 ports detected
Jul 6 23:51:35.170340 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:51:35.170780 disk-uuid[532]: The operation has completed successfully.
Jul 6 23:51:35.221182 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:51:35.221306 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:51:35.230268 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:51:35.243128 sh[563]: Success
Jul 6 23:51:35.262086 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 6 23:51:35.338535 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:51:35.340840 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:51:35.341825 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:51:35.373545 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:51:35.373651 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:51:35.373667 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:51:35.374654 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:51:35.375355 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:51:35.384583 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:51:35.385784 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:51:35.396347 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:51:35.399274 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:51:35.412224 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:51:35.412294 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:51:35.412308 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:51:35.417118 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:51:35.428542 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:51:35.429381 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:51:35.435268 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:51:35.442990 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:51:35.544051 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:51:35.556511 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:51:35.579978 systemd-networkd[748]: lo: Link UP
Jul 6 23:51:35.579989 systemd-networkd[748]: lo: Gained carrier
Jul 6 23:51:35.584540 systemd-networkd[748]: Enumeration completed
Jul 6 23:51:35.584700 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:51:35.585016 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 6 23:51:35.585020 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jul 6 23:51:35.586396 systemd[1]: Reached target network.target - Network.
Jul 6 23:51:35.586901 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:51:35.586906 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:51:35.588815 systemd-networkd[748]: eth0: Link UP
Jul 6 23:51:35.588821 systemd-networkd[748]: eth0: Gained carrier
Jul 6 23:51:35.591303 ignition[648]: Ignition 2.19.0
Jul 6 23:51:35.588832 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 6 23:51:35.591340 ignition[648]: Stage: fetch-offline
Jul 6 23:51:35.592544 systemd-networkd[748]: eth1: Link UP
Jul 6 23:51:35.591399 ignition[648]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:51:35.592548 systemd-networkd[748]: eth1: Gained carrier
Jul 6 23:51:35.591410 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:51:35.592562 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:51:35.591565 ignition[648]: parsed url from cmdline: ""
Jul 6 23:51:35.594944 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:51:35.591569 ignition[648]: no config URL provided
Jul 6 23:51:35.591575 ignition[648]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:51:35.591587 ignition[648]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:51:35.591594 ignition[648]: failed to fetch config: resource requires networking
Jul 6 23:51:35.591890 ignition[648]: Ignition finished successfully
Jul 6 23:51:35.602314 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:51:35.606173 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.24/20 acquired from 169.254.169.253
Jul 6 23:51:35.610161 systemd-networkd[748]: eth0: DHCPv4 address 134.199.239.131/20, gateway 134.199.224.1 acquired from 169.254.169.253
Jul 6 23:51:35.622514 ignition[755]: Ignition 2.19.0
Jul 6 23:51:35.622526 ignition[755]: Stage: fetch
Jul 6 23:51:35.622718 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:51:35.622730 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:51:35.622827 ignition[755]: parsed url from cmdline: ""
Jul 6 23:51:35.622831 ignition[755]: no config URL provided
Jul 6 23:51:35.622837 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:51:35.622848 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:51:35.622874 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jul 6 23:51:35.637434 ignition[755]: GET result: OK
Jul 6 23:51:35.638296 ignition[755]: parsing config with SHA512: 8942e0df8ca5be5a7d2d847c973c080632786a8854f1c059f08011e38ab27df2e5e5fa71328936a29506bfd411a093fd331045b91ac09825c69a03d4106a9062
Jul 6 23:51:35.644240 unknown[755]: fetched base config from "system"
Jul 6 23:51:35.644811 unknown[755]: fetched base config from "system"
Jul 6 23:51:35.645253 unknown[755]: fetched user config from "digitalocean"
Jul 6 23:51:35.646427 ignition[755]: fetch: fetch complete
Jul 6 23:51:35.646436 ignition[755]: fetch: fetch passed
Jul 6 23:51:35.646526 ignition[755]: Ignition finished successfully
Jul 6 23:51:35.648872 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:51:35.659343 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:51:35.684035 ignition[762]: Ignition 2.19.0
Jul 6 23:51:35.684078 ignition[762]: Stage: kargs
Jul 6 23:51:35.684317 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:51:35.684334 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:51:35.687198 ignition[762]: kargs: kargs passed
Jul 6 23:51:35.687750 ignition[762]: Ignition finished successfully
Jul 6 23:51:35.689819 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:51:35.695321 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:51:35.727164 ignition[768]: Ignition 2.19.0
Jul 6 23:51:35.727175 ignition[768]: Stage: disks
Jul 6 23:51:35.727409 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:51:35.727421 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:51:35.728365 ignition[768]: disks: disks passed
Jul 6 23:51:35.729845 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:51:35.728443 ignition[768]: Ignition finished successfully
Jul 6 23:51:35.733858 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:51:35.734723 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:51:35.735404 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:51:35.736162 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:51:35.736795 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:51:35.743363 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:51:35.762425 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:51:35.766050 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:51:35.772216 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:51:35.883098 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:51:35.884583 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:51:35.886342 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:51:35.896265 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:51:35.899256 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:51:35.901058 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jul 6 23:51:35.908096 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (784)
Jul 6 23:51:35.910259 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:51:35.910337 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:51:35.910353 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:51:35.912317 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:51:35.914567 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:51:35.914630 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:51:35.920095 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:51:35.925401 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:51:35.938160 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:51:35.950014 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:51:35.998926 coreos-metadata[786]: Jul 06 23:51:35.998 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 6 23:51:36.011465 coreos-metadata[787]: Jul 06 23:51:36.011 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 6 23:51:36.014527 coreos-metadata[786]: Jul 06 23:51:36.011 INFO Fetch successful
Jul 6 23:51:36.018406 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jul 6 23:51:36.018518 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jul 6 23:51:36.024013 coreos-metadata[787]: Jul 06 23:51:36.023 INFO Fetch successful
Jul 6 23:51:36.029223 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:51:36.032013 coreos-metadata[787]: Jul 06 23:51:36.031 INFO wrote hostname ci-4081.3.4-d-7537ff12ef to /sysroot/etc/hostname
Jul 6 23:51:36.035083 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:51:36.039737 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:51:36.045994 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:51:36.051702 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:51:36.173787 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:51:36.179365 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:51:36.181322 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:51:36.209107 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:51:36.225693 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:51:36.262693 ignition[906]: INFO : Ignition 2.19.0 Jul 6 23:51:36.262693 ignition[906]: INFO : Stage: mount Jul 6 23:51:36.264243 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:51:36.264243 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:51:36.265585 ignition[906]: INFO : mount: mount passed Jul 6 23:51:36.265585 ignition[906]: INFO : Ignition finished successfully Jul 6 23:51:36.266851 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:51:36.274354 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:51:36.372757 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:51:36.383455 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:51:36.394120 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (916) Jul 6 23:51:36.396699 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:51:36.396802 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:51:36.396847 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:51:36.407104 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:51:36.410174 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:51:36.438556 ignition[933]: INFO : Ignition 2.19.0 Jul 6 23:51:36.438556 ignition[933]: INFO : Stage: files Jul 6 23:51:36.440213 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:51:36.440213 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:51:36.442535 ignition[933]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:51:36.443656 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:51:36.443656 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:51:36.448379 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:51:36.449245 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:51:36.450296 unknown[933]: wrote ssh authorized keys file for user: core Jul 6 23:51:36.451178 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:51:36.454682 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:51:36.454682 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 6 23:51:36.737472 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:51:36.849231 systemd-networkd[748]: eth0: Gained IPv6LL Jul 6 23:51:36.916725 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:51:36.916725 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:51:36.918659 
ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:51:36.918659 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 6 23:51:36.977420 systemd-networkd[748]: eth1: Gained IPv6LL Jul 6 23:51:37.686758 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:51:38.463660 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:51:38.463660 ignition[933]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:51:38.465058 ignition[933]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:51:38.465058 ignition[933]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:51:38.465058 ignition[933]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:51:38.465058 ignition[933]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:51:38.465058 ignition[933]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:51:38.465058 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:51:38.470368 ignition[933]: INFO : files: createResultFile: createFiles: op(e): 
[finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:51:38.470368 ignition[933]: INFO : files: files passed Jul 6 23:51:38.470368 ignition[933]: INFO : Ignition finished successfully Jul 6 23:51:38.468864 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:51:38.475317 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:51:38.477232 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:51:38.481827 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:51:38.481944 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:51:38.504813 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:51:38.504813 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:51:38.506289 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:51:38.508988 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:51:38.509690 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:51:38.516347 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:51:38.551345 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:51:38.551509 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:51:38.552889 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:51:38.553289 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:51:38.553988 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:51:38.556238 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:51:38.582978 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:51:38.594440 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:51:38.605568 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:51:38.606882 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:51:38.607992 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:51:38.608847 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:51:38.608993 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:51:38.610789 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:51:38.611924 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:51:38.612391 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:51:38.613193 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:51:38.613862 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:51:38.614617 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:51:38.615430 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:51:38.616266 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:51:38.616875 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jul 6 23:51:38.617585 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:51:38.618136 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:51:38.618310 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:51:38.619400 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:51:38.619928 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:51:38.620573 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:51:38.621443 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:51:38.622437 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:51:38.622628 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:51:38.623645 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:51:38.623841 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:51:38.624975 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:51:38.625225 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:51:38.625964 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:51:38.626118 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:51:38.633536 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:51:38.633973 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:51:38.634189 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:51:38.636156 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:51:38.637080 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:51:38.637262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:51:38.638360 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:51:38.638462 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:51:38.647340 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:51:38.647495 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:51:38.666098 ignition[985]: INFO : Ignition 2.19.0 Jul 6 23:51:38.666098 ignition[985]: INFO : Stage: umount Jul 6 23:51:38.666098 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:51:38.666098 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:51:38.672272 ignition[985]: INFO : umount: umount passed Jul 6 23:51:38.672272 ignition[985]: INFO : Ignition finished successfully Jul 6 23:51:38.669352 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:51:38.672181 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:51:38.672308 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:51:38.673545 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:51:38.673712 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:51:38.677994 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:51:38.678165 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:51:38.679498 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:51:38.679581 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jul 6 23:51:38.680099 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:51:38.680151 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:51:38.680743 systemd[1]: Stopped target network.target - Network. Jul 6 23:51:38.681495 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:51:38.681567 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:51:38.682362 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:51:38.682875 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:51:38.686188 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:51:38.687256 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:51:38.687653 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:51:38.688338 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:51:38.688401 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:51:38.688955 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:51:38.689000 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:51:38.689535 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:51:38.689599 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:51:38.690164 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:51:38.690233 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:51:38.691045 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:51:38.691112 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:51:38.691896 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:51:38.692627 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:51:38.697151 systemd-networkd[748]: eth0: DHCPv6 lease lost Jul 6 23:51:38.701811 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:51:38.702003 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:51:38.702179 systemd-networkd[748]: eth1: DHCPv6 lease lost Jul 6 23:51:38.706484 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:51:38.706643 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:51:38.708597 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:51:38.708653 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:51:38.714254 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:51:38.714630 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:51:38.714707 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:51:38.715156 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:51:38.715217 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:51:38.715628 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:51:38.715673 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:51:38.716032 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:51:38.716131 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 6 23:51:38.717014 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:51:38.734990 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:51:38.735269 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:51:38.737403 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:51:38.737581 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:51:38.739025 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:51:38.739142 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:51:38.739613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:51:38.739650 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:51:38.740311 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:51:38.740358 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:51:38.741404 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:51:38.741453 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:51:38.742193 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:51:38.742243 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:51:38.749420 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:51:38.750952 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:51:38.751032 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:51:38.753725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:51:38.753800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:51:38.760408 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:51:38.760541 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:51:38.761545 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:51:38.773886 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:51:38.782432 systemd[1]: Switching root. Jul 6 23:51:38.815283 systemd-journald[183]: Journal stopped Jul 6 23:51:39.967944 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jul 6 23:51:39.968082 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:51:39.968100 kernel: SELinux: policy capability open_perms=1 Jul 6 23:51:39.968116 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:51:39.968128 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:51:39.968140 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:51:39.968152 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:51:39.968164 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:51:39.968176 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:51:39.968188 kernel: audit: type=1403 audit(1751845898.974:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:51:39.968204 systemd[1]: Successfully loaded SELinux policy in 38.653ms. Jul 6 23:51:39.968235 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.820ms. 
Jul 6 23:51:39.968251 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:51:39.968287 systemd[1]: Detected virtualization kvm. Jul 6 23:51:39.968305 systemd[1]: Detected architecture x86-64. Jul 6 23:51:39.968319 systemd[1]: Detected first boot. Jul 6 23:51:39.968332 systemd[1]: Hostname set to . Jul 6 23:51:39.968344 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:51:39.968362 zram_generator::config[1027]: No configuration found. Jul 6 23:51:39.968380 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:51:39.968393 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:51:39.968406 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:51:39.968420 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:51:39.968435 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:51:39.968449 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:51:39.968462 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:51:39.968475 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:51:39.968488 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:51:39.968506 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:51:39.968519 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:51:39.968532 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:51:39.968545 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:51:39.968558 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:51:39.968571 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:51:39.968589 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:51:39.968603 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:51:39.968616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:51:39.968632 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:51:39.968645 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:51:39.968657 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:51:39.968671 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:51:39.968684 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:51:39.968696 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:51:39.968712 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:51:39.968730 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:51:39.968743 systemd[1]: Reached target slices.target - Slice Units. 
Jul 6 23:51:39.968756 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:51:39.968768 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:51:39.968781 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:51:39.968795 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:51:39.968836 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:51:39.968849 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:51:39.968865 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:51:39.968878 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:51:39.968891 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:51:39.968910 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:51:39.968922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:39.968936 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:51:39.968949 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:51:39.968963 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:51:39.968977 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:51:39.968994 systemd[1]: Reached target machines.target - Containers. Jul 6 23:51:39.969007 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:51:39.969021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:51:39.969033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:51:39.969046 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:51:39.969059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:51:39.993171 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:51:39.993194 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:51:39.993215 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:51:39.993301 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:51:39.993318 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:51:39.993331 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:51:39.993344 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:51:39.993357 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:51:39.993370 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:51:39.993384 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:51:39.993407 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:51:39.993431 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:51:39.993450 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jul 6 23:51:39.993467 kernel: loop: module loaded Jul 6 23:51:39.993488 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:51:39.993506 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:51:39.993518 systemd[1]: Stopped verity-setup.service. Jul 6 23:51:39.993532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:39.993545 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:51:39.993557 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:51:39.993575 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:51:39.993588 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:51:39.993601 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:51:39.993614 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:51:39.993633 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:51:39.993646 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:51:39.993660 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:51:39.993673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:51:39.993689 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:51:39.993702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:51:39.993717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:51:39.993730 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:51:39.993764 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:51:39.993778 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:51:39.993791 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:51:39.993803 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:51:39.993816 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:51:39.993828 kernel: ACPI: bus type drm_connector registered Jul 6 23:51:39.993844 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:51:39.993857 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:51:39.993870 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:51:39.993883 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:51:39.993895 kernel: fuse: init (API version 7.39) Jul 6 23:51:39.993907 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:51:39.993920 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:51:39.993958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:51:39.993971 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:51:39.993987 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 6 23:51:39.994040 systemd-journald[1096]: Collecting audit messages is disabled. Jul 6 23:51:39.994088 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:51:39.994103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:51:39.994119 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:51:39.994134 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:51:39.994147 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:51:39.994159 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:51:39.994177 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:51:39.994189 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:51:39.994203 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:51:39.994216 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:51:39.994231 systemd-journald[1096]: Journal started Jul 6 23:51:39.994257 systemd-journald[1096]: Runtime Journal (/run/log/journal/4b206f58e28946e99873062da26bc099) is 4.9M, max 39.3M, 34.4M free. Jul 6 23:51:39.999497 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:51:39.584724 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:51:39.603033 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 6 23:51:40.005851 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:51:39.603760 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:51:40.004563 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:51:40.005336 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:51:40.015413 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:51:40.027287 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:51:40.029967 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:51:40.090553 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:51:40.101199 kernel: loop0: detected capacity change from 0 to 229808 Jul 6 23:51:40.103364 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:51:40.104772 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:51:40.105614 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:51:40.107719 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 6 23:51:40.125462 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:51:40.126167 systemd-journald[1096]: Time spent on flushing to /var/log/journal/4b206f58e28946e99873062da26bc099 is 49.247ms for 994 entries. Jul 6 23:51:40.126167 systemd-journald[1096]: System Journal (/var/log/journal/4b206f58e28946e99873062da26bc099) is 8.0M, max 195.6M, 187.6M free. Jul 6 23:51:40.192135 systemd-journald[1096]: Received client request to flush runtime journal. 
Jul 6 23:51:40.192209 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:51:40.192234 kernel: loop1: detected capacity change from 0 to 140768 Jul 6 23:51:40.127150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:51:40.176893 udevadm[1154]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 6 23:51:40.196683 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:51:40.222122 kernel: loop2: detected capacity change from 0 to 8 Jul 6 23:51:40.241657 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:51:40.256787 kernel: loop3: detected capacity change from 0 to 142488 Jul 6 23:51:40.256372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:51:40.316095 kernel: loop4: detected capacity change from 0 to 229808 Jul 6 23:51:40.336233 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jul 6 23:51:40.336256 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jul 6 23:51:40.348750 kernel: loop5: detected capacity change from 0 to 140768 Jul 6 23:51:40.356416 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:51:40.373120 kernel: loop6: detected capacity change from 0 to 8 Jul 6 23:51:40.377252 kernel: loop7: detected capacity change from 0 to 142488 Jul 6 23:51:40.401741 (sd-merge)[1172]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jul 6 23:51:40.402330 (sd-merge)[1172]: Merged extensions into '/usr'. Jul 6 23:51:40.413854 systemd[1]: Reloading requested from client PID 1125 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:51:40.413880 systemd[1]: Reloading... Jul 6 23:51:40.631112 zram_generator::config[1200]: No configuration found. Jul 6 23:51:40.670695 ldconfig[1117]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:51:40.800554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:51:40.851793 systemd[1]: Reloading finished in 433 ms. Jul 6 23:51:40.878078 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:51:40.879187 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:51:40.891340 systemd[1]: Starting ensure-sysext.service... Jul 6 23:51:40.895377 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:51:40.914304 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:51:40.914331 systemd[1]: Reloading... Jul 6 23:51:40.973264 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:51:40.976535 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:51:40.980242 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:51:40.981534 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jul 6 23:51:40.982315 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. 
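
[The (sd-merge) lines record systemd-sysext merging four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean') into /usr; the kubernetes image arrives via the /etc/extensions/kubernetes.raw link Ignition wrote earlier. A small sketch that enumerates the same candidate images, assuming the standard sysext search directories; how systemd-sysext itself scans may differ:

    import os

    # Directories systemd-sysext conventionally considers for extension images.
    SYSEXT_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def list_extensions(dirs=SYSEXT_DIRS):
        found = []
        for d in dirs:
            if not os.path.isdir(d):
                continue
            for name in sorted(os.listdir(d)):
                path = os.path.join(d, name)
                # Resolve links such as kubernetes.raw -> /opt/extensions/... seen in the log.
                found.append((name, os.path.realpath(path)))
        return found

    if __name__ == "__main__":
        for name, target in list_extensions():
            print(f"{name} -> {target}")]
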
Jul 6 23:51:40.990655 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:51:40.992116 systemd-tmpfiles[1243]: Skipping /boot Jul 6 23:51:41.030446 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:51:41.030612 systemd-tmpfiles[1243]: Skipping /boot Jul 6 23:51:41.054161 zram_generator::config[1270]: No configuration found. Jul 6 23:51:41.216564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:51:41.270393 systemd[1]: Reloading finished in 355 ms. Jul 6 23:51:41.289503 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:51:41.294804 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:51:41.312458 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:51:41.316847 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:51:41.320317 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:51:41.326314 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:51:41.330316 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:51:41.336438 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:51:41.340671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:41.340870 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:51:41.349519 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:51:41.354233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:51:41.365502 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:51:41.366300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:51:41.366461 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:41.370714 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:41.370949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:51:41.371162 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:51:41.371253 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:41.376454 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:41.376758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:51:41.387076 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 6 23:51:41.389324 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:51:41.389528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:41.396491 systemd[1]: Finished ensure-sysext.service. Jul 6 23:51:41.397359 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:51:41.413419 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:51:41.417431 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:51:41.428016 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:51:41.429372 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:51:41.430178 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:51:41.431534 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:51:41.433143 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:51:41.447478 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:51:41.455521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:51:41.456434 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:51:41.462432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:51:41.464174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:51:41.469451 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:51:41.473747 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:51:41.473829 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:51:41.478247 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:51:41.479333 systemd-udevd[1320]: Using default interface naming scheme 'v255'. Jul 6 23:51:41.480941 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:51:41.504938 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:51:41.516380 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:51:41.543190 augenrules[1366]: No rules Jul 6 23:51:41.544387 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:51:41.577519 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:51:41.696258 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:51:41.726120 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1361) Jul 6 23:51:41.728526 systemd-resolved[1319]: Positive Trust Anchors: Jul 6 23:51:41.728551 systemd-resolved[1319]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:51:41.728589 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:51:41.744931 systemd-resolved[1319]: Using system hostname 'ci-4081.3.4-d-7537ff12ef'. Jul 6 23:51:41.747499 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:51:41.748359 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:51:41.773199 systemd-networkd[1355]: lo: Link UP Jul 6 23:51:41.773209 systemd-networkd[1355]: lo: Gained carrier Jul 6 23:51:41.774906 systemd-networkd[1355]: Enumeration completed Jul 6 23:51:41.775127 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:51:41.775903 systemd[1]: Reached target network.target - Network. Jul 6 23:51:41.776458 systemd-networkd[1355]: eth0: Configuring with /run/systemd/network/10-62:14:96:90:8b:84.network. Jul 6 23:51:41.782232 systemd-networkd[1355]: eth0: Link UP Jul 6 23:51:41.782247 systemd-networkd[1355]: eth0: Gained carrier Jul 6 23:51:41.785344 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:51:41.785875 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:51:41.786367 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:51:41.789031 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection. Jul 6 23:51:41.830265 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jul 6 23:51:41.830944 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:41.831160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:51:41.837376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:51:41.842397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:51:41.855414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:51:41.855952 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:51:41.855997 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:51:41.856018 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:51:41.863100 kernel: ISO 9660 Extensions: RRIP_1991A Jul 6 23:51:41.862710 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jul 6 23:51:41.870492 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jul 6 23:51:41.877778 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:51:41.890691 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:51:41.892265 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:51:41.898570 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:51:41.898750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:51:41.899334 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:51:41.915895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:51:41.916265 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:51:41.917039 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:51:41.920373 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 6 23:51:41.920995 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:51:41.929098 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 6 23:51:41.939093 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:51:41.944928 systemd-networkd[1355]: eth1: Configuring with /run/systemd/network/10-4e:5d:38:54:68:b6.network. Jul 6 23:51:41.945827 systemd-networkd[1355]: eth1: Link UP Jul 6 23:51:41.945836 systemd-networkd[1355]: eth1: Gained carrier Jul 6 23:51:41.960096 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 6 23:51:42.103343 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 6 23:51:42.103486 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 6 23:51:42.103748 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:51:42.105581 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:51:42.105544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:51:42.106525 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 6 23:51:42.106580 kernel: [drm] features: -context_init Jul 6 23:51:42.109632 kernel: [drm] number of scanouts: 1 Jul 6 23:51:42.109700 kernel: [drm] number of cap sets: 0 Jul 6 23:51:42.119102 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jul 6 23:51:42.124151 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jul 6 23:51:42.124239 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:51:42.123788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:51:42.124123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:51:42.132152 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 6 23:51:42.145436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:51:42.160014 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:51:42.160790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:51:42.171908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 6 23:51:42.194835 kernel: EDAC MC: Ver: 3.0.0 Jul 6 23:51:42.216805 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:51:42.224960 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:51:42.242570 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:51:42.263920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:51:42.269384 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:51:42.271553 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:51:42.271741 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:51:42.271934 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:51:42.272034 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:51:42.274830 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:51:42.275044 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:51:42.276815 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:51:42.276917 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:51:42.276947 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:51:42.276997 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:51:42.278713 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:51:42.280865 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:51:42.288106 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:51:42.297398 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:51:42.298907 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:51:42.300733 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:51:42.301176 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:51:42.301609 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:51:42.301643 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:51:42.305254 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:51:42.309662 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:51:42.318893 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:51:42.326486 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:51:42.814339 systemd-timesyncd[1334]: Contacted time server 67.217.242.117:123 (0.flatcar.pool.ntp.org). Jul 6 23:51:42.814425 systemd-timesyncd[1334]: Initial clock synchronization to Sun 2025-07-06 23:51:42.814181 UTC. Jul 6 23:51:42.814731 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:51:42.815031 systemd-resolved[1319]: Clock change detected. Flushing caches. Jul 6 23:51:42.826444 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
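
[Note the jump from 23:51:42.326 to 23:51:42.814 mid-span: systemd-timesyncd reached 0.flatcar.pool.ntp.org and stepped the clock, so every later timestamp sits on the corrected timeline and systemd-resolved flushes its caches. A bare-bones SNTP query in the same spirit, with the server name taken from the log and everything else standard NTPv4 wire format; timesyncd itself does considerably more:

    import socket
    import struct
    import time

    NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

    def sntp_time(server="0.flatcar.pool.ntp.org", timeout=2.0):
        packet = b"\x23" + 47 * b"\x00"  # first byte: LI=0, VN=4, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(48)
        secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, integer seconds
        return secs - NTP_DELTA

    if __name__ == "__main__":
        print(f"clock offset vs NTP: {sntp_time() - time.time():+.3f}s")]
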
Jul 6 23:51:42.826962 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:51:42.830397 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:51:42.837372 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:51:42.845773 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:51:42.851296 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:51:42.865888 jq[1432]: false Jul 6 23:51:42.865879 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:51:42.867985 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:51:42.871147 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:51:42.874234 coreos-metadata[1430]: Jul 06 23:51:42.874 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 6 23:51:42.878338 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:51:42.887284 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:51:42.889603 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:51:42.890936 coreos-metadata[1430]: Jul 06 23:51:42.890 INFO Fetch successful Jul 6 23:51:42.894549 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:51:42.895346 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:51:42.907663 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:51:42.907852 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:51:42.922590 dbus-daemon[1431]: [system] SELinux support is enabled Jul 6 23:51:42.923148 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:51:42.937974 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:51:42.938033 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:51:42.938620 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:51:42.938709 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jul 6 23:51:42.938731 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 6 23:51:42.946505 extend-filesystems[1433]: Found loop4 Jul 6 23:51:42.958990 extend-filesystems[1433]: Found loop5 Jul 6 23:51:42.958990 extend-filesystems[1433]: Found loop6 Jul 6 23:51:42.958990 extend-filesystems[1433]: Found loop7 Jul 6 23:51:42.958990 extend-filesystems[1433]: Found vda Jul 6 23:51:42.958990 extend-filesystems[1433]: Found vda1 Jul 6 23:51:42.958990 extend-filesystems[1433]: Found vda2 Jul 6 23:51:42.958990 extend-filesystems[1433]: Found vda3 Jul 6 23:51:42.958990 extend-filesystems[1433]: Found usr Jul 6 23:51:42.958990 extend-filesystems[1433]: Found vda4 Jul 6 23:51:42.958990 extend-filesystems[1433]: Found vda6 Jul 6 23:51:42.958990 extend-filesystems[1433]: Found vda7 Jul 6 23:51:42.958990 extend-filesystems[1433]: Found vda9 Jul 6 23:51:42.958990 extend-filesystems[1433]: Checking size of /dev/vda9 Jul 6 23:51:43.024929 update_engine[1442]: I20250706 23:51:42.989972 1442 main.cc:92] Flatcar Update Engine starting Jul 6 23:51:43.024929 update_engine[1442]: I20250706 23:51:43.001317 1442 update_check_scheduler.cc:74] Next update check in 6m3s Jul 6 23:51:42.989485 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:51:43.031486 tar[1445]: linux-amd64/LICENSE Jul 6 23:51:43.031486 tar[1445]: linux-amd64/helm Jul 6 23:51:42.999469 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:51:43.011299 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:51:43.032093 jq[1443]: true Jul 6 23:51:43.023721 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:51:43.024874 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:51:43.025629 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:51:43.027710 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:51:43.041455 extend-filesystems[1433]: Resized partition /dev/vda9 Jul 6 23:51:43.050716 extend-filesystems[1476]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:51:43.063080 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jul 6 23:51:43.074169 jq[1467]: true Jul 6 23:51:43.121943 systemd-logind[1441]: New seat seat0. Jul 6 23:51:43.130589 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1359) Jul 6 23:51:43.137448 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:51:43.137476 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:51:43.137834 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:51:43.225630 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 6 23:51:43.253682 extend-filesystems[1476]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:51:43.253682 extend-filesystems[1476]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 6 23:51:43.253682 extend-filesystems[1476]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 6 23:51:43.261606 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Jul 6 23:51:43.261606 extend-filesystems[1433]: Found vdb Jul 6 23:51:43.262545 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:51:43.262765 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
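The extend-filesystems entries above record an online ext4 grow of /dev/vda9 from 553472 to 15121403 blocks, with the kernel confirming "(4k) blocks". A quick sanity check of what those block counts mean in bytes, using only figures taken from the log (a minimal sketch; the GiB conversion and rounding are mine):

    # Block counts from the resize2fs/EXT4-fs entries above; "(4k) blocks" per the log.
    BLOCK_SIZE = 4096
    old_blocks = 553_472
    new_blocks = 15_121_403

    def gib(blocks: int) -> float:
        """Convert a count of 4 KiB blocks to GiB."""
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB image-sized root
    print(f"after:  {gib(new_blocks):.2f} GiB")  # ~57.68 GiB after the online resize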
Jul 6 23:51:43.290095 bash[1497]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:51:43.292553 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:51:43.310206 systemd[1]: Starting sshkeys.service...
Jul 6 23:51:43.334276 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:51:43.351657 systemd-networkd[1355]: eth0: Gained IPv6LL
Jul 6 23:51:43.356131 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 6 23:51:43.368157 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 6 23:51:43.370399 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:51:43.376838 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:51:43.390532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:51:43.403539 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:51:43.519004 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:51:43.524462 coreos-metadata[1507]: Jul 06 23:51:43.524 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 6 23:51:43.545745 coreos-metadata[1507]: Jul 06 23:51:43.544 INFO Fetch successful
Jul 6 23:51:43.561411 unknown[1507]: wrote ssh authorized keys file for user: core
Jul 6 23:51:43.580121 containerd[1462]: time="2025-07-06T23:51:43.579649623Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 6 23:51:43.607907 update-ssh-keys[1523]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:51:43.611770 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 6 23:51:43.618307 systemd[1]: Finished sshkeys.service.
Jul 6 23:51:43.659438 containerd[1462]: time="2025-07-06T23:51:43.659297770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:51:43.670230 containerd[1462]: time="2025-07-06T23:51:43.669231603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:51:43.670230 containerd[1462]: time="2025-07-06T23:51:43.669278830Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 6 23:51:43.670230 containerd[1462]: time="2025-07-06T23:51:43.669298623Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 6 23:51:43.670230 containerd[1462]: time="2025-07-06T23:51:43.669489782Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 6 23:51:43.670230 containerd[1462]: time="2025-07-06T23:51:43.669520710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 6 23:51:43.670230 containerd[1462]: time="2025-07-06T23:51:43.669595154Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:51:43.670230 containerd[1462]: time="2025-07-06T23:51:43.669607169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:51:43.670230 containerd[1462]: time="2025-07-06T23:51:43.669809965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:51:43.670230 containerd[1462]: time="2025-07-06T23:51:43.669879315Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 23:51:43.670230 containerd[1462]: time="2025-07-06T23:51:43.669895658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:51:43.670230 containerd[1462]: time="2025-07-06T23:51:43.669905209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 23:51:43.670614 containerd[1462]: time="2025-07-06T23:51:43.669988839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:51:43.672166 systemd-networkd[1355]: eth1: Gained IPv6LL
Jul 6 23:51:43.675011 containerd[1462]: time="2025-07-06T23:51:43.674963718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:51:43.678753 containerd[1462]: time="2025-07-06T23:51:43.678274414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:51:43.678753 containerd[1462]: time="2025-07-06T23:51:43.678322204Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 23:51:43.678753 containerd[1462]: time="2025-07-06T23:51:43.678499778Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 23:51:43.678753 containerd[1462]: time="2025-07-06T23:51:43.678565206Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:51:43.682756 containerd[1462]: time="2025-07-06T23:51:43.682715608Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 6 23:51:43.684117 containerd[1462]: time="2025-07-06T23:51:43.684087851Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 23:51:43.684283 containerd[1462]: time="2025-07-06T23:51:43.684268573Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 6 23:51:43.688332 containerd[1462]: time="2025-07-06T23:51:43.687118455Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 6 23:51:43.688332 containerd[1462]: time="2025-07-06T23:51:43.687179088Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 6 23:51:43.688332 containerd[1462]: time="2025-07-06T23:51:43.687438812Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 6 23:51:43.688504 containerd[1462]: time="2025-07-06T23:51:43.687847767Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 6 23:51:43.688925 containerd[1462]: time="2025-07-06T23:51:43.688692806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 6 23:51:43.688925 containerd[1462]: time="2025-07-06T23:51:43.688872197Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 6 23:51:43.688925 containerd[1462]: time="2025-07-06T23:51:43.688889339Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 6 23:51:43.688925 containerd[1462]: time="2025-07-06T23:51:43.688903782Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 6 23:51:43.689279 containerd[1462]: time="2025-07-06T23:51:43.689259423Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 6 23:51:43.689643 containerd[1462]: time="2025-07-06T23:51:43.689332314Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 6 23:51:43.689643 containerd[1462]: time="2025-07-06T23:51:43.689350028Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 6 23:51:43.689643 containerd[1462]: time="2025-07-06T23:51:43.689370627Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 6 23:51:43.689643 containerd[1462]: time="2025-07-06T23:51:43.689383242Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 6 23:51:43.689643 containerd[1462]: time="2025-07-06T23:51:43.689415531Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 6 23:51:43.689643 containerd[1462]: time="2025-07-06T23:51:43.689432228Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 6 23:51:43.689643 containerd[1462]: time="2025-07-06T23:51:43.689458117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.689643 containerd[1462]: time="2025-07-06T23:51:43.689479462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.689871918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.689900967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.689913464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.689939165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.689952256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.689964890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.689977519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.690011463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.690026832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.690038451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.690075746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.690091446Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 6 23:51:43.690833 containerd[1462]: time="2025-07-06T23:51:43.690116854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.691299 containerd[1462]: time="2025-07-06T23:51:43.690129275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.691299 containerd[1462]: time="2025-07-06T23:51:43.691220283Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 6 23:51:43.691389 containerd[1462]: time="2025-07-06T23:51:43.691376381Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 6 23:51:43.694689 containerd[1462]: time="2025-07-06T23:51:43.693092284Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 6 23:51:43.694689 containerd[1462]: time="2025-07-06T23:51:43.693116602Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 6 23:51:43.694689 containerd[1462]: time="2025-07-06T23:51:43.693129832Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 6 23:51:43.694689 containerd[1462]: time="2025-07-06T23:51:43.693139601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.694689 containerd[1462]: time="2025-07-06T23:51:43.693193783Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 6 23:51:43.694689 containerd[1462]: time="2025-07-06T23:51:43.693221325Z" level=info msg="NRI interface is disabled by configuration."
Jul 6 23:51:43.694689 containerd[1462]: time="2025-07-06T23:51:43.693240112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 6 23:51:43.694922 containerd[1462]: time="2025-07-06T23:51:43.693556035Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 6 23:51:43.694922 containerd[1462]: time="2025-07-06T23:51:43.693622273Z" level=info msg="Connect containerd service"
Jul 6 23:51:43.694922 containerd[1462]: time="2025-07-06T23:51:43.693663368Z" level=info msg="using legacy CRI server"
Jul 6 23:51:43.694922 containerd[1462]: time="2025-07-06T23:51:43.693670755Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:51:43.694922 containerd[1462]: time="2025-07-06T23:51:43.693790816Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 6 23:51:43.701381 containerd[1462]: time="2025-07-06T23:51:43.697271338Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:51:43.701381 containerd[1462]: time="2025-07-06T23:51:43.697707457Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:51:43.701381 containerd[1462]: time="2025-07-06T23:51:43.697756179Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:51:43.701381 containerd[1462]: time="2025-07-06T23:51:43.697821578Z" level=info msg="Start subscribing containerd event"
Jul 6 23:51:43.701381 containerd[1462]: time="2025-07-06T23:51:43.697861853Z" level=info msg="Start recovering state"
Jul 6 23:51:43.701381 containerd[1462]: time="2025-07-06T23:51:43.697926608Z" level=info msg="Start event monitor"
Jul 6 23:51:43.701381 containerd[1462]: time="2025-07-06T23:51:43.697937391Z" level=info msg="Start snapshots syncer"
Jul 6 23:51:43.701381 containerd[1462]: time="2025-07-06T23:51:43.697950463Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:51:43.701381 containerd[1462]: time="2025-07-06T23:51:43.697957665Z" level=info msg="Start streaming server"
Jul 6 23:51:43.701381 containerd[1462]: time="2025-07-06T23:51:43.698028094Z" level=info msg="containerd successfully booted in 0.120853s"
Jul 6 23:51:43.698169 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:51:43.734332 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:51:43.766778 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:51:43.782516 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:51:43.807268 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:51:43.807548 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:51:43.819508 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:51:43.860855 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:51:43.876475 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:51:43.887551 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 6 23:51:43.891785 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:51:44.242265 tar[1445]: linux-amd64/README.md
Jul 6 23:51:44.272082 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:51:44.740653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:51:44.742626 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:51:44.744138 systemd[1]: Startup finished in 1.072s (kernel) + 6.268s (initrd) + 5.323s (userspace) = 12.664s.
Jul 6 23:51:44.750961 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:51:45.402331 kubelet[1554]: E0706 23:51:45.402246 1554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:51:45.405990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:51:45.406197 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:51:45.406624 systemd[1]: kubelet.service: Consumed 1.225s CPU time.
Jul 6 23:51:47.995282 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
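The kubelet exit logged above (run.go:72, "open /var/lib/kubelet/config.yaml: no such file or directory") is the expected pre-bootstrap state: the unit points kubelet's --config at a file that is typically only written later by kubeadm, so the process exits non-zero and systemd schedules a retry. A minimal sketch of that precondition, assuming only the path reported in the log; the function name is illustrative, not kubelet's actual code:

    import sys
    from pathlib import Path

    # Path from the kubelet error above; normally created during kubeadm init/join.
    CONFIG = Path("/var/lib/kubelet/config.yaml")

    def load_kubelet_config() -> str:
        # A missing file means a non-zero exit; systemd's Restart= policy then
        # produces the "Scheduled restart job" entries seen further down.
        if not CONFIG.is_file():
            sys.exit(f"failed to load kubelet config file, path: {CONFIG}")
        return CONFIG.read_text()

    if __name__ == "__main__":
        load_kubelet_config()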
Jul 6 23:51:47.996786 systemd[1]: Started sshd@0-134.199.239.131:22-139.178.89.65:37462.service - OpenSSH per-connection server daemon (139.178.89.65:37462). Jul 6 23:51:48.079519 sshd[1566]: Accepted publickey for core from 139.178.89.65 port 37462 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:48.082103 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:48.092237 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:51:48.104604 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:51:48.107895 systemd-logind[1441]: New session 1 of user core. Jul 6 23:51:48.124528 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:51:48.135492 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:51:48.139676 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:51:48.254961 systemd[1570]: Queued start job for default target default.target. Jul 6 23:51:48.260451 systemd[1570]: Created slice app.slice - User Application Slice. Jul 6 23:51:48.260492 systemd[1570]: Reached target paths.target - Paths. Jul 6 23:51:48.260508 systemd[1570]: Reached target timers.target - Timers. Jul 6 23:51:48.262605 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:51:48.278457 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:51:48.278621 systemd[1570]: Reached target sockets.target - Sockets. Jul 6 23:51:48.278641 systemd[1570]: Reached target basic.target - Basic System. Jul 6 23:51:48.278726 systemd[1570]: Reached target default.target - Main User Target. Jul 6 23:51:48.278769 systemd[1570]: Startup finished in 129ms. Jul 6 23:51:48.279049 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:51:48.289475 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:51:48.360736 systemd[1]: Started sshd@1-134.199.239.131:22-139.178.89.65:37474.service - OpenSSH per-connection server daemon (139.178.89.65:37474). Jul 6 23:51:48.409733 sshd[1581]: Accepted publickey for core from 139.178.89.65 port 37474 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:48.412024 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:48.417992 systemd-logind[1441]: New session 2 of user core. Jul 6 23:51:48.431395 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:51:48.498377 sshd[1581]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:48.510683 systemd[1]: sshd@1-134.199.239.131:22-139.178.89.65:37474.service: Deactivated successfully. Jul 6 23:51:48.516041 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:51:48.518702 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:51:48.526725 systemd[1]: Started sshd@2-134.199.239.131:22-139.178.89.65:37482.service - OpenSSH per-connection server daemon (139.178.89.65:37482). Jul 6 23:51:48.528532 systemd-logind[1441]: Removed session 2. Jul 6 23:51:48.576558 sshd[1588]: Accepted publickey for core from 139.178.89.65 port 37482 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:48.578870 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:48.584781 systemd-logind[1441]: New session 3 of user core. 
Jul 6 23:51:48.594387 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:51:48.653383 sshd[1588]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:48.666494 systemd[1]: sshd@2-134.199.239.131:22-139.178.89.65:37482.service: Deactivated successfully. Jul 6 23:51:48.668969 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:51:48.671252 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:51:48.677568 systemd[1]: Started sshd@3-134.199.239.131:22-139.178.89.65:37490.service - OpenSSH per-connection server daemon (139.178.89.65:37490). Jul 6 23:51:48.679567 systemd-logind[1441]: Removed session 3. Jul 6 23:51:48.740494 sshd[1595]: Accepted publickey for core from 139.178.89.65 port 37490 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:48.742657 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:48.748664 systemd-logind[1441]: New session 4 of user core. Jul 6 23:51:48.755440 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:51:48.822784 sshd[1595]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:48.842347 systemd[1]: sshd@3-134.199.239.131:22-139.178.89.65:37490.service: Deactivated successfully. Jul 6 23:51:48.845130 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:51:48.847259 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:51:48.853653 systemd[1]: Started sshd@4-134.199.239.131:22-139.178.89.65:37494.service - OpenSSH per-connection server daemon (139.178.89.65:37494). Jul 6 23:51:48.855519 systemd-logind[1441]: Removed session 4. Jul 6 23:51:48.910830 sshd[1602]: Accepted publickey for core from 139.178.89.65 port 37494 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:48.912836 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:48.918814 systemd-logind[1441]: New session 5 of user core. Jul 6 23:51:48.929364 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:51:49.000073 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:51:49.000567 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:51:49.015005 sudo[1605]: pam_unix(sudo:session): session closed for user root Jul 6 23:51:49.019365 sshd[1602]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:49.034611 systemd[1]: sshd@4-134.199.239.131:22-139.178.89.65:37494.service: Deactivated successfully. Jul 6 23:51:49.036889 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:51:49.038864 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:51:49.044609 systemd[1]: Started sshd@5-134.199.239.131:22-139.178.89.65:37508.service - OpenSSH per-connection server daemon (139.178.89.65:37508). Jul 6 23:51:49.046134 systemd-logind[1441]: Removed session 5. Jul 6 23:51:49.097428 sshd[1610]: Accepted publickey for core from 139.178.89.65 port 37508 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:49.099867 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:49.105311 systemd-logind[1441]: New session 6 of user core. Jul 6 23:51:49.112377 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 6 23:51:49.174325 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:51:49.175297 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:51:49.181211 sudo[1614]: pam_unix(sudo:session): session closed for user root Jul 6 23:51:49.190573 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:51:49.191195 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:51:49.214595 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:51:49.216866 auditctl[1617]: No rules Jul 6 23:51:49.217446 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:51:49.217680 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:51:49.224745 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:51:49.274249 augenrules[1636]: No rules Jul 6 23:51:49.276714 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:51:49.278779 sudo[1613]: pam_unix(sudo:session): session closed for user root Jul 6 23:51:49.283467 sshd[1610]: pam_unix(sshd:session): session closed for user core Jul 6 23:51:49.298489 systemd[1]: sshd@5-134.199.239.131:22-139.178.89.65:37508.service: Deactivated successfully. Jul 6 23:51:49.301644 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:51:49.305411 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:51:49.313684 systemd[1]: Started sshd@6-134.199.239.131:22-139.178.89.65:37516.service - OpenSSH per-connection server daemon (139.178.89.65:37516). Jul 6 23:51:49.317492 systemd-logind[1441]: Removed session 6. Jul 6 23:51:49.357549 sshd[1644]: Accepted publickey for core from 139.178.89.65 port 37516 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:51:49.359885 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:51:49.366318 systemd-logind[1441]: New session 7 of user core. Jul 6 23:51:49.374444 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:51:49.437303 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:51:49.437801 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:51:49.856483 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:51:49.858588 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:51:50.258118 dockerd[1663]: time="2025-07-06T23:51:50.257342986Z" level=info msg="Starting up" Jul 6 23:51:50.386666 dockerd[1663]: time="2025-07-06T23:51:50.386350128Z" level=info msg="Loading containers: start." Jul 6 23:51:50.508081 kernel: Initializing XFRM netlink socket Jul 6 23:51:50.597570 systemd-networkd[1355]: docker0: Link UP Jul 6 23:51:50.617372 dockerd[1663]: time="2025-07-06T23:51:50.617230784Z" level=info msg="Loading containers: done." 
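dockerd has now loaded its containers; in the entries that follow it reports "Daemon has completed initialization" and "API listen on /run/docker.sock". A minimal standard-library sketch of querying the Engine API's /version endpoint over that unix socket (the socket path and endpoint are the conventional ones; error handling is omitted for brevity):

    import json
    import socket

    # Unix socket the daemon announces in "API listen on /run/docker.sock".
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/docker.sock")
    # HTTP/1.0 so the daemon closes the connection after one response.
    sock.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")

    raw = b""
    while chunk := sock.recv(4096):
        raw += chunk
    sock.close()

    headers, body = raw.split(b"\r\n\r\n", 1)
    # Expect "26.1.0", matching the "Docker daemon ... version=26.1.0" banner below.
    print(json.loads(body)["Version"])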
Jul 6 23:51:50.635802 dockerd[1663]: time="2025-07-06T23:51:50.635738802Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:51:50.635994 dockerd[1663]: time="2025-07-06T23:51:50.635880968Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 6 23:51:50.636024 dockerd[1663]: time="2025-07-06T23:51:50.636012014Z" level=info msg="Daemon has completed initialization"
Jul 6 23:51:50.663296 dockerd[1663]: time="2025-07-06T23:51:50.663035022Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:51:50.663389 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:51:51.359540 containerd[1462]: time="2025-07-06T23:51:51.359494748Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 6 23:51:51.966285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281175914.mount: Deactivated successfully.
Jul 6 23:51:53.083647 containerd[1462]: time="2025-07-06T23:51:53.083580495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:51:53.085079 containerd[1462]: time="2025-07-06T23:51:53.084761607Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jul 6 23:51:53.085079 containerd[1462]: time="2025-07-06T23:51:53.084977469Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:51:53.087729 containerd[1462]: time="2025-07-06T23:51:53.087638190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:51:53.089091 containerd[1462]: time="2025-07-06T23:51:53.088828431Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.729285808s"
Jul 6 23:51:53.089091 containerd[1462]: time="2025-07-06T23:51:53.088883347Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 6 23:51:53.089640 containerd[1462]: time="2025-07-06T23:51:53.089609227Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 6 23:51:54.399616 containerd[1462]: time="2025-07-06T23:51:54.399531905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:51:54.400922 containerd[1462]: time="2025-07-06T23:51:54.400869694Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jul 6 23:51:54.401759 containerd[1462]: time="2025-07-06T23:51:54.401408550Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:51:54.404730 containerd[1462]: time="2025-07-06T23:51:54.404641257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:51:54.406887 containerd[1462]: time="2025-07-06T23:51:54.406213924Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.316565936s"
Jul 6 23:51:54.406887 containerd[1462]: time="2025-07-06T23:51:54.406270895Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 6 23:51:54.406887 containerd[1462]: time="2025-07-06T23:51:54.406852691Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 6 23:51:55.575428 containerd[1462]: time="2025-07-06T23:51:55.575371783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:51:55.578857 containerd[1462]: time="2025-07-06T23:51:55.577905302Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:51:55.579087 containerd[1462]: time="2025-07-06T23:51:55.579039573Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jul 6 23:51:55.586079 containerd[1462]: time="2025-07-06T23:51:55.586007780Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.179124359s"
Jul 6 23:51:55.586288 containerd[1462]: time="2025-07-06T23:51:55.586270677Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 6 23:51:55.586431 containerd[1462]: time="2025-07-06T23:51:55.586230762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:51:55.587552 containerd[1462]: time="2025-07-06T23:51:55.587482829Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 6 23:51:55.656709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:51:55.665437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:51:55.806906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:51:55.826095 (kubelet)[1879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:51:55.885617 kubelet[1879]: E0706 23:51:55.885523 1879 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:51:55.889677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:51:55.889891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:51:56.699518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount137726951.mount: Deactivated successfully. Jul 6 23:51:57.267025 containerd[1462]: time="2025-07-06T23:51:57.266297346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:51:57.267637 containerd[1462]: time="2025-07-06T23:51:57.267592067Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 6 23:51:57.268323 containerd[1462]: time="2025-07-06T23:51:57.268294007Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:51:57.270582 containerd[1462]: time="2025-07-06T23:51:57.270533036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:51:57.271304 containerd[1462]: time="2025-07-06T23:51:57.271275428Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.683721087s" Jul 6 23:51:57.271418 containerd[1462]: time="2025-07-06T23:51:57.271402572Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 6 23:51:57.272469 containerd[1462]: time="2025-07-06T23:51:57.272446867Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 6 23:51:57.623576 systemd-resolved[1319]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jul 6 23:51:57.762609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181716138.mount: Deactivated successfully. 
Jul 6 23:51:59.051114 containerd[1462]: time="2025-07-06T23:51:59.049765663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:51:59.053156 containerd[1462]: time="2025-07-06T23:51:59.053086154Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 6 23:51:59.063084 containerd[1462]: time="2025-07-06T23:51:59.062147955Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:51:59.066068 containerd[1462]: time="2025-07-06T23:51:59.065310450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:51:59.066687 containerd[1462]: time="2025-07-06T23:51:59.066652438Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.794103809s" Jul 6 23:51:59.066842 containerd[1462]: time="2025-07-06T23:51:59.066816536Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 6 23:51:59.067902 containerd[1462]: time="2025-07-06T23:51:59.067870735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:51:59.573229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2949544290.mount: Deactivated successfully. 
Jul 6 23:51:59.577713 containerd[1462]: time="2025-07-06T23:51:59.576755054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:51:59.577713 containerd[1462]: time="2025-07-06T23:51:59.577533181Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:51:59.577713 containerd[1462]: time="2025-07-06T23:51:59.577648643Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:51:59.580891 containerd[1462]: time="2025-07-06T23:51:59.580842159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:51:59.581784 containerd[1462]: time="2025-07-06T23:51:59.581751772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 513.629131ms" Jul 6 23:51:59.581931 containerd[1462]: time="2025-07-06T23:51:59.581912479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:51:59.582514 containerd[1462]: time="2025-07-06T23:51:59.582480310Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 6 23:52:00.070411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount222277773.mount: Deactivated successfully. Jul 6 23:52:00.694329 systemd-resolved[1319]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jul 6 23:52:01.832202 containerd[1462]: time="2025-07-06T23:52:01.832050449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:01.834161 containerd[1462]: time="2025-07-06T23:52:01.833033093Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 6 23:52:01.834161 containerd[1462]: time="2025-07-06T23:52:01.833575763Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:01.836821 containerd[1462]: time="2025-07-06T23:52:01.836764028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:01.838377 containerd[1462]: time="2025-07-06T23:52:01.838328407Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.255587676s" Jul 6 23:52:01.838377 containerd[1462]: time="2025-07-06T23:52:01.838377223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 6 23:52:05.030285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:52:05.038380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:52:05.076322 systemd[1]: Reloading requested from client PID 2034 ('systemctl') (unit session-7.scope)... Jul 6 23:52:05.076363 systemd[1]: Reloading... Jul 6 23:52:05.210094 zram_generator::config[2073]: No configuration found. Jul 6 23:52:05.336121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:52:05.415586 systemd[1]: Reloading finished in 338 ms. Jul 6 23:52:05.464161 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:52:05.464245 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:52:05.464558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:52:05.467378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:52:05.606649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:52:05.619628 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:52:05.682359 kubelet[2128]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:52:05.682359 kubelet[2128]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
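All seven control-plane images are pulled by this point. A back-of-the-envelope on the pull timings, pairing each image's reported size with its "Pulled image ... in N" duration from the entries above (a rough sketch: the logged duration covers unpack and commit as well as transfer, and "size" is the content size containerd reports, so these are only approximate effective rates):

    # (bytes, seconds) pairs copied from the "Pulled image" entries above.
    pulls = {
        "kube-apiserver:v1.33.2":          (30_075_899, 1.729285808),
        "kube-controller-manager:v1.33.2": (27_646_507, 1.316565936),
        "kube-scheduler:v1.33.2":          (21_782_634, 1.179124359),
        "kube-proxy:v1.33.2":              (31_891_765, 1.683721087),
        "coredns:v1.12.0":                 (20_939_036, 1.794103809),
        "pause:3.10":                      (320_368,    0.513629131),
        "etcd:3.5.21-0":                   (58_938_593, 2.255587676),
    }

    for image, (size, secs) in pulls.items():
        print(f"{image:35s} {size / secs / 2**20:6.1f} MiB/s")

    # The large images land between roughly 11 and 25 MiB/s; tiny pause:3.10
    # is dominated by per-request latency rather than bandwidth.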
Jul 6 23:52:05.682359 kubelet[2128]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:52:05.682824 kubelet[2128]: I0706 23:52:05.682399 2128 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:52:06.233915 kubelet[2128]: I0706 23:52:06.233848 2128 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:52:06.233915 kubelet[2128]: I0706 23:52:06.233899 2128 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:52:06.234291 kubelet[2128]: I0706 23:52:06.234265 2128 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:52:06.262246 kubelet[2128]: I0706 23:52:06.262149 2128 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:52:06.264927 kubelet[2128]: E0706 23:52:06.264863 2128 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://134.199.239.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 134.199.239.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:52:06.274387 kubelet[2128]: E0706 23:52:06.274348 2128 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:52:06.274387 kubelet[2128]: I0706 23:52:06.274380 2128 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:52:06.281279 kubelet[2128]: I0706 23:52:06.281226 2128 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:52:06.281567 kubelet[2128]: I0706 23:52:06.281537 2128 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:52:06.284639 kubelet[2128]: I0706 23:52:06.281566 2128 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-d-7537ff12ef","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:52:06.284639 kubelet[2128]: I0706 23:52:06.284637 2128 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:52:06.284639 kubelet[2128]: I0706 23:52:06.284653 2128 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:52:06.284978 kubelet[2128]: I0706 23:52:06.284809 2128 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:52:06.287217 kubelet[2128]: I0706 23:52:06.287046 2128 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:52:06.287217 kubelet[2128]: I0706 23:52:06.287100 2128 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:52:06.287217 kubelet[2128]: I0706 23:52:06.287127 2128 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:52:06.287217 kubelet[2128]: I0706 23:52:06.287146 2128 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:52:06.299472 kubelet[2128]: E0706 23:52:06.299311 2128 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://134.199.239.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-d-7537ff12ef&limit=500&resourceVersion=0\": dial tcp 134.199.239.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:52:06.303167 kubelet[2128]: E0706 23:52:06.301773 2128 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://134.199.239.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.239.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jul 6 23:52:06.303167 kubelet[2128]: I0706 23:52:06.301950 2128 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:52:06.303167 kubelet[2128]: I0706 23:52:06.302671 2128 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:52:06.304628 kubelet[2128]: W0706 23:52:06.303902 2128 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:52:06.308570 kubelet[2128]: I0706 23:52:06.308408 2128 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:52:06.308570 kubelet[2128]: I0706 23:52:06.308473 2128 server.go:1289] "Started kubelet" Jul 6 23:52:06.311078 kubelet[2128]: I0706 23:52:06.309887 2128 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:52:06.311078 kubelet[2128]: I0706 23:52:06.310896 2128 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:52:06.313122 kubelet[2128]: I0706 23:52:06.312145 2128 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:52:06.313122 kubelet[2128]: I0706 23:52:06.312569 2128 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:52:06.314988 kubelet[2128]: E0706 23:52:06.312701 2128 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://134.199.239.131:6443/api/v1/namespaces/default/events\": dial tcp 134.199.239.131:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-d-7537ff12ef.184fce969ad45cc4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-d-7537ff12ef,UID:ci-4081.3.4-d-7537ff12ef,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-d-7537ff12ef,},FirstTimestamp:2025-07-06 23:52:06.308437188 +0000 UTC m=+0.683675516,LastTimestamp:2025-07-06 23:52:06.308437188 +0000 UTC m=+0.683675516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-d-7537ff12ef,}" Jul 6 23:52:06.316114 kubelet[2128]: I0706 23:52:06.315911 2128 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:52:06.317314 kubelet[2128]: I0706 23:52:06.316804 2128 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:52:06.321226 kubelet[2128]: E0706 23:52:06.320461 2128 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" Jul 6 23:52:06.321226 kubelet[2128]: I0706 23:52:06.320512 2128 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:52:06.321226 kubelet[2128]: I0706 23:52:06.320749 2128 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:52:06.321226 kubelet[2128]: I0706 23:52:06.320810 2128 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:52:06.321647 kubelet[2128]: E0706 23:52:06.321628 2128 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://134.199.239.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
134.199.239.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:52:06.323875 kubelet[2128]: E0706 23:52:06.323845 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.239.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-d-7537ff12ef?timeout=10s\": dial tcp 134.199.239.131:6443: connect: connection refused" interval="200ms" Jul 6 23:52:06.324404 kubelet[2128]: I0706 23:52:06.324384 2128 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:52:06.324577 kubelet[2128]: I0706 23:52:06.324561 2128 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:52:06.328679 kubelet[2128]: E0706 23:52:06.328646 2128 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:52:06.331735 kubelet[2128]: I0706 23:52:06.331716 2128 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:52:06.345707 kubelet[2128]: I0706 23:52:06.345660 2128 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:52:06.346995 kubelet[2128]: I0706 23:52:06.346969 2128 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:52:06.347220 kubelet[2128]: I0706 23:52:06.347208 2128 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:52:06.347301 kubelet[2128]: I0706 23:52:06.347293 2128 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:52:06.347677 kubelet[2128]: I0706 23:52:06.347361 2128 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:52:06.347677 kubelet[2128]: E0706 23:52:06.347427 2128 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:52:06.360662 kubelet[2128]: E0706 23:52:06.360482 2128 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://134.199.239.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.239.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:52:06.365870 kubelet[2128]: I0706 23:52:06.365481 2128 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:52:06.365870 kubelet[2128]: I0706 23:52:06.365524 2128 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:52:06.365870 kubelet[2128]: I0706 23:52:06.365545 2128 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:52:06.368486 kubelet[2128]: I0706 23:52:06.368052 2128 policy_none.go:49] "None policy: Start" Jul 6 23:52:06.368486 kubelet[2128]: I0706 23:52:06.368119 2128 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:52:06.368486 kubelet[2128]: I0706 23:52:06.368137 2128 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:52:06.375022 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:52:06.390106 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
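[Editor's note] The repeated "Failed to watch ... connection refused" entries above come from client-go reflectors issuing LIST calls against an API server that is not up yet; since this kubelet itself hosts the static kube-apiserver pod, the failures are expected during bootstrap. A minimal illustrative sketch of the same LIST the reflector performs, assuming client-go and a kubeconfig path (the path is hypothetical; endpoint and node name are taken from the log):

// Illustrative sketch of the LIST the kubelet's reflector retries above,
// i.e. GET /api/v1/nodes?fieldSelector=metadata.name=ci-4081.3.4-d-7537ff12ef.
package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Hypothetical kubeconfig path, for illustration only.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    if err != nil {
        log.Fatal(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    // Same field selector and limit the log shows; while the API server is
    // down this fails with "connect: connection refused" and the reflector
    // backs off and retries.
    nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
        FieldSelector: "metadata.name=ci-4081.3.4-d-7537ff12ef",
        Limit:         500,
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("nodes:", len(nodes.Items))
}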
Jul 6 23:52:06.394462 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:52:06.404491 kubelet[2128]: E0706 23:52:06.403246 2128 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:52:06.404491 kubelet[2128]: I0706 23:52:06.403463 2128 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:52:06.404491 kubelet[2128]: I0706 23:52:06.403474 2128 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:52:06.404491 kubelet[2128]: I0706 23:52:06.403837 2128 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:52:06.406380 kubelet[2128]: E0706 23:52:06.406338 2128 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:52:06.406506 kubelet[2128]: E0706 23:52:06.406412 2128 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-d-7537ff12ef\" not found" Jul 6 23:52:06.460952 systemd[1]: Created slice kubepods-burstable-pode08c42056944087e1e627606e7e619a3.slice - libcontainer container kubepods-burstable-pode08c42056944087e1e627606e7e619a3.slice. Jul 6 23:52:06.473391 kubelet[2128]: E0706 23:52:06.473141 2128 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.476929 systemd[1]: Created slice kubepods-burstable-podfc092ce995ea6a51d0e0100cfcf79f73.slice - libcontainer container kubepods-burstable-podfc092ce995ea6a51d0e0100cfcf79f73.slice. Jul 6 23:52:06.479117 kubelet[2128]: E0706 23:52:06.479048 2128 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.485516 systemd[1]: Created slice kubepods-burstable-pod78bc417917372bccf9ca7c47276d0d4b.slice - libcontainer container kubepods-burstable-pod78bc417917372bccf9ca7c47276d0d4b.slice. 
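[Editor's note] The kubepods-burstable-pod<uid>.slice units created above follow the systemd cgroup driver's naming rule: the pod's QoS class selects the parent slice and the pod UID is embedded with dashes converted to underscores. A small sketch reconstructing that derivation from the names in the log (my own reconstruction, not kubelet source):

// Reconstructs the slice names visible in the log, e.g. the kube-proxy pod
// UID 7b60cf10-6ac2-4406-8191-42adab140be3 later becomes
// kubepods-besteffort-pod7b60cf10_6ac2_4406_8191_42adab140be3.slice.
package main

import (
    "fmt"
    "strings"
)

func podSlice(qos, uid string) string {
    escaped := strings.ReplaceAll(uid, "-", "_") // systemd escapes '-' in unit names
    if qos == "guaranteed" {
        // Guaranteed pods sit directly under kubepods.slice.
        return fmt.Sprintf("kubepods-pod%s.slice", escaped)
    }
    return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
    fmt.Println(podSlice("burstable", "e08c42056944087e1e627606e7e619a3"))
    fmt.Println(podSlice("besteffort", "7b60cf10-6ac2-4406-8191-42adab140be3"))
}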
Jul 6 23:52:06.488230 kubelet[2128]: E0706 23:52:06.487955 2128 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.505025 kubelet[2128]: I0706 23:52:06.504964 2128 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.505459 kubelet[2128]: E0706 23:52:06.505406 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.239.131:6443/api/v1/nodes\": dial tcp 134.199.239.131:6443: connect: connection refused" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.523004 kubelet[2128]: I0706 23:52:06.522753 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fc092ce995ea6a51d0e0100cfcf79f73-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-d-7537ff12ef\" (UID: \"fc092ce995ea6a51d0e0100cfcf79f73\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.523004 kubelet[2128]: I0706 23:52:06.522798 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc092ce995ea6a51d0e0100cfcf79f73-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-d-7537ff12ef\" (UID: \"fc092ce995ea6a51d0e0100cfcf79f73\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.523004 kubelet[2128]: I0706 23:52:06.522820 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc092ce995ea6a51d0e0100cfcf79f73-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-d-7537ff12ef\" (UID: \"fc092ce995ea6a51d0e0100cfcf79f73\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.523004 kubelet[2128]: I0706 23:52:06.522836 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc092ce995ea6a51d0e0100cfcf79f73-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-d-7537ff12ef\" (UID: \"fc092ce995ea6a51d0e0100cfcf79f73\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.523004 kubelet[2128]: I0706 23:52:06.522854 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e08c42056944087e1e627606e7e619a3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-d-7537ff12ef\" (UID: \"e08c42056944087e1e627606e7e619a3\") " pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.523348 kubelet[2128]: I0706 23:52:06.522868 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc092ce995ea6a51d0e0100cfcf79f73-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-d-7537ff12ef\" (UID: \"fc092ce995ea6a51d0e0100cfcf79f73\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.523348 kubelet[2128]: I0706 23:52:06.522883 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/78bc417917372bccf9ca7c47276d0d4b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-d-7537ff12ef\" (UID: \"78bc417917372bccf9ca7c47276d0d4b\") " pod="kube-system/kube-scheduler-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.523348 kubelet[2128]: I0706 23:52:06.522897 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e08c42056944087e1e627606e7e619a3-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-d-7537ff12ef\" (UID: \"e08c42056944087e1e627606e7e619a3\") " pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.523348 kubelet[2128]: I0706 23:52:06.522913 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e08c42056944087e1e627606e7e619a3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-d-7537ff12ef\" (UID: \"e08c42056944087e1e627606e7e619a3\") " pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.525213 kubelet[2128]: E0706 23:52:06.525156 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.239.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-d-7537ff12ef?timeout=10s\": dial tcp 134.199.239.131:6443: connect: connection refused" interval="400ms" Jul 6 23:52:06.706807 kubelet[2128]: I0706 23:52:06.706770 2128 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.707285 kubelet[2128]: E0706 23:52:06.707259 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.239.131:6443/api/v1/nodes\": dial tcp 134.199.239.131:6443: connect: connection refused" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:06.774714 kubelet[2128]: E0706 23:52:06.774549 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:06.776005 containerd[1462]: time="2025-07-06T23:52:06.775940278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-d-7537ff12ef,Uid:e08c42056944087e1e627606e7e619a3,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:06.778133 systemd-resolved[1319]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jul 6 23:52:06.780102 kubelet[2128]: E0706 23:52:06.780006 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:06.780563 containerd[1462]: time="2025-07-06T23:52:06.780527651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-d-7537ff12ef,Uid:fc092ce995ea6a51d0e0100cfcf79f73,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:06.789775 kubelet[2128]: E0706 23:52:06.789378 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:06.792115 containerd[1462]: time="2025-07-06T23:52:06.790285505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-d-7537ff12ef,Uid:78bc417917372bccf9ca7c47276d0d4b,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:06.926364 kubelet[2128]: E0706 23:52:06.926310 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.239.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-d-7537ff12ef?timeout=10s\": dial tcp 134.199.239.131:6443: connect: connection refused" interval="800ms" Jul 6 23:52:07.108778 kubelet[2128]: I0706 23:52:07.108658 2128 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:07.109412 kubelet[2128]: E0706 23:52:07.109371 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.239.131:6443/api/v1/nodes\": dial tcp 134.199.239.131:6443: connect: connection refused" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:07.238266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745366810.mount: Deactivated successfully. 
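[Editor's note] Note how the lease controller's "will retry" interval doubles across attempts: 200ms, then 400ms, now 800ms, with 1.6s following below. A compact standard-library sketch of that style of exponential retry (illustrative only; the ceiling value is my assumption, not taken from the log):

// Illustrative retry loop with the doubling intervals seen in the log.
package main

import (
    "errors"
    "fmt"
    "time"
)

func ensureLeaseWithBackoff(ensure func() error) {
    interval := 200 * time.Millisecond
    const maxInterval = 7 * time.Second // assumed ceiling, not from the log
    for {
        if err := ensure(); err == nil {
            return
        }
        fmt.Printf("failed to ensure lease, will retry interval=%v\n", interval)
        time.Sleep(interval)
        if interval < maxInterval {
            interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s ...
        }
    }
}

func main() {
    attempts := 0
    ensureLeaseWithBackoff(func() error {
        attempts++
        if attempts < 5 {
            return errors.New("connect: connection refused")
        }
        return nil
    })
}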
Jul 6 23:52:07.242223 containerd[1462]: time="2025-07-06T23:52:07.242170575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:52:07.243172 containerd[1462]: time="2025-07-06T23:52:07.243105601Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:52:07.244359 containerd[1462]: time="2025-07-06T23:52:07.244316590Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:52:07.245326 kubelet[2128]: E0706 23:52:07.245289 2128 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://134.199.239.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.239.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:52:07.245748 containerd[1462]: time="2025-07-06T23:52:07.245563510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:52:07.245748 containerd[1462]: time="2025-07-06T23:52:07.245616791Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:52:07.246623 containerd[1462]: time="2025-07-06T23:52:07.246357633Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:52:07.246623 containerd[1462]: time="2025-07-06T23:52:07.246565155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:52:07.248205 containerd[1462]: time="2025-07-06T23:52:07.248170935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:52:07.251308 containerd[1462]: time="2025-07-06T23:52:07.251259359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 460.870907ms" Jul 6 23:52:07.264258 containerd[1462]: time="2025-07-06T23:52:07.263876160Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 483.276816ms" Jul 6 23:52:07.273708 containerd[1462]: time="2025-07-06T23:52:07.273499096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.937141ms" Jul 6 23:52:07.419599 containerd[1462]: time="2025-07-06T23:52:07.419200698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:07.423862 containerd[1462]: time="2025-07-06T23:52:07.423565020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:07.423862 containerd[1462]: time="2025-07-06T23:52:07.423631504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:07.425809 containerd[1462]: time="2025-07-06T23:52:07.423775599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:07.434125 containerd[1462]: time="2025-07-06T23:52:07.433798242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:07.434125 containerd[1462]: time="2025-07-06T23:52:07.433872049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:07.434125 containerd[1462]: time="2025-07-06T23:52:07.433888233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:07.434125 containerd[1462]: time="2025-07-06T23:52:07.433982246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:07.439291 containerd[1462]: time="2025-07-06T23:52:07.437619976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:07.439291 containerd[1462]: time="2025-07-06T23:52:07.437696229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:07.439291 containerd[1462]: time="2025-07-06T23:52:07.437710207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:07.439291 containerd[1462]: time="2025-07-06T23:52:07.437807408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:07.463364 systemd[1]: Started cri-containerd-41477f6753dc166d35b4caaf6b8ecf6294669dd54d876bcbc7c1aaa096b39d00.scope - libcontainer container 41477f6753dc166d35b4caaf6b8ecf6294669dd54d876bcbc7c1aaa096b39d00. Jul 6 23:52:07.468736 systemd[1]: Started cri-containerd-7617f60b122b9e1c0de068964832c7228f7cecbaed1a2aae491292e58915b812.scope - libcontainer container 7617f60b122b9e1c0de068964832c7228f7cecbaed1a2aae491292e58915b812. Jul 6 23:52:07.473080 systemd[1]: Started cri-containerd-6dbc6fe940a4cf60b083a25d5f12a9439412b8e6fe17ae399da7452c248459f5.scope - libcontainer container 6dbc6fe940a4cf60b083a25d5f12a9439412b8e6fe17ae399da7452c248459f5. 
Jul 6 23:52:07.546110 containerd[1462]: time="2025-07-06T23:52:07.545711188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-d-7537ff12ef,Uid:e08c42056944087e1e627606e7e619a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"41477f6753dc166d35b4caaf6b8ecf6294669dd54d876bcbc7c1aaa096b39d00\"" Jul 6 23:52:07.552185 kubelet[2128]: E0706 23:52:07.552147 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:07.561840 containerd[1462]: time="2025-07-06T23:52:07.561588841Z" level=info msg="CreateContainer within sandbox \"41477f6753dc166d35b4caaf6b8ecf6294669dd54d876bcbc7c1aaa096b39d00\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:52:07.574536 containerd[1462]: time="2025-07-06T23:52:07.574396997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-d-7537ff12ef,Uid:fc092ce995ea6a51d0e0100cfcf79f73,Namespace:kube-system,Attempt:0,} returns sandbox id \"7617f60b122b9e1c0de068964832c7228f7cecbaed1a2aae491292e58915b812\"" Jul 6 23:52:07.576033 kubelet[2128]: E0706 23:52:07.575895 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:07.580826 containerd[1462]: time="2025-07-06T23:52:07.580786312Z" level=info msg="CreateContainer within sandbox \"7617f60b122b9e1c0de068964832c7228f7cecbaed1a2aae491292e58915b812\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:52:07.583348 containerd[1462]: time="2025-07-06T23:52:07.583187822Z" level=info msg="CreateContainer within sandbox \"41477f6753dc166d35b4caaf6b8ecf6294669dd54d876bcbc7c1aaa096b39d00\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"454e2d86f510b2b87153e3e1257676a2d626ae6998b695ad1cc7c0574a4baf44\"" Jul 6 23:52:07.583991 containerd[1462]: time="2025-07-06T23:52:07.583758061Z" level=info msg="StartContainer for \"454e2d86f510b2b87153e3e1257676a2d626ae6998b695ad1cc7c0574a4baf44\"" Jul 6 23:52:07.597601 containerd[1462]: time="2025-07-06T23:52:07.597465425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-d-7537ff12ef,Uid:78bc417917372bccf9ca7c47276d0d4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dbc6fe940a4cf60b083a25d5f12a9439412b8e6fe17ae399da7452c248459f5\"" Jul 6 23:52:07.614359 kubelet[2128]: E0706 23:52:07.614320 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:07.618579 containerd[1462]: time="2025-07-06T23:52:07.618534533Z" level=info msg="CreateContainer within sandbox \"6dbc6fe940a4cf60b083a25d5f12a9439412b8e6fe17ae399da7452c248459f5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:52:07.619005 containerd[1462]: time="2025-07-06T23:52:07.618982668Z" level=info msg="CreateContainer within sandbox \"7617f60b122b9e1c0de068964832c7228f7cecbaed1a2aae491292e58915b812\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2564089fc779b40dd0dd4dab4c4fbea3860bffc96e7a2467908537eeef266206\"" Jul 6 23:52:07.619905 systemd[1]: Started cri-containerd-454e2d86f510b2b87153e3e1257676a2d626ae6998b695ad1cc7c0574a4baf44.scope - 
libcontainer container 454e2d86f510b2b87153e3e1257676a2d626ae6998b695ad1cc7c0574a4baf44. Jul 6 23:52:07.623582 containerd[1462]: time="2025-07-06T23:52:07.623548161Z" level=info msg="StartContainer for \"2564089fc779b40dd0dd4dab4c4fbea3860bffc96e7a2467908537eeef266206\"" Jul 6 23:52:07.642567 containerd[1462]: time="2025-07-06T23:52:07.642257486Z" level=info msg="CreateContainer within sandbox \"6dbc6fe940a4cf60b083a25d5f12a9439412b8e6fe17ae399da7452c248459f5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8584f6fe1abc4ffb0c3c18f19c7169adbead0822d7cc448c05b0e526e35b39ab\"" Jul 6 23:52:07.645114 containerd[1462]: time="2025-07-06T23:52:07.645016509Z" level=info msg="StartContainer for \"8584f6fe1abc4ffb0c3c18f19c7169adbead0822d7cc448c05b0e526e35b39ab\"" Jul 6 23:52:07.671741 systemd[1]: Started cri-containerd-2564089fc779b40dd0dd4dab4c4fbea3860bffc96e7a2467908537eeef266206.scope - libcontainer container 2564089fc779b40dd0dd4dab4c4fbea3860bffc96e7a2467908537eeef266206. Jul 6 23:52:07.682159 kubelet[2128]: E0706 23:52:07.682016 2128 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://134.199.239.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-d-7537ff12ef&limit=500&resourceVersion=0\": dial tcp 134.199.239.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:52:07.709944 kubelet[2128]: E0706 23:52:07.709711 2128 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://134.199.239.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.239.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:52:07.711833 systemd[1]: Started cri-containerd-8584f6fe1abc4ffb0c3c18f19c7169adbead0822d7cc448c05b0e526e35b39ab.scope - libcontainer container 8584f6fe1abc4ffb0c3c18f19c7169adbead0822d7cc448c05b0e526e35b39ab. 
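[Editor's note] Each static pod above walks the same CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, then StartContainer runs it. A schematic sketch against the CRI gRPC API (k8s.io/cri-api); the request configs are stubs, not the real control-plane pod specs:

// Schematic CRI call order matching the log:
// RunPodSandbox -> CreateContainer -> StartContainer.
package main

import (
    "context"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    rt := runtimeapi.NewRuntimeServiceClient(conn)
    ctx := context.Background()

    sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
        Config: &runtimeapi.PodSandboxConfig{ /* metadata, namespaces, ... */ },
    })
    if err != nil {
        log.Fatal(err)
    }
    cr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
        PodSandboxId: sb.PodSandboxId,
        Config:       &runtimeapi.ContainerConfig{ /* image, command, mounts */ },
    })
    if err != nil {
        log.Fatal(err)
    }
    if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
        ContainerId: cr.ContainerId,
    }); err != nil {
        log.Fatal(err)
    }
}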
Jul 6 23:52:07.718192 containerd[1462]: time="2025-07-06T23:52:07.718131081Z" level=info msg="StartContainer for \"454e2d86f510b2b87153e3e1257676a2d626ae6998b695ad1cc7c0574a4baf44\" returns successfully" Jul 6 23:52:07.727551 kubelet[2128]: E0706 23:52:07.727487 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.239.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-d-7537ff12ef?timeout=10s\": dial tcp 134.199.239.131:6443: connect: connection refused" interval="1.6s" Jul 6 23:52:07.764119 containerd[1462]: time="2025-07-06T23:52:07.763970465Z" level=info msg="StartContainer for \"2564089fc779b40dd0dd4dab4c4fbea3860bffc96e7a2467908537eeef266206\" returns successfully" Jul 6 23:52:07.779958 kubelet[2128]: E0706 23:52:07.779809 2128 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://134.199.239.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.239.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:52:07.794375 containerd[1462]: time="2025-07-06T23:52:07.794326498Z" level=info msg="StartContainer for \"8584f6fe1abc4ffb0c3c18f19c7169adbead0822d7cc448c05b0e526e35b39ab\" returns successfully" Jul 6 23:52:07.910952 kubelet[2128]: I0706 23:52:07.910817 2128 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:07.911752 kubelet[2128]: E0706 23:52:07.911200 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.239.131:6443/api/v1/nodes\": dial tcp 134.199.239.131:6443: connect: connection refused" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:08.368577 kubelet[2128]: E0706 23:52:08.368477 2128 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:08.368818 kubelet[2128]: E0706 23:52:08.368614 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:08.372302 kubelet[2128]: E0706 23:52:08.371838 2128 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:08.372302 kubelet[2128]: E0706 23:52:08.371993 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:08.373942 kubelet[2128]: E0706 23:52:08.373917 2128 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:08.375127 kubelet[2128]: E0706 23:52:08.375102 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:09.376683 kubelet[2128]: E0706 23:52:09.376644 2128 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:09.377251 kubelet[2128]: 
E0706 23:52:09.376834 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:09.377251 kubelet[2128]: E0706 23:52:09.377246 2128 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:09.377386 kubelet[2128]: E0706 23:52:09.377365 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:09.514034 kubelet[2128]: I0706 23:52:09.513998 2128 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:10.365168 kubelet[2128]: I0706 23:52:10.364941 2128 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:10.365168 kubelet[2128]: E0706 23:52:10.364987 2128 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.4-d-7537ff12ef\": node \"ci-4081.3.4-d-7537ff12ef\" not found" Jul 6 23:52:10.387936 kubelet[2128]: E0706 23:52:10.387897 2128 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" Jul 6 23:52:10.488178 kubelet[2128]: E0706 23:52:10.488090 2128 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" Jul 6 23:52:10.588842 kubelet[2128]: E0706 23:52:10.588788 2128 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" Jul 6 23:52:10.623141 kubelet[2128]: I0706 23:52:10.622958 2128 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:10.630933 kubelet[2128]: E0706 23:52:10.630890 2128 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-d-7537ff12ef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:10.631122 kubelet[2128]: I0706 23:52:10.630926 2128 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:10.633419 kubelet[2128]: E0706 23:52:10.633384 2128 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-d-7537ff12ef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:10.633419 kubelet[2128]: I0706 23:52:10.633415 2128 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:10.635195 kubelet[2128]: E0706 23:52:10.635164 2128 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-d-7537ff12ef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:11.303484 kubelet[2128]: I0706 23:52:11.303395 2128 apiserver.go:52] "Watching apiserver" Jul 6 23:52:11.321785 kubelet[2128]: I0706 23:52:11.321737 2128 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:52:12.575777 systemd[1]: 
Reloading requested from client PID 2415 ('systemctl') (unit session-7.scope)... Jul 6 23:52:12.575796 systemd[1]: Reloading... Jul 6 23:52:12.682092 zram_generator::config[2463]: No configuration found. Jul 6 23:52:12.839348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:52:12.932348 systemd[1]: Reloading finished in 356 ms. Jul 6 23:52:12.978387 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:52:12.993669 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:52:12.993889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:52:12.993945 systemd[1]: kubelet.service: Consumed 1.100s CPU time, 128.7M memory peak, 0B memory swap peak. Jul 6 23:52:13.000520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:52:13.174135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:52:13.185630 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:52:13.264202 kubelet[2505]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:52:13.264202 kubelet[2505]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:52:13.264202 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:52:13.264637 kubelet[2505]: I0706 23:52:13.264225 2505 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:52:13.279242 kubelet[2505]: I0706 23:52:13.278401 2505 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:52:13.279242 kubelet[2505]: I0706 23:52:13.278437 2505 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:52:13.279477 kubelet[2505]: I0706 23:52:13.279353 2505 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:52:13.281002 kubelet[2505]: I0706 23:52:13.280962 2505 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:52:13.284497 kubelet[2505]: I0706 23:52:13.284165 2505 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:52:13.288380 kubelet[2505]: E0706 23:52:13.288303 2505 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:52:13.288380 kubelet[2505]: I0706 23:52:13.288377 2505 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
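[Editor's note] After the restart, kubelet 2505 logs "Loading cert/key pair from a file" for kubelet-client-current.pem: the first run's certificate bootstrap evidently succeeded once the API server came up, so rotation now starts from the stored pair instead of a fresh CSR. That file holds the client certificate and its private key concatenated, which is why one path can serve as both; a minimal standard-library sketch:

// Minimal sketch: kubelet-client-current.pem carries both the client
// certificate and the key, so the same path is passed twice.
package main

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "log"
)

func main() {
    const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
    pair, err := tls.LoadX509KeyPair(pem, pem)
    if err != nil {
        log.Fatal(err)
    }
    leaf, err := x509.ParseCertificate(pair.Certificate[0])
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("client cert expires:", leaf.NotAfter)
}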
Jul 6 23:52:13.293109 kubelet[2505]: I0706 23:52:13.292588 2505 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:52:13.293109 kubelet[2505]: I0706 23:52:13.292868 2505 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:52:13.293109 kubelet[2505]: I0706 23:52:13.292904 2505 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-d-7537ff12ef","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:52:13.293109 kubelet[2505]: I0706 23:52:13.293105 2505 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:52:13.293620 kubelet[2505]: I0706 23:52:13.293118 2505 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:52:13.293620 kubelet[2505]: I0706 23:52:13.293167 2505 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:52:13.293620 kubelet[2505]: I0706 23:52:13.293360 2505 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:52:13.293620 kubelet[2505]: I0706 23:52:13.293391 2505 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:52:13.293620 kubelet[2505]: I0706 23:52:13.293420 2505 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:52:13.293620 kubelet[2505]: I0706 23:52:13.293439 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:52:13.299088 kubelet[2505]: I0706 23:52:13.297482 2505 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:52:13.299088 kubelet[2505]: I0706 23:52:13.297939 2505 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:52:13.301437 kubelet[2505]: I0706 23:52:13.301407 2505 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:52:13.301700 kubelet[2505]: I0706 23:52:13.301684 2505 server.go:1289] "Started kubelet" Jul 6 23:52:13.304950 kubelet[2505]: 
I0706 23:52:13.304921 2505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:52:13.307812 kubelet[2505]: I0706 23:52:13.307774 2505 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:52:13.310016 kubelet[2505]: I0706 23:52:13.309988 2505 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:52:13.321737 kubelet[2505]: I0706 23:52:13.321660 2505 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:52:13.322273 kubelet[2505]: I0706 23:52:13.322249 2505 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:52:13.322436 kubelet[2505]: I0706 23:52:13.321789 2505 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:52:13.323662 kubelet[2505]: I0706 23:52:13.323618 2505 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:52:13.323861 kubelet[2505]: E0706 23:52:13.323842 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-d-7537ff12ef\" not found" Jul 6 23:52:13.324361 kubelet[2505]: I0706 23:52:13.324337 2505 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:52:13.324544 kubelet[2505]: I0706 23:52:13.324520 2505 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:52:13.352271 kubelet[2505]: I0706 23:52:13.352210 2505 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:52:13.356211 kubelet[2505]: I0706 23:52:13.356158 2505 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:52:13.356554 kubelet[2505]: I0706 23:52:13.356543 2505 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:52:13.357208 kubelet[2505]: I0706 23:52:13.357182 2505 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:52:13.357330 kubelet[2505]: I0706 23:52:13.357322 2505 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:52:13.357455 kubelet[2505]: E0706 23:52:13.357426 2505 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:52:13.370510 kubelet[2505]: E0706 23:52:13.370412 2505 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:52:13.375502 kubelet[2505]: I0706 23:52:13.375303 2505 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:52:13.375502 kubelet[2505]: I0706 23:52:13.375340 2505 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:52:13.375502 kubelet[2505]: I0706 23:52:13.375441 2505 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:52:13.451650 kubelet[2505]: I0706 23:52:13.450132 2505 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:52:13.451650 kubelet[2505]: I0706 23:52:13.450151 2505 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:52:13.451650 kubelet[2505]: I0706 23:52:13.450172 2505 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:52:13.451650 kubelet[2505]: I0706 23:52:13.450306 2505 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:52:13.451650 kubelet[2505]: I0706 23:52:13.450315 2505 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:52:13.451650 kubelet[2505]: I0706 23:52:13.450333 2505 policy_none.go:49] "None policy: Start" Jul 6 23:52:13.451650 kubelet[2505]: I0706 23:52:13.450343 2505 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:52:13.451650 kubelet[2505]: I0706 23:52:13.450352 2505 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:52:13.451650 kubelet[2505]: I0706 23:52:13.450447 2505 state_mem.go:75] "Updated machine memory state" Jul 6 23:52:13.458224 kubelet[2505]: E0706 23:52:13.456928 2505 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:52:13.458224 kubelet[2505]: I0706 23:52:13.457168 2505 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:52:13.458224 kubelet[2505]: I0706 23:52:13.457180 2505 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:52:13.458224 kubelet[2505]: I0706 23:52:13.457588 2505 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:52:13.458224 kubelet[2505]: I0706 23:52:13.458174 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.460036 kubelet[2505]: I0706 23:52:13.460016 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.460496 kubelet[2505]: I0706 23:52:13.460480 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.465102 kubelet[2505]: E0706 23:52:13.462335 2505 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:52:13.471454 kubelet[2505]: I0706 23:52:13.471428 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:52:13.474452 kubelet[2505]: I0706 23:52:13.474318 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:52:13.475247 kubelet[2505]: I0706 23:52:13.474968 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:52:13.568837 kubelet[2505]: I0706 23:52:13.568810 2505 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.581401 kubelet[2505]: I0706 23:52:13.581360 2505 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.581533 kubelet[2505]: I0706 23:52:13.581482 2505 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.625607 kubelet[2505]: I0706 23:52:13.625266 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e08c42056944087e1e627606e7e619a3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-d-7537ff12ef\" (UID: \"e08c42056944087e1e627606e7e619a3\") " pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.625607 kubelet[2505]: I0706 23:52:13.625312 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e08c42056944087e1e627606e7e619a3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-d-7537ff12ef\" (UID: \"e08c42056944087e1e627606e7e619a3\") " pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.625607 kubelet[2505]: I0706 23:52:13.625340 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fc092ce995ea6a51d0e0100cfcf79f73-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-d-7537ff12ef\" (UID: \"fc092ce995ea6a51d0e0100cfcf79f73\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.625607 kubelet[2505]: I0706 23:52:13.625357 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc092ce995ea6a51d0e0100cfcf79f73-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-d-7537ff12ef\" (UID: \"fc092ce995ea6a51d0e0100cfcf79f73\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.625607 kubelet[2505]: I0706 23:52:13.625374 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc092ce995ea6a51d0e0100cfcf79f73-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-d-7537ff12ef\" (UID: \"fc092ce995ea6a51d0e0100cfcf79f73\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.625882 kubelet[2505]: I0706 23:52:13.625419 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/fc092ce995ea6a51d0e0100cfcf79f73-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-d-7537ff12ef\" (UID: \"fc092ce995ea6a51d0e0100cfcf79f73\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.625882 kubelet[2505]: I0706 23:52:13.625438 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e08c42056944087e1e627606e7e619a3-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-d-7537ff12ef\" (UID: \"e08c42056944087e1e627606e7e619a3\") " pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.625882 kubelet[2505]: I0706 23:52:13.625456 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc092ce995ea6a51d0e0100cfcf79f73-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-d-7537ff12ef\" (UID: \"fc092ce995ea6a51d0e0100cfcf79f73\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.625882 kubelet[2505]: I0706 23:52:13.625472 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78bc417917372bccf9ca7c47276d0d4b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-d-7537ff12ef\" (UID: \"78bc417917372bccf9ca7c47276d0d4b\") " pod="kube-system/kube-scheduler-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:13.773199 kubelet[2505]: E0706 23:52:13.772743 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:13.776123 kubelet[2505]: E0706 23:52:13.775963 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:13.776453 kubelet[2505]: E0706 23:52:13.776379 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:14.302869 kubelet[2505]: I0706 23:52:14.302813 2505 apiserver.go:52] "Watching apiserver" Jul 6 23:52:14.325507 kubelet[2505]: I0706 23:52:14.325453 2505 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:52:14.416609 kubelet[2505]: I0706 23:52:14.416538 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:14.418235 kubelet[2505]: I0706 23:52:14.417754 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:14.418235 kubelet[2505]: E0706 23:52:14.417755 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:14.431527 kubelet[2505]: I0706 23:52:14.431172 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:52:14.431527 kubelet[2505]: E0706 23:52:14.431249 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-d-7537ff12ef\" already exists" 
pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:14.431527 kubelet[2505]: E0706 23:52:14.431439 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:14.435186 kubelet[2505]: I0706 23:52:14.435050 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:52:14.435789 kubelet[2505]: E0706 23:52:14.435216 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-d-7537ff12ef\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:14.436196 kubelet[2505]: E0706 23:52:14.436172 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:14.458799 kubelet[2505]: I0706 23:52:14.458643 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-d-7537ff12ef" podStartSLOduration=1.458620697 podStartE2EDuration="1.458620697s" podCreationTimestamp="2025-07-06 23:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:52:14.4463932 +0000 UTC m=+1.252592782" watchObservedRunningTime="2025-07-06 23:52:14.458620697 +0000 UTC m=+1.264820336" Jul 6 23:52:14.471434 kubelet[2505]: I0706 23:52:14.471143 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-d-7537ff12ef" podStartSLOduration=1.4711217699999999 podStartE2EDuration="1.47112177s" podCreationTimestamp="2025-07-06 23:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:52:14.459598088 +0000 UTC m=+1.265797668" watchObservedRunningTime="2025-07-06 23:52:14.47112177 +0000 UTC m=+1.277321354" Jul 6 23:52:14.471434 kubelet[2505]: I0706 23:52:14.471291 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-d-7537ff12ef" podStartSLOduration=1.471283561 podStartE2EDuration="1.471283561s" podCreationTimestamp="2025-07-06 23:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:52:14.47087725 +0000 UTC m=+1.277076833" watchObservedRunningTime="2025-07-06 23:52:14.471283561 +0000 UTC m=+1.277483144" Jul 6 23:52:15.421106 kubelet[2505]: E0706 23:52:15.419380 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:15.421106 kubelet[2505]: E0706 23:52:15.419625 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:16.421284 kubelet[2505]: E0706 23:52:16.421248 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:17.423142 kubelet[2505]: E0706 
23:52:17.423112 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:18.232639 kubelet[2505]: I0706 23:52:18.232611 2505 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:52:18.233166 containerd[1462]: time="2025-07-06T23:52:18.233132758Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:52:18.233754 kubelet[2505]: I0706 23:52:18.233733 2505 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:52:18.659026 systemd[1]: Created slice kubepods-besteffort-pod7b60cf10_6ac2_4406_8191_42adab140be3.slice - libcontainer container kubepods-besteffort-pod7b60cf10_6ac2_4406_8191_42adab140be3.slice. Jul 6 23:52:18.670497 kubelet[2505]: I0706 23:52:18.670441 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7kfg\" (UniqueName: \"kubernetes.io/projected/7b60cf10-6ac2-4406-8191-42adab140be3-kube-api-access-w7kfg\") pod \"kube-proxy-9x8bx\" (UID: \"7b60cf10-6ac2-4406-8191-42adab140be3\") " pod="kube-system/kube-proxy-9x8bx" Jul 6 23:52:18.670919 kubelet[2505]: I0706 23:52:18.670511 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b60cf10-6ac2-4406-8191-42adab140be3-kube-proxy\") pod \"kube-proxy-9x8bx\" (UID: \"7b60cf10-6ac2-4406-8191-42adab140be3\") " pod="kube-system/kube-proxy-9x8bx" Jul 6 23:52:18.670919 kubelet[2505]: I0706 23:52:18.670553 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b60cf10-6ac2-4406-8191-42adab140be3-xtables-lock\") pod \"kube-proxy-9x8bx\" (UID: \"7b60cf10-6ac2-4406-8191-42adab140be3\") " pod="kube-system/kube-proxy-9x8bx" Jul 6 23:52:18.670919 kubelet[2505]: I0706 23:52:18.670578 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b60cf10-6ac2-4406-8191-42adab140be3-lib-modules\") pod \"kube-proxy-9x8bx\" (UID: \"7b60cf10-6ac2-4406-8191-42adab140be3\") " pod="kube-system/kube-proxy-9x8bx" Jul 6 23:52:18.809400 kubelet[2505]: E0706 23:52:18.809086 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:18.967214 kubelet[2505]: E0706 23:52:18.966765 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:18.969355 containerd[1462]: time="2025-07-06T23:52:18.969300374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9x8bx,Uid:7b60cf10-6ac2-4406-8191-42adab140be3,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:19.000543 containerd[1462]: time="2025-07-06T23:52:19.000268660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:19.000543 containerd[1462]: time="2025-07-06T23:52:19.000456096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:19.001189 containerd[1462]: time="2025-07-06T23:52:19.001095245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:19.001475 containerd[1462]: time="2025-07-06T23:52:19.001405599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:19.034307 systemd[1]: Started cri-containerd-8619de9f8426be4f08edd2073a2118ca769e331a273932016f642dc283cd8862.scope - libcontainer container 8619de9f8426be4f08edd2073a2118ca769e331a273932016f642dc283cd8862. Jul 6 23:52:19.073306 containerd[1462]: time="2025-07-06T23:52:19.073150697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9x8bx,Uid:7b60cf10-6ac2-4406-8191-42adab140be3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8619de9f8426be4f08edd2073a2118ca769e331a273932016f642dc283cd8862\"" Jul 6 23:52:19.074923 kubelet[2505]: E0706 23:52:19.074899 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:19.080430 containerd[1462]: time="2025-07-06T23:52:19.080296374Z" level=info msg="CreateContainer within sandbox \"8619de9f8426be4f08edd2073a2118ca769e331a273932016f642dc283cd8862\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:52:19.095329 containerd[1462]: time="2025-07-06T23:52:19.095198602Z" level=info msg="CreateContainer within sandbox \"8619de9f8426be4f08edd2073a2118ca769e331a273932016f642dc283cd8862\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f798ad52318c8e349e96569eb0b14dcc8a084c8746444edb4f8dd8c61120ff7\"" Jul 6 23:52:19.097468 containerd[1462]: time="2025-07-06T23:52:19.096172204Z" level=info msg="StartContainer for \"3f798ad52318c8e349e96569eb0b14dcc8a084c8746444edb4f8dd8c61120ff7\"" Jul 6 23:52:19.138455 systemd[1]: Started cri-containerd-3f798ad52318c8e349e96569eb0b14dcc8a084c8746444edb4f8dd8c61120ff7.scope - libcontainer container 3f798ad52318c8e349e96569eb0b14dcc8a084c8746444edb4f8dd8c61120ff7. Jul 6 23:52:19.184452 containerd[1462]: time="2025-07-06T23:52:19.184414723Z" level=info msg="StartContainer for \"3f798ad52318c8e349e96569eb0b14dcc8a084c8746444edb4f8dd8c61120ff7\" returns successfully" Jul 6 23:52:19.400051 systemd[1]: Created slice kubepods-besteffort-pod7ccce51f_0c2c_47eb_964c_7036738aede0.slice - libcontainer container kubepods-besteffort-pod7ccce51f_0c2c_47eb_964c_7036738aede0.slice. 
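The kubepods slice names that systemd reports here can be derived directly from the pod UID: a dash is a hierarchy separator in systemd slice unit names, so the kubelet escapes the UID's dashes as underscores before embedding it. A minimal sketch of that mapping, using the UID from the records above (illustrative only, not the kubelet's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceNameForPod reproduces the naming pattern visible in the log:
// "-" is a hierarchy separator in systemd slice names, so the pod UID's
// dashes are escaped as underscores. Sketch only, not kubelet source.
func sliceNameForPod(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UID taken from the tigera-operator slice record above.
	fmt.Println(sliceNameForPod("besteffort", "7ccce51f-0c2c-47eb-964c-7036738aede0"))
	// kubepods-besteffort-pod7ccce51f_0c2c_47eb_964c_7036738aede0.slice
}
```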
Jul 6 23:52:19.430904 kubelet[2505]: E0706 23:52:19.429679 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:19.430904 kubelet[2505]: E0706 23:52:19.430892 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:19.475186 kubelet[2505]: I0706 23:52:19.475142 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ccce51f-0c2c-47eb-964c-7036738aede0-var-lib-calico\") pod \"tigera-operator-747864d56d-g988l\" (UID: \"7ccce51f-0c2c-47eb-964c-7036738aede0\") " pod="tigera-operator/tigera-operator-747864d56d-g988l" Jul 6 23:52:19.475418 kubelet[2505]: I0706 23:52:19.475200 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrg6d\" (UniqueName: \"kubernetes.io/projected/7ccce51f-0c2c-47eb-964c-7036738aede0-kube-api-access-vrg6d\") pod \"tigera-operator-747864d56d-g988l\" (UID: \"7ccce51f-0c2c-47eb-964c-7036738aede0\") " pod="tigera-operator/tigera-operator-747864d56d-g988l" Jul 6 23:52:19.707246 containerd[1462]: time="2025-07-06T23:52:19.707131105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-g988l,Uid:7ccce51f-0c2c-47eb-964c-7036738aede0,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:52:19.738142 containerd[1462]: time="2025-07-06T23:52:19.737279678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:19.738142 containerd[1462]: time="2025-07-06T23:52:19.737367731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:19.738142 containerd[1462]: time="2025-07-06T23:52:19.737402570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:19.738142 containerd[1462]: time="2025-07-06T23:52:19.737513355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:19.764333 systemd[1]: Started cri-containerd-4bae1687aeeef0337c089b5d31072869fdb1522b195e8420b2475355e24cd0cd.scope - libcontainer container 4bae1687aeeef0337c089b5d31072869fdb1522b195e8420b2475355e24cd0cd. Jul 6 23:52:19.791450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865049064.mount: Deactivated successfully. 
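The "Nameserver limits exceeded" errors recurring throughout this log come from the kubelet capping the pod resolv.conf at three nameservers and dropping the rest. A minimal sketch of that truncation, assuming a resolv.conf-style input with a hypothetical fourth entry (the droplet's actual resolv.conf is not shown in the log):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// The kubelet allows at most three nameservers and logs the
// "Nameserver limits exceeded" error seen above when more are present.
// Sketch of the truncation, not the kubelet's own dns.go code.
const maxNameservers = 3

func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // extras are omitted, as the log reports
	}
	return servers
}

func main() {
	// Hypothetical host file with four entries; only the first three survive.
	conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 8.8.8.8\n"
	fmt.Println(applyNameserverLimit(conf)) // [67.207.67.3 67.207.67.2 67.207.67.3]
}
```

The applied line printed by this sketch matches the one the kubelet logs: 67.207.67.3 67.207.67.2 67.207.67.3.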
Jul 6 23:52:19.826673 containerd[1462]: time="2025-07-06T23:52:19.826625087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-g988l,Uid:7ccce51f-0c2c-47eb-964c-7036738aede0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4bae1687aeeef0337c089b5d31072869fdb1522b195e8420b2475355e24cd0cd\"" Jul 6 23:52:19.829305 containerd[1462]: time="2025-07-06T23:52:19.829211051Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:52:20.433714 kubelet[2505]: E0706 23:52:20.433676 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:21.715746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177398100.mount: Deactivated successfully. Jul 6 23:52:21.780004 kubelet[2505]: E0706 23:52:21.778763 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:21.797318 kubelet[2505]: I0706 23:52:21.796967 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9x8bx" podStartSLOduration=3.7969472619999998 podStartE2EDuration="3.796947262s" podCreationTimestamp="2025-07-06 23:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:52:19.455124661 +0000 UTC m=+6.261324244" watchObservedRunningTime="2025-07-06 23:52:21.796947262 +0000 UTC m=+8.603146836" Jul 6 23:52:22.437127 kubelet[2505]: E0706 23:52:22.436982 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:22.691722 containerd[1462]: time="2025-07-06T23:52:22.691310682Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:22.693045 containerd[1462]: time="2025-07-06T23:52:22.692743618Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 6 23:52:22.694122 containerd[1462]: time="2025-07-06T23:52:22.693607303Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:22.696798 containerd[1462]: time="2025-07-06T23:52:22.696686209Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:22.697938 containerd[1462]: time="2025-07-06T23:52:22.697796486Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.868547266s" Jul 6 23:52:22.697938 containerd[1462]: time="2025-07-06T23:52:22.697833895Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 6 23:52:22.703140 containerd[1462]: 
time="2025-07-06T23:52:22.703022337Z" level=info msg="CreateContainer within sandbox \"4bae1687aeeef0337c089b5d31072869fdb1522b195e8420b2475355e24cd0cd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:52:22.716445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2205730451.mount: Deactivated successfully. Jul 6 23:52:22.720897 containerd[1462]: time="2025-07-06T23:52:22.720750198Z" level=info msg="CreateContainer within sandbox \"4bae1687aeeef0337c089b5d31072869fdb1522b195e8420b2475355e24cd0cd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"038996632c5f2c67a1aee8149ba7f376dbd1d152aae2b26c3b85f77c859d1965\"" Jul 6 23:52:22.723876 containerd[1462]: time="2025-07-06T23:52:22.722859581Z" level=info msg="StartContainer for \"038996632c5f2c67a1aee8149ba7f376dbd1d152aae2b26c3b85f77c859d1965\"" Jul 6 23:52:22.756975 systemd[1]: run-containerd-runc-k8s.io-038996632c5f2c67a1aee8149ba7f376dbd1d152aae2b26c3b85f77c859d1965-runc.r1vNni.mount: Deactivated successfully. Jul 6 23:52:22.768315 systemd[1]: Started cri-containerd-038996632c5f2c67a1aee8149ba7f376dbd1d152aae2b26c3b85f77c859d1965.scope - libcontainer container 038996632c5f2c67a1aee8149ba7f376dbd1d152aae2b26c3b85f77c859d1965. Jul 6 23:52:22.802621 containerd[1462]: time="2025-07-06T23:52:22.802563718Z" level=info msg="StartContainer for \"038996632c5f2c67a1aee8149ba7f376dbd1d152aae2b26c3b85f77c859d1965\" returns successfully" Jul 6 23:52:26.449730 kubelet[2505]: E0706 23:52:26.449684 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:26.494819 kubelet[2505]: E0706 23:52:26.494764 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:26.504495 kubelet[2505]: I0706 23:52:26.504435 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-g988l" podStartSLOduration=4.63380329 podStartE2EDuration="7.504416028s" podCreationTimestamp="2025-07-06 23:52:19 +0000 UTC" firstStartedPulling="2025-07-06 23:52:19.828745977 +0000 UTC m=+6.634945551" lastFinishedPulling="2025-07-06 23:52:22.699358714 +0000 UTC m=+9.505558289" observedRunningTime="2025-07-06 23:52:23.450384724 +0000 UTC m=+10.256584298" watchObservedRunningTime="2025-07-06 23:52:26.504416028 +0000 UTC m=+13.310615602" Jul 6 23:52:28.086153 update_engine[1442]: I20250706 23:52:28.085565 1442 update_attempter.cc:509] Updating boot flags... Jul 6 23:52:28.123099 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2888) Jul 6 23:52:28.193190 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2887) Jul 6 23:52:28.273144 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2887) Jul 6 23:52:29.772069 sudo[1647]: pam_unix(sudo:session): session closed for user root Jul 6 23:52:29.776239 sshd[1644]: pam_unix(sshd:session): session closed for user core Jul 6 23:52:29.783516 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:52:29.784988 systemd[1]: sshd@6-134.199.239.131:22-139.178.89.65:37516.service: Deactivated successfully. Jul 6 23:52:29.787725 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 6 23:52:29.790995 systemd[1]: session-7.scope: Consumed 5.472s CPU time, 145.4M memory peak, 0B memory swap peak. Jul 6 23:52:29.793372 systemd-logind[1441]: Removed session 7. Jul 6 23:52:33.644790 systemd[1]: Created slice kubepods-besteffort-podbc810ed7_fb35_43a1_a9fe_8612b6449187.slice - libcontainer container kubepods-besteffort-podbc810ed7_fb35_43a1_a9fe_8612b6449187.slice. Jul 6 23:52:33.672569 kubelet[2505]: I0706 23:52:33.672387 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47nqq\" (UniqueName: \"kubernetes.io/projected/bc810ed7-fb35-43a1-a9fe-8612b6449187-kube-api-access-47nqq\") pod \"calico-typha-659b48f488-dl4x7\" (UID: \"bc810ed7-fb35-43a1-a9fe-8612b6449187\") " pod="calico-system/calico-typha-659b48f488-dl4x7" Jul 6 23:52:33.672569 kubelet[2505]: I0706 23:52:33.672451 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc810ed7-fb35-43a1-a9fe-8612b6449187-tigera-ca-bundle\") pod \"calico-typha-659b48f488-dl4x7\" (UID: \"bc810ed7-fb35-43a1-a9fe-8612b6449187\") " pod="calico-system/calico-typha-659b48f488-dl4x7" Jul 6 23:52:33.672569 kubelet[2505]: I0706 23:52:33.672480 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bc810ed7-fb35-43a1-a9fe-8612b6449187-typha-certs\") pod \"calico-typha-659b48f488-dl4x7\" (UID: \"bc810ed7-fb35-43a1-a9fe-8612b6449187\") " pod="calico-system/calico-typha-659b48f488-dl4x7" Jul 6 23:52:33.951234 kubelet[2505]: E0706 23:52:33.950824 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:33.952434 containerd[1462]: time="2025-07-06T23:52:33.952394951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-659b48f488-dl4x7,Uid:bc810ed7-fb35-43a1-a9fe-8612b6449187,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:33.991711 containerd[1462]: time="2025-07-06T23:52:33.991137809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:33.991711 containerd[1462]: time="2025-07-06T23:52:33.991349048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:33.991711 containerd[1462]: time="2025-07-06T23:52:33.991368121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:33.995559 containerd[1462]: time="2025-07-06T23:52:33.995411255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:34.049717 systemd[1]: Created slice kubepods-besteffort-pod76837bc2_973e_485c_9310_9292db5d160f.slice - libcontainer container kubepods-besteffort-pod76837bc2_973e_485c_9310_9292db5d160f.slice. Jul 6 23:52:34.069281 systemd[1]: Started cri-containerd-cf7a8f811676b94e6fc719842e9781f2cb087311e67bbf703f281486ad4c4788.scope - libcontainer container cf7a8f811676b94e6fc719842e9781f2cb087311e67bbf703f281486ad4c4788. 
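The kubelet records in this log use the klog header layout, <severity>MMDD hh:mm:ss.micros <pid> <file>:<line>] <message>, e.g. "E0706 23:52:33.950824 2505 dns.go:153]". A small sketch that splits one such header apart; the regex is an assumption tuned to the lines seen here:

```go
package main

import (
	"fmt"
	"regexp"
)

// klog header: severity (I/W/E/F), MMDD date, time with microseconds,
// pid, then source file:line followed by "]".
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.-]+):(\d+)\]`)

func main() {
	m := klogHeader.FindStringSubmatch(`E0706 23:52:33.950824 2505 dns.go:153] "Nameserver limits exceeded"`)
	if m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```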
Jul 6 23:52:34.075598 kubelet[2505]: I0706 23:52:34.075517 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jklv\" (UniqueName: \"kubernetes.io/projected/76837bc2-973e-485c-9310-9292db5d160f-kube-api-access-4jklv\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.075598 kubelet[2505]: I0706 23:52:34.075585 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/76837bc2-973e-485c-9310-9292db5d160f-flexvol-driver-host\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.075801 kubelet[2505]: I0706 23:52:34.075614 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76837bc2-973e-485c-9310-9292db5d160f-xtables-lock\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.075801 kubelet[2505]: I0706 23:52:34.075635 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76837bc2-973e-485c-9310-9292db5d160f-lib-modules\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.075801 kubelet[2505]: I0706 23:52:34.075651 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/76837bc2-973e-485c-9310-9292db5d160f-node-certs\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.075801 kubelet[2505]: I0706 23:52:34.075670 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/76837bc2-973e-485c-9310-9292db5d160f-cni-bin-dir\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.075801 kubelet[2505]: I0706 23:52:34.075687 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/76837bc2-973e-485c-9310-9292db5d160f-policysync\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.075937 kubelet[2505]: I0706 23:52:34.075704 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/76837bc2-973e-485c-9310-9292db5d160f-var-lib-calico\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.075937 kubelet[2505]: I0706 23:52:34.075719 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/76837bc2-973e-485c-9310-9292db5d160f-cni-log-dir\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.075937 kubelet[2505]: I0706 23:52:34.075734 2505 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/76837bc2-973e-485c-9310-9292db5d160f-cni-net-dir\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.075937 kubelet[2505]: I0706 23:52:34.075750 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/76837bc2-973e-485c-9310-9292db5d160f-var-run-calico\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.075937 kubelet[2505]: I0706 23:52:34.075769 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76837bc2-973e-485c-9310-9292db5d160f-tigera-ca-bundle\") pod \"calico-node-fl6s6\" (UID: \"76837bc2-973e-485c-9310-9292db5d160f\") " pod="calico-system/calico-node-fl6s6" Jul 6 23:52:34.152802 containerd[1462]: time="2025-07-06T23:52:34.152611725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-659b48f488-dl4x7,Uid:bc810ed7-fb35-43a1-a9fe-8612b6449187,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf7a8f811676b94e6fc719842e9781f2cb087311e67bbf703f281486ad4c4788\"" Jul 6 23:52:34.153757 kubelet[2505]: E0706 23:52:34.153731 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:34.156189 containerd[1462]: time="2025-07-06T23:52:34.156146388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:52:34.180505 kubelet[2505]: E0706 23:52:34.179970 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.180505 kubelet[2505]: W0706 23:52:34.180018 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.180505 kubelet[2505]: E0706 23:52:34.180044 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.180786 kubelet[2505]: E0706 23:52:34.180549 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.180786 kubelet[2505]: W0706 23:52:34.180684 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.180786 kubelet[2505]: E0706 23:52:34.180700 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:34.183847 kubelet[2505]: E0706 23:52:34.181224 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.183847 kubelet[2505]: W0706 23:52:34.181250 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.183847 kubelet[2505]: E0706 23:52:34.181263 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.183847 kubelet[2505]: E0706 23:52:34.183845 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.184197 kubelet[2505]: W0706 23:52:34.183861 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.184197 kubelet[2505]: E0706 23:52:34.183923 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.184808 kubelet[2505]: E0706 23:52:34.184503 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.184808 kubelet[2505]: W0706 23:52:34.184627 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.184808 kubelet[2505]: E0706 23:52:34.184642 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.187696 kubelet[2505]: E0706 23:52:34.185046 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.187696 kubelet[2505]: W0706 23:52:34.185093 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.187696 kubelet[2505]: E0706 23:52:34.185116 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.187696 kubelet[2505]: E0706 23:52:34.185463 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.187696 kubelet[2505]: W0706 23:52:34.185494 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.187696 kubelet[2505]: E0706 23:52:34.185507 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:34.187696 kubelet[2505]: E0706 23:52:34.185882 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.187696 kubelet[2505]: W0706 23:52:34.185892 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.187696 kubelet[2505]: E0706 23:52:34.185902 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.187696 kubelet[2505]: E0706 23:52:34.186280 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.188027 kubelet[2505]: W0706 23:52:34.186289 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.188027 kubelet[2505]: E0706 23:52:34.186300 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.188027 kubelet[2505]: E0706 23:52:34.186709 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.188027 kubelet[2505]: W0706 23:52:34.186733 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.188027 kubelet[2505]: E0706 23:52:34.186744 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.188027 kubelet[2505]: E0706 23:52:34.187235 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.188027 kubelet[2505]: W0706 23:52:34.187245 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.188027 kubelet[2505]: E0706 23:52:34.187273 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.188027 kubelet[2505]: E0706 23:52:34.187646 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.188027 kubelet[2505]: W0706 23:52:34.187672 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.188326 kubelet[2505]: E0706 23:52:34.187684 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:34.188326 kubelet[2505]: E0706 23:52:34.188138 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.188326 kubelet[2505]: W0706 23:52:34.188148 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.188326 kubelet[2505]: E0706 23:52:34.188162 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.188468 kubelet[2505]: E0706 23:52:34.188367 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.188468 kubelet[2505]: W0706 23:52:34.188375 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.188468 kubelet[2505]: E0706 23:52:34.188384 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.189171 kubelet[2505]: E0706 23:52:34.188689 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.189171 kubelet[2505]: W0706 23:52:34.188703 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.189171 kubelet[2505]: E0706 23:52:34.188713 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.189171 kubelet[2505]: E0706 23:52:34.189024 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.189171 kubelet[2505]: W0706 23:52:34.189034 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.189171 kubelet[2505]: E0706 23:52:34.189044 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.189468 kubelet[2505]: E0706 23:52:34.189301 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.189468 kubelet[2505]: W0706 23:52:34.189310 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.189468 kubelet[2505]: E0706 23:52:34.189319 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:34.189711 kubelet[2505]: E0706 23:52:34.189697 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.189711 kubelet[2505]: W0706 23:52:34.189709 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.189774 kubelet[2505]: E0706 23:52:34.189720 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.190122 kubelet[2505]: E0706 23:52:34.190029 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.190286 kubelet[2505]: W0706 23:52:34.190046 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.190286 kubelet[2505]: E0706 23:52:34.190214 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.191082 kubelet[2505]: E0706 23:52:34.190684 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.191082 kubelet[2505]: W0706 23:52:34.190710 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.191082 kubelet[2505]: E0706 23:52:34.190723 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.191747 kubelet[2505]: E0706 23:52:34.191729 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.191747 kubelet[2505]: W0706 23:52:34.191743 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.191827 kubelet[2505]: E0706 23:52:34.191755 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.195776 kubelet[2505]: E0706 23:52:34.195753 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.196022 kubelet[2505]: W0706 23:52:34.195993 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.196342 kubelet[2505]: E0706 23:52:34.196024 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:34.324767 kubelet[2505]: E0706 23:52:34.324711 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fvz6x" podUID="f74bf16c-5f59-4774-ac59-c82b3e42ab4b" Jul 6 23:52:34.359089 containerd[1462]: time="2025-07-06T23:52:34.358759686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fl6s6,Uid:76837bc2-973e-485c-9310-9292db5d160f,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:34.370440 kubelet[2505]: E0706 23:52:34.370396 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.370440 kubelet[2505]: W0706 23:52:34.370429 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.370826 kubelet[2505]: E0706 23:52:34.370454 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.370826 kubelet[2505]: E0706 23:52:34.370767 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.370826 kubelet[2505]: W0706 23:52:34.370779 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.370826 kubelet[2505]: E0706 23:52:34.370795 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.371169 kubelet[2505]: E0706 23:52:34.371134 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.371169 kubelet[2505]: W0706 23:52:34.371147 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.371343 kubelet[2505]: E0706 23:52:34.371178 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.372198 kubelet[2505]: E0706 23:52:34.372175 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.372198 kubelet[2505]: W0706 23:52:34.372194 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.372641 kubelet[2505]: E0706 23:52:34.372210 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:34.373369 kubelet[2505]: E0706 23:52:34.372783 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.373369 kubelet[2505]: W0706 23:52:34.373014 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.373369 kubelet[2505]: E0706 23:52:34.373029 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.374859 kubelet[2505]: E0706 23:52:34.373923 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.374859 kubelet[2505]: W0706 23:52:34.373942 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.374859 kubelet[2505]: E0706 23:52:34.374194 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.375819 kubelet[2505]: E0706 23:52:34.375687 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.375819 kubelet[2505]: W0706 23:52:34.375704 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.375819 kubelet[2505]: E0706 23:52:34.375722 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.376605 kubelet[2505]: E0706 23:52:34.376324 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.376605 kubelet[2505]: W0706 23:52:34.376341 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.376605 kubelet[2505]: E0706 23:52:34.376353 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.377591 kubelet[2505]: E0706 23:52:34.377529 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.377591 kubelet[2505]: W0706 23:52:34.377543 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.377591 kubelet[2505]: E0706 23:52:34.377555 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:34.378417 kubelet[2505]: E0706 23:52:34.378119 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.378417 kubelet[2505]: W0706 23:52:34.378132 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.378417 kubelet[2505]: E0706 23:52:34.378144 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.378830 kubelet[2505]: E0706 23:52:34.378588 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.378830 kubelet[2505]: W0706 23:52:34.378719 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.378830 kubelet[2505]: E0706 23:52:34.378731 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.379302 kubelet[2505]: E0706 23:52:34.379287 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.379445 kubelet[2505]: W0706 23:52:34.379300 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.379445 kubelet[2505]: E0706 23:52:34.379318 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.381107 kubelet[2505]: E0706 23:52:34.380429 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.381107 kubelet[2505]: W0706 23:52:34.380447 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.381107 kubelet[2505]: E0706 23:52:34.380459 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.381370 kubelet[2505]: E0706 23:52:34.381356 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.381413 kubelet[2505]: W0706 23:52:34.381375 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.381413 kubelet[2505]: E0706 23:52:34.381386 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:34.381576 kubelet[2505]: E0706 23:52:34.381561 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.381576 kubelet[2505]: W0706 23:52:34.381569 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.381627 kubelet[2505]: E0706 23:52:34.381578 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.382273 kubelet[2505]: E0706 23:52:34.382143 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.382273 kubelet[2505]: W0706 23:52:34.382162 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.382273 kubelet[2505]: E0706 23:52:34.382174 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.383758 kubelet[2505]: E0706 23:52:34.382938 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.383758 kubelet[2505]: W0706 23:52:34.382953 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.383758 kubelet[2505]: E0706 23:52:34.382979 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.383758 kubelet[2505]: E0706 23:52:34.383707 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.383758 kubelet[2505]: W0706 23:52:34.383718 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.383758 kubelet[2505]: E0706 23:52:34.383731 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:34.385946 kubelet[2505]: E0706 23:52:34.385708 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:34.385946 kubelet[2505]: W0706 23:52:34.385724 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:34.385946 kubelet[2505]: E0706 23:52:34.385736 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:52:34.387195 kubelet[2505]: E0706 23:52:34.387127 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:52:34.387195 kubelet[2505]: W0706 23:52:34.387144 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:52:34.387195 kubelet[2505]: E0706 23:52:34.387157 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three-line FlexVolume probe failure (driver-call.go:262, driver-call.go:149, plugins.go:703) repeats continuously from 23:52:34.387 through 23:52:38.536; all further repetitions are elided from this excerpt ...]
Jul 6 23:52:34.387700 kubelet[2505]: I0706 23:52:34.387644 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f74bf16c-5f59-4774-ac59-c82b3e42ab4b-kubelet-dir\") pod \"csi-node-driver-fvz6x\" (UID: \"f74bf16c-5f59-4774-ac59-c82b3e42ab4b\") " pod="calico-system/csi-node-driver-fvz6x"
Jul 6 23:52:34.388833 kubelet[2505]: I0706 23:52:34.388646 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz829\" (UniqueName: \"kubernetes.io/projected/f74bf16c-5f59-4774-ac59-c82b3e42ab4b-kube-api-access-nz829\") pod \"csi-node-driver-fvz6x\" (UID: \"f74bf16c-5f59-4774-ac59-c82b3e42ab4b\") " pod="calico-system/csi-node-driver-fvz6x"
Jul 6 23:52:34.389426 kubelet[2505]: I0706 23:52:34.389181 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f74bf16c-5f59-4774-ac59-c82b3e42ab4b-registration-dir\") pod \"csi-node-driver-fvz6x\" (UID: \"f74bf16c-5f59-4774-ac59-c82b3e42ab4b\") " pod="calico-system/csi-node-driver-fvz6x"
Jul 6 23:52:34.391024 kubelet[2505]: I0706 23:52:34.390722 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f74bf16c-5f59-4774-ac59-c82b3e42ab4b-socket-dir\") pod \"csi-node-driver-fvz6x\" (UID: \"f74bf16c-5f59-4774-ac59-c82b3e42ab4b\") " pod="calico-system/csi-node-driver-fvz6x"
Jul 6 23:52:34.392814 kubelet[2505]: I0706 23:52:34.392403 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f74bf16c-5f59-4774-ac59-c82b3e42ab4b-varrun\") pod \"csi-node-driver-fvz6x\" (UID: \"f74bf16c-5f59-4774-ac59-c82b3e42ab4b\") " pod="calico-system/csi-node-driver-fvz6x"
Jul 6 23:52:34.400138 containerd[1462]: time="2025-07-06T23:52:34.398045562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:52:34.400138 containerd[1462]: time="2025-07-06T23:52:34.398725211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:52:34.400138 containerd[1462]: time="2025-07-06T23:52:34.398739109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:52:34.400138 containerd[1462]: time="2025-07-06T23:52:34.398933723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:52:34.433303 systemd[1]: Started cri-containerd-628c2f0078620d14d46d6ac117923c5b97c3ca878bba1ead3ef250cfff20b3d9.scope - libcontainer container 628c2f0078620d14d46d6ac117923c5b97c3ca878bba1ead3ef250cfff20b3d9.
Jul 6 23:52:34.490914 containerd[1462]: time="2025-07-06T23:52:34.490865806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fl6s6,Uid:76837bc2-973e-485c-9310-9292db5d160f,Namespace:calico-system,Attempt:0,} returns sandbox id \"628c2f0078620d14d46d6ac117923c5b97c3ca878bba1ead3ef250cfff20b3d9\""
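What the repeated failure means: kubelet's FlexVolume dynamic probe executes every driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the single argument init and unmarshals the JSON the driver prints to stdout (driver-call.go:262). Here the nodeagent~uds plugin directory exists but its uds executable does not, so each call yields empty output, the unmarshal fails with "unexpected end of JSON input", and plugins.go:703 skips the plugin on every probe cycle. Below is a minimal sketch of the expected init handshake, assuming the standard FlexVolume contract; the simplified DriverStatus shape is illustrative, not kubelet's exact type:

    // flexvolume-stub.go: hypothetical stand-in for the missing uds binary.
    // It answers kubelet's "init" call with the JSON status the probe expects.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // DriverStatus approximates the result kubelet unmarshals in driver-call.go.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Empty stdout is exactly what triggers "unexpected end of JSON
            // input" above; a valid driver must always print a JSON status.
            out, _ := json.Marshal(DriverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
        fmt.Println(string(out))
    }

Either installing a driver that completes this handshake or removing the stale nodeagent~uds directory (a mount convention associated with Istio's node agent) would stop the probe spam; the failures are otherwise harmless to the Calico rollout that follows.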
Jul 6 23:52:35.855543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257936596.mount: Deactivated successfully.
Jul 6 23:52:36.365895 kubelet[2505]: E0706 23:52:36.365841 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fvz6x" podUID="f74bf16c-5f59-4774-ac59-c82b3e42ab4b"
Jul 6 23:52:36.971131 containerd[1462]: time="2025-07-06T23:52:36.971049963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:52:36.974611 containerd[1462]: time="2025-07-06T23:52:36.974528166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 6 23:52:36.975322 containerd[1462]: time="2025-07-06T23:52:36.975264987Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:52:36.983910 containerd[1462]: time="2025-07-06T23:52:36.982448074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:52:36.985541 containerd[1462]: time="2025-07-06T23:52:36.985481284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.829267398s"
Jul 6 23:52:36.985541 containerd[1462]: time="2025-07-06T23:52:36.985535812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 6 23:52:36.992145 containerd[1462]: time="2025-07-06T23:52:36.992108933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 6 23:52:37.020089 containerd[1462]: time="2025-07-06T23:52:37.019496142Z" level=info msg="CreateContainer within sandbox \"cf7a8f811676b94e6fc719842e9781f2cb087311e67bbf703f281486ad4c4788\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 6 23:52:37.039739 containerd[1462]: time="2025-07-06T23:52:37.034600288Z" level=info msg="CreateContainer within sandbox \"cf7a8f811676b94e6fc719842e9781f2cb087311e67bbf703f281486ad4c4788\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b314e364e26db25f19ecd6d089ec39ed2979b10ed236ffa26586447fc6ad8536\""
Jul 6 23:52:37.039739 containerd[1462]: time="2025-07-06T23:52:37.036284609Z" level=info msg="StartContainer for \"b314e364e26db25f19ecd6d089ec39ed2979b10ed236ffa26586447fc6ad8536\""
Jul 6 23:52:37.089294 systemd[1]: Started cri-containerd-b314e364e26db25f19ecd6d089ec39ed2979b10ed236ffa26586447fc6ad8536.scope - libcontainer container b314e364e26db25f19ecd6d089ec39ed2979b10ed236ffa26586447fc6ad8536.
Jul 6 23:52:37.169856 containerd[1462]: time="2025-07-06T23:52:37.169784248Z" level=info msg="StartContainer for \"b314e364e26db25f19ecd6d089ec39ed2979b10ed236ffa26586447fc6ad8536\" returns successfully"
Jul 6 23:52:37.490098 kubelet[2505]: E0706 23:52:37.488798 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 6 23:52:37.510366 kubelet[2505]: I0706 23:52:37.507631 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-659b48f488-dl4x7" podStartSLOduration=1.6722792869999998 podStartE2EDuration="4.50761551s" podCreationTimestamp="2025-07-06 23:52:33 +0000 UTC" firstStartedPulling="2025-07-06 23:52:34.155515696 +0000 UTC m=+20.961715270" lastFinishedPulling="2025-07-06 23:52:36.990851931 +0000 UTC m=+23.797051493" observedRunningTime="2025-07-06 23:52:37.507134059 +0000 UTC m=+24.313333640" watchObservedRunningTime="2025-07-06 23:52:37.50761551 +0000 UTC m=+24.313815093"
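The startup-latency record above is internally consistent, which is worth spelling out. The tracked image-pull window is lastFinishedPulling minus firstStartedPulling = 23:52:36.990851931 - 23:52:34.155515696 ≈ 2.835 s (the typha pull alone accounts for 2.829 s of that). The end-to-end duration is observedRunningTime minus podCreationTimestamp = 23:52:37.507 - 23:52:33 ≈ 4.508 s, the reported podStartE2EDuration. And podStartSLOduration is the end-to-end duration with pull time excluded: 4.508 - 2.835 ≈ 1.672 s, matching the logged 1.6722... value. The SLO metric deliberately excludes image pulling, so slow registry fetches do not count against pod-startup SLOs.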
Jul 6 23:52:38.359003 kubelet[2505]: E0706 23:52:38.358896 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fvz6x" podUID="f74bf16c-5f59-4774-ac59-c82b3e42ab4b"
Jul 6 23:52:38.490561 kubelet[2505]: E0706 23:52:38.490526 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
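The recurring dns.go:153 warning is a separate, minor issue: kubelet applies at most three nameserver entries because the glibc resolver reads only the first three (MAXNS), so a resolv.conf with more entries gets truncated. The applied line here also contains a duplicate (67.207.67.3 twice), so one of the three usable slots is wasted. Below is an illustrative reproduction of the applied/omitted split, assuming a standard resolv.conf; this is a hypothetical helper, not kubelet's actual code:

    // resolvcheck.go: hypothetical helper mimicking the three-nameserver cap.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        // Collect every "nameserver" entry in file order.
        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > 3 {
            // Mirrors the kubelet warning: entries past the third are omitted.
            fmt.Printf("applied: %v omitted: %v\n", servers[:3], servers[3:])
        } else {
            fmt.Printf("applied: %v\n", servers)
        }
    }

Deduplicating the resolver entries in the droplet's resolv.conf would silence the warning.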
Error: unexpected end of JSON input" Jul 6 23:52:38.539974 kubelet[2505]: E0706 23:52:38.539965 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:38.540033 kubelet[2505]: W0706 23:52:38.540024 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:38.540098 kubelet[2505]: E0706 23:52:38.540089 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:38.540344 kubelet[2505]: E0706 23:52:38.540334 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:38.540411 kubelet[2505]: W0706 23:52:38.540402 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:38.540657 kubelet[2505]: E0706 23:52:38.540456 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:38.540724 kubelet[2505]: E0706 23:52:38.540709 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:38.540754 kubelet[2505]: W0706 23:52:38.540723 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:38.540754 kubelet[2505]: E0706 23:52:38.540735 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:38.540969 kubelet[2505]: E0706 23:52:38.540958 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:38.541007 kubelet[2505]: W0706 23:52:38.540972 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:38.541007 kubelet[2505]: E0706 23:52:38.540980 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:52:38.541189 kubelet[2505]: E0706 23:52:38.541179 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:52:38.541189 kubelet[2505]: W0706 23:52:38.541189 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:52:38.541241 kubelet[2505]: E0706 23:52:38.541198 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:52:39.000655 containerd[1462]: time="2025-07-06T23:52:38.999908327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:39.000655 containerd[1462]: time="2025-07-06T23:52:39.000603587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 6 23:52:39.001333 containerd[1462]: time="2025-07-06T23:52:39.001304134Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:39.003658 containerd[1462]: time="2025-07-06T23:52:39.003576786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:39.004481 containerd[1462]: time="2025-07-06T23:52:39.004445515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.012299501s" Jul 6 23:52:39.004598 containerd[1462]: time="2025-07-06T23:52:39.004582298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 6 23:52:39.011250 containerd[1462]: time="2025-07-06T23:52:39.011196248Z" level=info msg="CreateContainer within sandbox \"628c2f0078620d14d46d6ac117923c5b97c3ca878bba1ead3ef250cfff20b3d9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:52:39.058889 containerd[1462]: time="2025-07-06T23:52:39.058836137Z" level=info msg="CreateContainer within sandbox \"628c2f0078620d14d46d6ac117923c5b97c3ca878bba1ead3ef250cfff20b3d9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f158c35431ce72bf94c104a92d68326a1031454ace0f665dd1e760c5d154d6e2\"" Jul 6 23:52:39.061325 containerd[1462]: time="2025-07-06T23:52:39.061281473Z" level=info msg="StartContainer for \"f158c35431ce72bf94c104a92d68326a1031454ace0f665dd1e760c5d154d6e2\"" Jul 6 23:52:39.111446 systemd[1]: Started cri-containerd-f158c35431ce72bf94c104a92d68326a1031454ace0f665dd1e760c5d154d6e2.scope - libcontainer container f158c35431ce72bf94c104a92d68326a1031454ace0f665dd1e760c5d154d6e2. Jul 6 23:52:39.153160 containerd[1462]: time="2025-07-06T23:52:39.153117364Z" level=info msg="StartContainer for \"f158c35431ce72bf94c104a92d68326a1031454ace0f665dd1e760c5d154d6e2\" returns successfully" Jul 6 23:52:39.178771 systemd[1]: cri-containerd-f158c35431ce72bf94c104a92d68326a1031454ace0f665dd1e760c5d154d6e2.scope: Deactivated successfully. Jul 6 23:52:39.206930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f158c35431ce72bf94c104a92d68326a1031454ace0f665dd1e760c5d154d6e2-rootfs.mount: Deactivated successfully. 
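The driver-call flood above is the kubelet's FlexVolume probe loop: on every sync it walks the plugin directory /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds entry, and invokes its uds binary with the `init` command, expecting a JSON status object on stdout. The binary is not installed yet, so stdout is empty and JSON unmarshaling fails with "unexpected end of JSON input"; the pod2daemon-flexvol image pulled just above ships the driver that eventually populates that directory. A rough sketch of the call-out contract, assuming the standard FlexVolume driver protocol (this stand-in is hypothetical, not Calico's actual uds driver):

```python
#!/usr/bin/env python3
# Hypothetical FlexVolume driver stub illustrating the call-out protocol:
# the kubelet runs `<driver> init` and parses whatever the driver prints
# to stdout as JSON. Printing nothing reproduces the "unexpected end of
# JSON input" errors seen in the journal above.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # capabilities.attach=False tells the kubelet this driver has no
        # separate attach/detach phase.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    # Any verb this stub does not implement is reported as unsupported.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```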
Jul 6 23:52:39.263897 containerd[1462]: time="2025-07-06T23:52:39.231769281Z" level=info msg="shim disconnected" id=f158c35431ce72bf94c104a92d68326a1031454ace0f665dd1e760c5d154d6e2 namespace=k8s.io Jul 6 23:52:39.263897 containerd[1462]: time="2025-07-06T23:52:39.263818958Z" level=warning msg="cleaning up after shim disconnected" id=f158c35431ce72bf94c104a92d68326a1031454ace0f665dd1e760c5d154d6e2 namespace=k8s.io Jul 6 23:52:39.263897 containerd[1462]: time="2025-07-06T23:52:39.263836545Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:52:39.496574 kubelet[2505]: E0706 23:52:39.496438 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:39.503656 containerd[1462]: time="2025-07-06T23:52:39.503187451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:52:40.358938 kubelet[2505]: E0706 23:52:40.358859 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fvz6x" podUID="f74bf16c-5f59-4774-ac59-c82b3e42ab4b" Jul 6 23:52:42.359161 kubelet[2505]: E0706 23:52:42.358010 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fvz6x" podUID="f74bf16c-5f59-4774-ac59-c82b3e42ab4b" Jul 6 23:52:42.806159 containerd[1462]: time="2025-07-06T23:52:42.806111697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:42.807645 containerd[1462]: time="2025-07-06T23:52:42.807577546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 6 23:52:42.809117 containerd[1462]: time="2025-07-06T23:52:42.808276087Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:42.809953 containerd[1462]: time="2025-07-06T23:52:42.809907509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:42.810837 containerd[1462]: time="2025-07-06T23:52:42.810689455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.307450282s" Jul 6 23:52:42.810837 containerd[1462]: time="2025-07-06T23:52:42.810737995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 6 23:52:42.815681 containerd[1462]: time="2025-07-06T23:52:42.815647643Z" level=info msg="CreateContainer within sandbox \"628c2f0078620d14d46d6ac117923c5b97c3ca878bba1ead3ef250cfff20b3d9\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:52:42.830988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789759639.mount: Deactivated successfully. Jul 6 23:52:42.838995 containerd[1462]: time="2025-07-06T23:52:42.838861036Z" level=info msg="CreateContainer within sandbox \"628c2f0078620d14d46d6ac117923c5b97c3ca878bba1ead3ef250cfff20b3d9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c08619f9c62691f664068eb1de94b1da7e42910f81aeee84a24a1ab16a037bd7\"" Jul 6 23:52:42.839799 containerd[1462]: time="2025-07-06T23:52:42.839760441Z" level=info msg="StartContainer for \"c08619f9c62691f664068eb1de94b1da7e42910f81aeee84a24a1ab16a037bd7\"" Jul 6 23:52:42.903314 systemd[1]: Started cri-containerd-c08619f9c62691f664068eb1de94b1da7e42910f81aeee84a24a1ab16a037bd7.scope - libcontainer container c08619f9c62691f664068eb1de94b1da7e42910f81aeee84a24a1ab16a037bd7. Jul 6 23:52:42.942750 containerd[1462]: time="2025-07-06T23:52:42.942675473Z" level=info msg="StartContainer for \"c08619f9c62691f664068eb1de94b1da7e42910f81aeee84a24a1ab16a037bd7\" returns successfully" Jul 6 23:52:43.606577 systemd[1]: cri-containerd-c08619f9c62691f664068eb1de94b1da7e42910f81aeee84a24a1ab16a037bd7.scope: Deactivated successfully. Jul 6 23:52:43.623833 kubelet[2505]: I0706 23:52:43.623504 2505 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:52:43.648420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c08619f9c62691f664068eb1de94b1da7e42910f81aeee84a24a1ab16a037bd7-rootfs.mount: Deactivated successfully. Jul 6 23:52:43.650847 containerd[1462]: time="2025-07-06T23:52:43.650765857Z" level=info msg="shim disconnected" id=c08619f9c62691f664068eb1de94b1da7e42910f81aeee84a24a1ab16a037bd7 namespace=k8s.io Jul 6 23:52:43.650847 containerd[1462]: time="2025-07-06T23:52:43.650825748Z" level=warning msg="cleaning up after shim disconnected" id=c08619f9c62691f664068eb1de94b1da7e42910f81aeee84a24a1ab16a037bd7 namespace=k8s.io Jul 6 23:52:43.650847 containerd[1462]: time="2025-07-06T23:52:43.650835751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:52:43.685146 systemd[1]: Created slice kubepods-burstable-podbc3e67e5_3ae0_4db6_94bf_18460361bfcb.slice - libcontainer container kubepods-burstable-podbc3e67e5_3ae0_4db6_94bf_18460361bfcb.slice. Jul 6 23:52:43.705580 systemd[1]: Created slice kubepods-besteffort-pod0ccbb99e_87ff_4a32_b2b1_2a60da50c8f7.slice - libcontainer container kubepods-besteffort-pod0ccbb99e_87ff_4a32_b2b1_2a60da50c8f7.slice. Jul 6 23:52:43.729732 systemd[1]: Created slice kubepods-burstable-pod47d0f9e3_1b02_4028_8726_1149e0c33163.slice - libcontainer container kubepods-burstable-pod47d0f9e3_1b02_4028_8726_1149e0c33163.slice. Jul 6 23:52:43.741572 systemd[1]: Created slice kubepods-besteffort-poddd2dd146_3eba_44dc_85ba_a095b70face3.slice - libcontainer container kubepods-besteffort-poddd2dd146_3eba_44dc_85ba_a095b70face3.slice. Jul 6 23:52:43.753728 systemd[1]: Created slice kubepods-besteffort-podafca6f34_d92a_4a5d_ad63_7c0fa937928f.slice - libcontainer container kubepods-besteffort-podafca6f34_d92a_4a5d_ad63_7c0fa937928f.slice. Jul 6 23:52:43.766490 systemd[1]: Created slice kubepods-besteffort-pod70ac0620_95a8_4173_8948_facb2c4a4406.slice - libcontainer container kubepods-besteffort-pod70ac0620_95a8_4173_8948_facb2c4a4406.slice. 
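The kubepods-*.slice units created above follow the systemd cgroup driver's naming convention: QoS class ("besteffort", "burstable") plus the pod UID, with the UID's dashes rewritten as underscores because systemd reserves "-" as the slice hierarchy separator. A small sketch of that mapping, with the convention inferred from and checked against the unit names in this journal:

```python
# Derive a pod's systemd slice name from its QoS class and UID, following
# the naming visible in the "Created slice" journal entries above.
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    # systemd uses "-" to nest slices, so the UID's dashes are escaped.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# Reproduces the slice created for the whisker pod above:
assert pod_slice_name("besteffort", "0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7") \
    == "kubepods-besteffort-pod0ccbb99e_87ff_4a32_b2b1_2a60da50c8f7.slice"
```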
Jul 6 23:52:43.777391 kubelet[2505]: I0706 23:52:43.777344 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47d0f9e3-1b02-4028-8726-1149e0c33163-config-volume\") pod \"coredns-674b8bbfcf-xwpvc\" (UID: \"47d0f9e3-1b02-4028-8726-1149e0c33163\") " pod="kube-system/coredns-674b8bbfcf-xwpvc" Jul 6 23:52:43.778536 kubelet[2505]: I0706 23:52:43.778501 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6k6m\" (UniqueName: \"kubernetes.io/projected/dda5be44-0474-421e-81ed-886448d2d1f0-kube-api-access-p6k6m\") pod \"goldmane-768f4c5c69-w2rcs\" (UID: \"dda5be44-0474-421e-81ed-886448d2d1f0\") " pod="calico-system/goldmane-768f4c5c69-w2rcs" Jul 6 23:52:43.779408 systemd[1]: Created slice kubepods-besteffort-poddda5be44_0474_421e_81ed_886448d2d1f0.slice - libcontainer container kubepods-besteffort-poddda5be44_0474_421e_81ed_886448d2d1f0.slice. Jul 6 23:52:43.780014 kubelet[2505]: I0706 23:52:43.779965 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc3e67e5-3ae0-4db6-94bf-18460361bfcb-config-volume\") pod \"coredns-674b8bbfcf-vfg28\" (UID: \"bc3e67e5-3ae0-4db6-94bf-18460361bfcb\") " pod="kube-system/coredns-674b8bbfcf-vfg28" Jul 6 23:52:43.780228 kubelet[2505]: I0706 23:52:43.780192 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd2dd146-3eba-44dc-85ba-a095b70face3-tigera-ca-bundle\") pod \"calico-kube-controllers-cf87b6475-95fh9\" (UID: \"dd2dd146-3eba-44dc-85ba-a095b70face3\") " pod="calico-system/calico-kube-controllers-cf87b6475-95fh9" Jul 6 23:52:43.780837 kubelet[2505]: I0706 23:52:43.780802 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-456vj\" (UniqueName: \"kubernetes.io/projected/dd2dd146-3eba-44dc-85ba-a095b70face3-kube-api-access-456vj\") pod \"calico-kube-controllers-cf87b6475-95fh9\" (UID: \"dd2dd146-3eba-44dc-85ba-a095b70face3\") " pod="calico-system/calico-kube-controllers-cf87b6475-95fh9" Jul 6 23:52:43.780941 kubelet[2505]: I0706 23:52:43.780842 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47cm8\" (UniqueName: \"kubernetes.io/projected/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-kube-api-access-47cm8\") pod \"whisker-6744cfb56f-t5f4t\" (UID: \"0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7\") " pod="calico-system/whisker-6744cfb56f-t5f4t" Jul 6 23:52:43.780941 kubelet[2505]: I0706 23:52:43.780869 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-whisker-backend-key-pair\") pod \"whisker-6744cfb56f-t5f4t\" (UID: \"0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7\") " pod="calico-system/whisker-6744cfb56f-t5f4t" Jul 6 23:52:43.780941 kubelet[2505]: I0706 23:52:43.780887 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqw7m\" (UniqueName: \"kubernetes.io/projected/afca6f34-d92a-4a5d-ad63-7c0fa937928f-kube-api-access-cqw7m\") pod \"calico-apiserver-55f4bc6d54-zlkgl\" (UID: \"afca6f34-d92a-4a5d-ad63-7c0fa937928f\") " 
pod="calico-apiserver/calico-apiserver-55f4bc6d54-zlkgl" Jul 6 23:52:43.780941 kubelet[2505]: I0706 23:52:43.780913 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gbv\" (UniqueName: \"kubernetes.io/projected/bc3e67e5-3ae0-4db6-94bf-18460361bfcb-kube-api-access-h4gbv\") pod \"coredns-674b8bbfcf-vfg28\" (UID: \"bc3e67e5-3ae0-4db6-94bf-18460361bfcb\") " pod="kube-system/coredns-674b8bbfcf-vfg28" Jul 6 23:52:43.781187 kubelet[2505]: I0706 23:52:43.780942 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dda5be44-0474-421e-81ed-886448d2d1f0-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-w2rcs\" (UID: \"dda5be44-0474-421e-81ed-886448d2d1f0\") " pod="calico-system/goldmane-768f4c5c69-w2rcs" Jul 6 23:52:43.781187 kubelet[2505]: I0706 23:52:43.780970 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsfwk\" (UniqueName: \"kubernetes.io/projected/70ac0620-95a8-4173-8948-facb2c4a4406-kube-api-access-zsfwk\") pod \"calico-apiserver-55f4bc6d54-f2gb7\" (UID: \"70ac0620-95a8-4173-8948-facb2c4a4406\") " pod="calico-apiserver/calico-apiserver-55f4bc6d54-f2gb7" Jul 6 23:52:43.781187 kubelet[2505]: I0706 23:52:43.780998 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbdgp\" (UniqueName: \"kubernetes.io/projected/47d0f9e3-1b02-4028-8726-1149e0c33163-kube-api-access-jbdgp\") pod \"coredns-674b8bbfcf-xwpvc\" (UID: \"47d0f9e3-1b02-4028-8726-1149e0c33163\") " pod="kube-system/coredns-674b8bbfcf-xwpvc" Jul 6 23:52:43.781187 kubelet[2505]: I0706 23:52:43.781021 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-whisker-ca-bundle\") pod \"whisker-6744cfb56f-t5f4t\" (UID: \"0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7\") " pod="calico-system/whisker-6744cfb56f-t5f4t" Jul 6 23:52:43.781187 kubelet[2505]: I0706 23:52:43.781047 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dda5be44-0474-421e-81ed-886448d2d1f0-config\") pod \"goldmane-768f4c5c69-w2rcs\" (UID: \"dda5be44-0474-421e-81ed-886448d2d1f0\") " pod="calico-system/goldmane-768f4c5c69-w2rcs" Jul 6 23:52:43.783162 kubelet[2505]: I0706 23:52:43.781079 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/dda5be44-0474-421e-81ed-886448d2d1f0-goldmane-key-pair\") pod \"goldmane-768f4c5c69-w2rcs\" (UID: \"dda5be44-0474-421e-81ed-886448d2d1f0\") " pod="calico-system/goldmane-768f4c5c69-w2rcs" Jul 6 23:52:43.783669 kubelet[2505]: I0706 23:52:43.783648 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/70ac0620-95a8-4173-8948-facb2c4a4406-calico-apiserver-certs\") pod \"calico-apiserver-55f4bc6d54-f2gb7\" (UID: \"70ac0620-95a8-4173-8948-facb2c4a4406\") " pod="calico-apiserver/calico-apiserver-55f4bc6d54-f2gb7" Jul 6 23:52:43.783821 kubelet[2505]: I0706 23:52:43.783803 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/afca6f34-d92a-4a5d-ad63-7c0fa937928f-calico-apiserver-certs\") pod \"calico-apiserver-55f4bc6d54-zlkgl\" (UID: \"afca6f34-d92a-4a5d-ad63-7c0fa937928f\") " pod="calico-apiserver/calico-apiserver-55f4bc6d54-zlkgl" Jul 6 23:52:43.995014 kubelet[2505]: E0706 23:52:43.993112 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:43.995177 containerd[1462]: time="2025-07-06T23:52:43.993691837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vfg28,Uid:bc3e67e5-3ae0-4db6-94bf-18460361bfcb,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:44.024929 containerd[1462]: time="2025-07-06T23:52:44.024453252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6744cfb56f-t5f4t,Uid:0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:44.039661 kubelet[2505]: E0706 23:52:44.038967 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:44.041880 containerd[1462]: time="2025-07-06T23:52:44.040329095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwpvc,Uid:47d0f9e3-1b02-4028-8726-1149e0c33163,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:44.060531 containerd[1462]: time="2025-07-06T23:52:44.060477607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f4bc6d54-zlkgl,Uid:afca6f34-d92a-4a5d-ad63-7c0fa937928f,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:52:44.066424 containerd[1462]: time="2025-07-06T23:52:44.060871682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf87b6475-95fh9,Uid:dd2dd146-3eba-44dc-85ba-a095b70face3,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:44.107646 containerd[1462]: time="2025-07-06T23:52:44.107573676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-w2rcs,Uid:dda5be44-0474-421e-81ed-886448d2d1f0,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:44.107884 containerd[1462]: time="2025-07-06T23:52:44.107856408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f4bc6d54-f2gb7,Uid:70ac0620-95a8-4173-8948-facb2c4a4406,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:52:44.368300 systemd[1]: Created slice kubepods-besteffort-podf74bf16c_5f59_4774_ac59_c82b3e42ab4b.slice - libcontainer container kubepods-besteffort-podf74bf16c_5f59_4774_ac59_c82b3e42ab4b.slice. 
Jul 6 23:52:44.373679 containerd[1462]: time="2025-07-06T23:52:44.373205946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fvz6x,Uid:f74bf16c-5f59-4774-ac59-c82b3e42ab4b,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:44.417436 containerd[1462]: time="2025-07-06T23:52:44.417365611Z" level=error msg="Failed to destroy network for sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.443818 containerd[1462]: time="2025-07-06T23:52:44.443749025Z" level=error msg="encountered an error cleaning up failed sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.448766 containerd[1462]: time="2025-07-06T23:52:44.448004854Z" level=error msg="Failed to destroy network for sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.450356 containerd[1462]: time="2025-07-06T23:52:44.450191286Z" level=error msg="encountered an error cleaning up failed sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.452877 containerd[1462]: time="2025-07-06T23:52:44.452478823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vfg28,Uid:bc3e67e5-3ae0-4db6-94bf-18460361bfcb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.452877 containerd[1462]: time="2025-07-06T23:52:44.452577885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf87b6475-95fh9,Uid:dd2dd146-3eba-44dc-85ba-a095b70face3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.464365 containerd[1462]: time="2025-07-06T23:52:44.464112411Z" level=error msg="Failed to destroy network for sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.464532 kubelet[2505]: E0706 23:52:44.464225 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.464532 kubelet[2505]: E0706 23:52:44.464302 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vfg28" Jul 6 23:52:44.464532 kubelet[2505]: E0706 23:52:44.464225 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.464532 kubelet[2505]: E0706 23:52:44.464381 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cf87b6475-95fh9" Jul 6 23:52:44.464684 kubelet[2505]: E0706 23:52:44.464411 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cf87b6475-95fh9" Jul 6 23:52:44.464684 kubelet[2505]: E0706 23:52:44.464492 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cf87b6475-95fh9_calico-system(dd2dd146-3eba-44dc-85ba-a095b70face3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cf87b6475-95fh9_calico-system(dd2dd146-3eba-44dc-85ba-a095b70face3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cf87b6475-95fh9" podUID="dd2dd146-3eba-44dc-85ba-a095b70face3" Jul 6 23:52:44.465047 kubelet[2505]: E0706 23:52:44.464333 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-vfg28" Jul 6 23:52:44.465047 kubelet[2505]: E0706 23:52:44.464995 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vfg28_kube-system(bc3e67e5-3ae0-4db6-94bf-18460361bfcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vfg28_kube-system(bc3e67e5-3ae0-4db6-94bf-18460361bfcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vfg28" podUID="bc3e67e5-3ae0-4db6-94bf-18460361bfcb" Jul 6 23:52:44.468408 containerd[1462]: time="2025-07-06T23:52:44.466031137Z" level=error msg="encountered an error cleaning up failed sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.468980 containerd[1462]: time="2025-07-06T23:52:44.468452187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-w2rcs,Uid:dda5be44-0474-421e-81ed-886448d2d1f0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.470923 kubelet[2505]: E0706 23:52:44.468732 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.470923 kubelet[2505]: E0706 23:52:44.468794 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-w2rcs" Jul 6 23:52:44.470923 kubelet[2505]: E0706 23:52:44.468817 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-w2rcs" Jul 6 23:52:44.471079 kubelet[2505]: E0706 23:52:44.468878 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-w2rcs_calico-system(dda5be44-0474-421e-81ed-886448d2d1f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-768f4c5c69-w2rcs_calico-system(dda5be44-0474-421e-81ed-886448d2d1f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-w2rcs" podUID="dda5be44-0474-421e-81ed-886448d2d1f0" Jul 6 23:52:44.498447 containerd[1462]: time="2025-07-06T23:52:44.498394519Z" level=error msg="Failed to destroy network for sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.499273 containerd[1462]: time="2025-07-06T23:52:44.499222981Z" level=error msg="encountered an error cleaning up failed sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.499467 containerd[1462]: time="2025-07-06T23:52:44.499442584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f4bc6d54-zlkgl,Uid:afca6f34-d92a-4a5d-ad63-7c0fa937928f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.500123 kubelet[2505]: E0706 23:52:44.500076 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.500217 kubelet[2505]: E0706 23:52:44.500139 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55f4bc6d54-zlkgl" Jul 6 23:52:44.500217 kubelet[2505]: E0706 23:52:44.500166 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55f4bc6d54-zlkgl" Jul 6 23:52:44.500291 kubelet[2505]: E0706 23:52:44.500224 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-55f4bc6d54-zlkgl_calico-apiserver(afca6f34-d92a-4a5d-ad63-7c0fa937928f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55f4bc6d54-zlkgl_calico-apiserver(afca6f34-d92a-4a5d-ad63-7c0fa937928f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55f4bc6d54-zlkgl" podUID="afca6f34-d92a-4a5d-ad63-7c0fa937928f" Jul 6 23:52:44.504529 containerd[1462]: time="2025-07-06T23:52:44.504474951Z" level=error msg="Failed to destroy network for sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.505369 containerd[1462]: time="2025-07-06T23:52:44.505162519Z" level=error msg="encountered an error cleaning up failed sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.507146 containerd[1462]: time="2025-07-06T23:52:44.505429115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwpvc,Uid:47d0f9e3-1b02-4028-8726-1149e0c33163,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.507251 kubelet[2505]: E0706 23:52:44.505781 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.507251 kubelet[2505]: E0706 23:52:44.505857 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xwpvc" Jul 6 23:52:44.507251 kubelet[2505]: E0706 23:52:44.505891 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xwpvc" Jul 6 23:52:44.507415 kubelet[2505]: E0706 23:52:44.505982 2505 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xwpvc_kube-system(47d0f9e3-1b02-4028-8726-1149e0c33163)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xwpvc_kube-system(47d0f9e3-1b02-4028-8726-1149e0c33163)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xwpvc" podUID="47d0f9e3-1b02-4028-8726-1149e0c33163" Jul 6 23:52:44.513606 containerd[1462]: time="2025-07-06T23:52:44.513561442Z" level=error msg="Failed to destroy network for sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.514956 kubelet[2505]: I0706 23:52:44.514928 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Jul 6 23:52:44.516465 containerd[1462]: time="2025-07-06T23:52:44.515599985Z" level=error msg="encountered an error cleaning up failed sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.516639 containerd[1462]: time="2025-07-06T23:52:44.516500296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6744cfb56f-t5f4t,Uid:0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.517212 kubelet[2505]: E0706 23:52:44.516831 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.517212 kubelet[2505]: E0706 23:52:44.516874 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6744cfb56f-t5f4t" Jul 6 23:52:44.517212 kubelet[2505]: E0706 23:52:44.516894 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6744cfb56f-t5f4t" Jul 6 23:52:44.517402 kubelet[2505]: E0706 23:52:44.516946 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6744cfb56f-t5f4t_calico-system(0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6744cfb56f-t5f4t_calico-system(0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6744cfb56f-t5f4t" podUID="0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7" Jul 6 23:52:44.523185 kubelet[2505]: I0706 23:52:44.522317 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:52:44.523971 containerd[1462]: time="2025-07-06T23:52:44.523766737Z" level=info msg="StopPodSandbox for \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\"" Jul 6 23:52:44.534128 containerd[1462]: time="2025-07-06T23:52:44.533923382Z" level=info msg="StopPodSandbox for \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\"" Jul 6 23:52:44.534263 containerd[1462]: time="2025-07-06T23:52:44.534197407Z" level=info msg="Ensure that sandbox afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d in task-service has been cleanup successfully" Jul 6 23:52:44.535092 containerd[1462]: time="2025-07-06T23:52:44.534809982Z" level=info msg="Ensure that sandbox bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9 in task-service has been cleanup successfully" Jul 6 23:52:44.537222 containerd[1462]: time="2025-07-06T23:52:44.537184190Z" level=error msg="Failed to destroy network for sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.538334 containerd[1462]: time="2025-07-06T23:52:44.538275125Z" level=error msg="encountered an error cleaning up failed sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.538732 containerd[1462]: time="2025-07-06T23:52:44.538553809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f4bc6d54-f2gb7,Uid:70ac0620-95a8-4173-8948-facb2c4a4406,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.539923 kubelet[2505]: E0706 23:52:44.539815 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.539923 kubelet[2505]: E0706 23:52:44.539873 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55f4bc6d54-f2gb7" Jul 6 23:52:44.539923 kubelet[2505]: E0706 23:52:44.539895 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55f4bc6d54-f2gb7" Jul 6 23:52:44.540485 kubelet[2505]: E0706 23:52:44.539945 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55f4bc6d54-f2gb7_calico-apiserver(70ac0620-95a8-4173-8948-facb2c4a4406)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55f4bc6d54-f2gb7_calico-apiserver(70ac0620-95a8-4173-8948-facb2c4a4406)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55f4bc6d54-f2gb7" podUID="70ac0620-95a8-4173-8948-facb2c4a4406" Jul 6 23:52:44.554084 kubelet[2505]: I0706 23:52:44.553827 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:52:44.557628 containerd[1462]: time="2025-07-06T23:52:44.555999510Z" level=info msg="StopPodSandbox for \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\"" Jul 6 23:52:44.557628 containerd[1462]: time="2025-07-06T23:52:44.557324885Z" level=info msg="Ensure that sandbox 4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2 in task-service has been cleanup successfully" Jul 6 23:52:44.562327 containerd[1462]: time="2025-07-06T23:52:44.562283348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:52:44.563659 kubelet[2505]: I0706 23:52:44.563630 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Jul 6 23:52:44.571183 containerd[1462]: time="2025-07-06T23:52:44.571141767Z" level=info msg="StopPodSandbox for \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\"" Jul 6 23:52:44.571550 containerd[1462]: time="2025-07-06T23:52:44.571523885Z" level=info msg="Ensure that sandbox ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c in task-service has been cleanup successfully" Jul 
6 23:52:44.580573 kubelet[2505]: I0706 23:52:44.580437 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Jul 6 23:52:44.582089 containerd[1462]: time="2025-07-06T23:52:44.581600112Z" level=info msg="StopPodSandbox for \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\"" Jul 6 23:52:44.585797 containerd[1462]: time="2025-07-06T23:52:44.585739051Z" level=info msg="Ensure that sandbox 851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43 in task-service has been cleanup successfully" Jul 6 23:52:44.667121 containerd[1462]: time="2025-07-06T23:52:44.666936197Z" level=error msg="StopPodSandbox for \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\" failed" error="failed to destroy network for sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.667260 kubelet[2505]: E0706 23:52:44.667192 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Jul 6 23:52:44.668576 kubelet[2505]: E0706 23:52:44.667252 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43"} Jul 6 23:52:44.668576 kubelet[2505]: E0706 23:52:44.667324 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc3e67e5-3ae0-4db6-94bf-18460361bfcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:44.668576 kubelet[2505]: E0706 23:52:44.667351 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc3e67e5-3ae0-4db6-94bf-18460361bfcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vfg28" podUID="bc3e67e5-3ae0-4db6-94bf-18460361bfcb" Jul 6 23:52:44.671421 containerd[1462]: time="2025-07-06T23:52:44.671050519Z" level=error msg="StopPodSandbox for \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\" failed" error="failed to destroy network for sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 
23:52:44.672465 kubelet[2505]: E0706 23:52:44.671814 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:52:44.672465 kubelet[2505]: E0706 23:52:44.671872 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2"} Jul 6 23:52:44.672465 kubelet[2505]: E0706 23:52:44.671906 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd2dd146-3eba-44dc-85ba-a095b70face3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:44.672465 kubelet[2505]: E0706 23:52:44.671929 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd2dd146-3eba-44dc-85ba-a095b70face3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cf87b6475-95fh9" podUID="dd2dd146-3eba-44dc-85ba-a095b70face3" Jul 6 23:52:44.672732 containerd[1462]: time="2025-07-06T23:52:44.671575788Z" level=error msg="StopPodSandbox for \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\" failed" error="failed to destroy network for sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.672776 kubelet[2505]: E0706 23:52:44.672602 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:52:44.672776 kubelet[2505]: E0706 23:52:44.672644 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9"} Jul 6 23:52:44.672776 kubelet[2505]: E0706 23:52:44.672675 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"afca6f34-d92a-4a5d-ad63-7c0fa937928f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:44.672776 kubelet[2505]: E0706 23:52:44.672695 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"afca6f34-d92a-4a5d-ad63-7c0fa937928f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55f4bc6d54-zlkgl" podUID="afca6f34-d92a-4a5d-ad63-7c0fa937928f" Jul 6 23:52:44.699611 containerd[1462]: time="2025-07-06T23:52:44.697094835Z" level=error msg="StopPodSandbox for \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\" failed" error="failed to destroy network for sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.700138 kubelet[2505]: E0706 23:52:44.699929 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Jul 6 23:52:44.701787 kubelet[2505]: E0706 23:52:44.700511 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d"} Jul 6 23:52:44.701787 kubelet[2505]: E0706 23:52:44.700568 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dda5be44-0474-421e-81ed-886448d2d1f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:44.701787 kubelet[2505]: E0706 23:52:44.700716 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dda5be44-0474-421e-81ed-886448d2d1f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-w2rcs" podUID="dda5be44-0474-421e-81ed-886448d2d1f0" Jul 6 23:52:44.705465 containerd[1462]: time="2025-07-06T23:52:44.705422452Z" level=error msg="Failed to destroy network for sandbox 
\"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.705964 containerd[1462]: time="2025-07-06T23:52:44.705935246Z" level=error msg="encountered an error cleaning up failed sandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.706136 containerd[1462]: time="2025-07-06T23:52:44.706110372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fvz6x,Uid:f74bf16c-5f59-4774-ac59-c82b3e42ab4b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.706250 containerd[1462]: time="2025-07-06T23:52:44.705655332Z" level=error msg="StopPodSandbox for \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\" failed" error="failed to destroy network for sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.706531 kubelet[2505]: E0706 23:52:44.706486 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Jul 6 23:52:44.706606 kubelet[2505]: E0706 23:52:44.706539 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c"} Jul 6 23:52:44.706606 kubelet[2505]: E0706 23:52:44.706573 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47d0f9e3-1b02-4028-8726-1149e0c33163\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:44.706719 kubelet[2505]: E0706 23:52:44.706599 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47d0f9e3-1b02-4028-8726-1149e0c33163\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xwpvc" podUID="47d0f9e3-1b02-4028-8726-1149e0c33163" Jul 6 23:52:44.706719 kubelet[2505]: E0706 23:52:44.706632 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:44.706719 kubelet[2505]: E0706 23:52:44.706652 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fvz6x" Jul 6 23:52:44.706719 kubelet[2505]: E0706 23:52:44.706682 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fvz6x" Jul 6 23:52:44.706869 kubelet[2505]: E0706 23:52:44.706731 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fvz6x_calico-system(f74bf16c-5f59-4774-ac59-c82b3e42ab4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fvz6x_calico-system(f74bf16c-5f59-4774-ac59-c82b3e42ab4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fvz6x" podUID="f74bf16c-5f59-4774-ac59-c82b3e42ab4b" Jul 6 23:52:44.918320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43-shm.mount: Deactivated successfully. 
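Every sandbox failure above shares one root cause: the Calico CNI plugin stats /var/lib/calico/nodename before any add or delete, and that file is written by the calico/node container only after it has started and bind-mounted /var/lib/calico/ from the host (exactly what the error text says to check). The calico/node image is still being pulled at this point (the PullImage line at 23:52:44.562 above), so every RunPodSandbox and StopPodSandbox call fails identically until it comes up. A minimal sketch of the precondition involved, assuming the standard host path; this is a hypothetical helper, not Calico's actual code:

    # The file behind every "stat /var/lib/calico/nodename" error above:
    # it exists (and names the node) only once the calico/node container
    # has mounted /var/lib/calico/ from the host and written its name.
    from pathlib import Path

    NODENAME = Path("/var/lib/calico/nodename")

    def calico_node_ready() -> bool:
        try:
            return NODENAME.read_text().strip() != ""
        except FileNotFoundError:
            return False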
Jul 6 23:52:45.587384 kubelet[2505]: I0706 23:52:45.586316 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Jul 6 23:52:45.587646 containerd[1462]: time="2025-07-06T23:52:45.587567900Z" level=info msg="StopPodSandbox for \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\"" Jul 6 23:52:45.589403 containerd[1462]: time="2025-07-06T23:52:45.588276240Z" level=info msg="Ensure that sandbox 63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765 in task-service has been cleanup successfully" Jul 6 23:52:45.589804 kubelet[2505]: I0706 23:52:45.589777 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Jul 6 23:52:45.591398 containerd[1462]: time="2025-07-06T23:52:45.590615202Z" level=info msg="StopPodSandbox for \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\"" Jul 6 23:52:45.592901 kubelet[2505]: I0706 23:52:45.592398 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:52:45.599946 containerd[1462]: time="2025-07-06T23:52:45.598778113Z" level=info msg="Ensure that sandbox 26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0 in task-service has been cleanup successfully" Jul 6 23:52:45.607829 containerd[1462]: time="2025-07-06T23:52:45.607471222Z" level=info msg="StopPodSandbox for \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\"" Jul 6 23:52:45.608260 containerd[1462]: time="2025-07-06T23:52:45.608200204Z" level=info msg="Ensure that sandbox c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c in task-service has been cleanup successfully" Jul 6 23:52:45.662098 containerd[1462]: time="2025-07-06T23:52:45.661991848Z" level=error msg="StopPodSandbox for \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\" failed" error="failed to destroy network for sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:45.663119 kubelet[2505]: E0706 23:52:45.662362 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:52:45.663119 kubelet[2505]: E0706 23:52:45.662439 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c"} Jul 6 23:52:45.663119 kubelet[2505]: E0706 23:52:45.662489 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:45.663119 kubelet[2505]: E0706 23:52:45.662525 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6744cfb56f-t5f4t" podUID="0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7" Jul 6 23:52:45.663778 containerd[1462]: time="2025-07-06T23:52:45.663357757Z" level=error msg="StopPodSandbox for \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\" failed" error="failed to destroy network for sandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:45.664193 kubelet[2505]: E0706 23:52:45.663981 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Jul 6 23:52:45.664193 kubelet[2505]: E0706 23:52:45.664038 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0"} Jul 6 23:52:45.664193 kubelet[2505]: E0706 23:52:45.664121 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f74bf16c-5f59-4774-ac59-c82b3e42ab4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:45.664193 kubelet[2505]: E0706 23:52:45.664153 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f74bf16c-5f59-4774-ac59-c82b3e42ab4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fvz6x" podUID="f74bf16c-5f59-4774-ac59-c82b3e42ab4b" Jul 6 23:52:45.668939 containerd[1462]: time="2025-07-06T23:52:45.668804530Z" level=error msg="StopPodSandbox for \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\" failed" error="failed to destroy network for sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:52:45.669435 kubelet[2505]: E0706 23:52:45.669265 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Jul 6 23:52:45.669435 kubelet[2505]: E0706 23:52:45.669340 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765"} Jul 6 23:52:45.669435 kubelet[2505]: E0706 23:52:45.669374 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"70ac0620-95a8-4173-8948-facb2c4a4406\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:52:45.669435 kubelet[2505]: E0706 23:52:45.669399 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"70ac0620-95a8-4173-8948-facb2c4a4406\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55f4bc6d54-f2gb7" podUID="70ac0620-95a8-4173-8948-facb2c4a4406" Jul 6 23:52:51.587637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210852305.mount: Deactivated successfully. 
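The 23:52:45 wave repeats the 23:52:44 failures for the same sandboxes (c6cbd0e9…, 26f7aa1f…, 63ea30bd…): each "Error syncing pod, skipping" line requeues the pod, and the kubelet retries the sync with backoff until the CNI plugin can succeed. A schematic of that retry shape, not kubelet's actual implementation, which matches why the same sandbox IDs reappear at 23:52:44, 23:52:45, and again at 23:52:57 below:

    # Schematic retry-with-backoff mirroring the cadence in the log:
    # fail, requeue, wait longer, try again, until the sync succeeds.
    import time

    def sync_pod_with_backoff(sync, initial=1.0, cap=10.0):
        delay = initial
        while True:
            try:
                sync()                      # RunPodSandbox / StopPodSandbox
                return
            except Exception:
                time.sleep(delay)           # "Error syncing pod, skipping"
                delay = min(delay * 2, cap)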
Jul 6 23:52:51.683649 containerd[1462]: time="2025-07-06T23:52:51.677756127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:51.685475 containerd[1462]: time="2025-07-06T23:52:51.681397737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 6 23:52:51.688555 containerd[1462]: time="2025-07-06T23:52:51.687173945Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:51.688555 containerd[1462]: time="2025-07-06T23:52:51.687737723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:52:51.694145 containerd[1462]: time="2025-07-06T23:52:51.694093153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.127866352s" Jul 6 23:52:51.694145 containerd[1462]: time="2025-07-06T23:52:51.694141971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 6 23:52:51.751209 containerd[1462]: time="2025-07-06T23:52:51.751164953Z" level=info msg="CreateContainer within sandbox \"628c2f0078620d14d46d6ac117923c5b97c3ca878bba1ead3ef250cfff20b3d9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:52:51.881988 containerd[1462]: time="2025-07-06T23:52:51.881878169Z" level=info msg="CreateContainer within sandbox \"628c2f0078620d14d46d6ac117923c5b97c3ca878bba1ead3ef250cfff20b3d9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"40b29385339ace52b97a8e17e8de91c40a7e2e4fbe8f6cf2676aaae110d125d0\"" Jul 6 23:52:51.891812 containerd[1462]: time="2025-07-06T23:52:51.891753848Z" level=info msg="StartContainer for \"40b29385339ace52b97a8e17e8de91c40a7e2e4fbe8f6cf2676aaae110d125d0\"" Jul 6 23:52:52.046734 systemd[1]: Started cri-containerd-40b29385339ace52b97a8e17e8de91c40a7e2e4fbe8f6cf2676aaae110d125d0.scope - libcontainer container 40b29385339ace52b97a8e17e8de91c40a7e2e4fbe8f6cf2676aaae110d125d0. Jul 6 23:52:52.092713 containerd[1462]: time="2025-07-06T23:52:52.092495934Z" level=info msg="StartContainer for \"40b29385339ace52b97a8e17e8de91c40a7e2e4fbe8f6cf2676aaae110d125d0\" returns successfully" Jul 6 23:52:52.240283 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:52:52.240438 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
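The reported pull time can be cross-checked against the log's own timestamps: PullImage for ghcr.io/flatcar/calico/node:v3.30.2 was logged at 23:52:44.562 above and the Pulled line at 23:52:51.694, about 7.13 s later, consistent with the reported "7.127866352s" (the small difference is the gap between the PullImage log line and the transfer actually starting). A quick check, with the fractional seconds truncated to microseconds since datetime does not parse nanoseconds:

    # Cross-check of the "in 7.127866352s" pull duration from the two
    # containerd timestamps above (nanoseconds truncated to microseconds).
    from datetime import datetime

    started = datetime.fromisoformat("2025-07-06T23:52:44.562283")
    finished = datetime.fromisoformat("2025-07-06T23:52:51.694093")
    print(finished - started)   # 0:00:07.131810, ~7.13 s as reported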
Jul 6 23:52:52.481761 containerd[1462]: time="2025-07-06T23:52:52.481719077Z" level=info msg="StopPodSandbox for \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\"" Jul 6 23:52:52.706234 kubelet[2505]: I0706 23:52:52.690618 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fl6s6" podStartSLOduration=1.45598693 podStartE2EDuration="18.666578724s" podCreationTimestamp="2025-07-06 23:52:34 +0000 UTC" firstStartedPulling="2025-07-06 23:52:34.492510357 +0000 UTC m=+21.298709930" lastFinishedPulling="2025-07-06 23:52:51.703102156 +0000 UTC m=+38.509301724" observedRunningTime="2025-07-06 23:52:52.662979213 +0000 UTC m=+39.469178796" watchObservedRunningTime="2025-07-06 23:52:52.666578724 +0000 UTC m=+39.472778306" Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.577 [INFO][3777] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.578 [INFO][3777] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" iface="eth0" netns="/var/run/netns/cni-77e2a7a6-2b86-239a-e463-1cf35d66edc3" Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.579 [INFO][3777] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" iface="eth0" netns="/var/run/netns/cni-77e2a7a6-2b86-239a-e463-1cf35d66edc3" Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.580 [INFO][3777] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" iface="eth0" netns="/var/run/netns/cni-77e2a7a6-2b86-239a-e463-1cf35d66edc3" Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.580 [INFO][3777] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.580 [INFO][3777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.822 [INFO][3785] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" HandleID="k8s-pod-network.c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--6744cfb56f--t5f4t-eth0" Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.825 [INFO][3785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.825 [INFO][3785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.836 [WARNING][3785] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" HandleID="k8s-pod-network.c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--6744cfb56f--t5f4t-eth0" Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.836 [INFO][3785] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" HandleID="k8s-pod-network.c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--6744cfb56f--t5f4t-eth0" Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.839 [INFO][3785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:52:52.845257 containerd[1462]: 2025-07-06 23:52:52.842 [INFO][3777] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:52:52.849394 containerd[1462]: time="2025-07-06T23:52:52.847244257Z" level=info msg="TearDown network for sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\" successfully" Jul 6 23:52:52.849394 containerd[1462]: time="2025-07-06T23:52:52.847285447Z" level=info msg="StopPodSandbox for \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\" returns successfully" Jul 6 23:52:52.849498 systemd[1]: run-netns-cni\x2d77e2a7a6\x2d2b86\x2d239a\x2de463\x2d1cf35d66edc3.mount: Deactivated successfully. Jul 6 23:52:52.958950 kubelet[2505]: I0706 23:52:52.958804 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-whisker-backend-key-pair\") pod \"0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7\" (UID: \"0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7\") " Jul 6 23:52:52.958950 kubelet[2505]: I0706 23:52:52.958866 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47cm8\" (UniqueName: \"kubernetes.io/projected/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-kube-api-access-47cm8\") pod \"0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7\" (UID: \"0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7\") " Jul 6 23:52:52.958950 kubelet[2505]: I0706 23:52:52.958886 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-whisker-ca-bundle\") pod \"0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7\" (UID: \"0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7\") " Jul 6 23:52:52.967954 systemd[1]: var-lib-kubelet-pods-0ccbb99e\x2d87ff\x2d4a32\x2db2b1\x2d2a60da50c8f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d47cm8.mount: Deactivated successfully. Jul 6 23:52:52.969169 systemd[1]: var-lib-kubelet-pods-0ccbb99e\x2d87ff\x2d4a32\x2db2b1\x2d2a60da50c8f7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:52:52.972170 kubelet[2505]: I0706 23:52:52.969696 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7" (UID: "0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:52:52.972170 kubelet[2505]: I0706 23:52:52.966401 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-kube-api-access-47cm8" (OuterVolumeSpecName: "kube-api-access-47cm8") pod "0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7" (UID: "0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7"). InnerVolumeSpecName "kube-api-access-47cm8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:52:52.972170 kubelet[2505]: I0706 23:52:52.970975 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7" (UID: "0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:52:53.059836 kubelet[2505]: I0706 23:52:53.059779 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-whisker-backend-key-pair\") on node \"ci-4081.3.4-d-7537ff12ef\" DevicePath \"\"" Jul 6 23:52:53.059836 kubelet[2505]: I0706 23:52:53.059821 2505 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-47cm8\" (UniqueName: \"kubernetes.io/projected/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-kube-api-access-47cm8\") on node \"ci-4081.3.4-d-7537ff12ef\" DevicePath \"\"" Jul 6 23:52:53.059836 kubelet[2505]: I0706 23:52:53.059832 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7-whisker-ca-bundle\") on node \"ci-4081.3.4-d-7537ff12ef\" DevicePath \"\"" Jul 6 23:52:53.378991 systemd[1]: Removed slice kubepods-besteffort-pod0ccbb99e_87ff_4a32_b2b1_2a60da50c8f7.slice - libcontainer container kubepods-besteffort-pod0ccbb99e_87ff_4a32_b2b1_2a60da50c8f7.slice. 
Jul 6 23:52:53.647929 kubelet[2505]: I0706 23:52:53.647798 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:52:53.768842 kubelet[2505]: I0706 23:52:53.767630 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4np5n\" (UniqueName: \"kubernetes.io/projected/2cffe33d-e536-47ae-8928-ef98b16d39d0-kube-api-access-4np5n\") pod \"whisker-769f4b899f-n7zhk\" (UID: \"2cffe33d-e536-47ae-8928-ef98b16d39d0\") " pod="calico-system/whisker-769f4b899f-n7zhk" Jul 6 23:52:53.770174 kubelet[2505]: I0706 23:52:53.769470 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2cffe33d-e536-47ae-8928-ef98b16d39d0-whisker-backend-key-pair\") pod \"whisker-769f4b899f-n7zhk\" (UID: \"2cffe33d-e536-47ae-8928-ef98b16d39d0\") " pod="calico-system/whisker-769f4b899f-n7zhk" Jul 6 23:52:53.770174 kubelet[2505]: I0706 23:52:53.769560 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cffe33d-e536-47ae-8928-ef98b16d39d0-whisker-ca-bundle\") pod \"whisker-769f4b899f-n7zhk\" (UID: \"2cffe33d-e536-47ae-8928-ef98b16d39d0\") " pod="calico-system/whisker-769f4b899f-n7zhk" Jul 6 23:52:53.774891 systemd[1]: Created slice kubepods-besteffort-pod2cffe33d_e536_47ae_8928_ef98b16d39d0.slice - libcontainer container kubepods-besteffort-pod2cffe33d_e536_47ae_8928_ef98b16d39d0.slice. Jul 6 23:52:53.850070 systemd[1]: run-containerd-runc-k8s.io-40b29385339ace52b97a8e17e8de91c40a7e2e4fbe8f6cf2676aaae110d125d0-runc.d1fWuT.mount: Deactivated successfully. Jul 6 23:52:54.083079 containerd[1462]: time="2025-07-06T23:52:54.082392748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-769f4b899f-n7zhk,Uid:2cffe33d-e536-47ae-8928-ef98b16d39d0,Namespace:calico-system,Attempt:0,}" Jul 6 23:52:54.312706 systemd-networkd[1355]: calie1be3857443: Link UP Jul 6 23:52:54.323073 systemd-networkd[1355]: calie1be3857443: Gained carrier Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.174 [INFO][3916] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.196 [INFO][3916] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0 whisker-769f4b899f- calico-system 2cffe33d-e536-47ae-8928-ef98b16d39d0 970 0 2025-07-06 23:52:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:769f4b899f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-d-7537ff12ef whisker-769f4b899f-n7zhk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie1be3857443 [] [] }} ContainerID="3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" Namespace="calico-system" Pod="whisker-769f4b899f-n7zhk" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.196 [INFO][3916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" Namespace="calico-system" Pod="whisker-769f4b899f-n7zhk" 
WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.239 [INFO][3928] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" HandleID="k8s-pod-network.3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.239 [INFO][3928] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" HandleID="k8s-pod-network.3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-d-7537ff12ef", "pod":"whisker-769f4b899f-n7zhk", "timestamp":"2025-07-06 23:52:54.239443246 +0000 UTC"}, Hostname:"ci-4081.3.4-d-7537ff12ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.239 [INFO][3928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.239 [INFO][3928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.240 [INFO][3928] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-d-7537ff12ef' Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.253 [INFO][3928] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.262 [INFO][3928] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.269 [INFO][3928] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.271 [INFO][3928] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.273 [INFO][3928] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.274 [INFO][3928] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.275 [INFO][3928] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7 Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.281 [INFO][3928] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 
23:52:54.287 [INFO][3928] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.193/26] block=192.168.82.192/26 handle="k8s-pod-network.3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.287 [INFO][3928] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.193/26] handle="k8s-pod-network.3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.287 [INFO][3928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:52:54.335733 containerd[1462]: 2025-07-06 23:52:54.287 [INFO][3928] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.193/26] IPv6=[] ContainerID="3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" HandleID="k8s-pod-network.3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0" Jul 6 23:52:54.339218 containerd[1462]: 2025-07-06 23:52:54.290 [INFO][3916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" Namespace="calico-system" Pod="whisker-769f4b899f-n7zhk" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0", GenerateName:"whisker-769f4b899f-", Namespace:"calico-system", SelfLink:"", UID:"2cffe33d-e536-47ae-8928-ef98b16d39d0", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"769f4b899f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"", Pod:"whisker-769f4b899f-n7zhk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie1be3857443", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:52:54.339218 containerd[1462]: 2025-07-06 23:52:54.291 [INFO][3916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.193/32] ContainerID="3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" Namespace="calico-system" Pod="whisker-769f4b899f-n7zhk" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0" Jul 6 23:52:54.339218 containerd[1462]: 2025-07-06 23:52:54.291 [INFO][3916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1be3857443 ContainerID="3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" Namespace="calico-system" Pod="whisker-769f4b899f-n7zhk" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0" 
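The IPAM trace above shows how the pod's address was chosen: the node ci-4081.3.4-d-7537ff12ef already holds an affinity for the block 192.168.82.192/26, the block is loaded under the host-wide IPAM lock, and the first free address, 192.168.82.193, is claimed and recorded under the new handle. The block arithmetic is easy to verify with the values copied from the log:

    # The /26 block from the IPAM log spans 64 addresses; its first usable
    # host address is 192.168.82.193, which is what the pod received.
    import ipaddress

    block = ipaddress.ip_network("192.168.82.192/26")
    print(block.num_addresses)   # 64
    print(next(block.hosts()))   # 192.168.82.193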
Jul 6 23:52:54.339218 containerd[1462]: 2025-07-06 23:52:54.312 [INFO][3916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" Namespace="calico-system" Pod="whisker-769f4b899f-n7zhk" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0"
Jul 6 23:52:54.339218 containerd[1462]: 2025-07-06 23:52:54.313 [INFO][3916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" Namespace="calico-system" Pod="whisker-769f4b899f-n7zhk" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0", GenerateName:"whisker-769f4b899f-", Namespace:"calico-system", SelfLink:"", UID:"2cffe33d-e536-47ae-8928-ef98b16d39d0", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"769f4b899f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7", Pod:"whisker-769f4b899f-n7zhk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie1be3857443", MAC:"82:c3:b3:14:26:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:52:54.339218 containerd[1462]: 2025-07-06 23:52:54.332 [INFO][3916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7" Namespace="calico-system" Pod="whisker-769f4b899f-n7zhk" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-whisker--769f4b899f--n7zhk-eth0"
Jul 6 23:52:54.416312 containerd[1462]: time="2025-07-06T23:52:54.415974311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:52:54.416312 containerd[1462]: time="2025-07-06T23:52:54.416068963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:52:54.416312 containerd[1462]: time="2025-07-06T23:52:54.416084186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:52:54.417196 containerd[1462]: time="2025-07-06T23:52:54.417089423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:52:54.445089 systemd[1]: Started cri-containerd-3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7.scope - libcontainer container 3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7.
Jul 6 23:52:54.549146 containerd[1462]: time="2025-07-06T23:52:54.549102403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-769f4b899f-n7zhk,Uid:2cffe33d-e536-47ae-8928-ef98b16d39d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7\""
Jul 6 23:52:54.557943 containerd[1462]: time="2025-07-06T23:52:54.557899789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Jul 6 23:52:54.639920 kernel: bpftool[4036]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jul 6 23:52:55.006903 systemd-networkd[1355]: vxlan.calico: Link UP
Jul 6 23:52:55.006913 systemd-networkd[1355]: vxlan.calico: Gained carrier
Jul 6 23:52:55.361184 kubelet[2505]: I0706 23:52:55.361143 2505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7" path="/var/lib/kubelet/pods/0ccbb99e-87ff-4a32-b2b1-2a60da50c8f7/volumes"
Jul 6 23:52:55.982637 containerd[1462]: time="2025-07-06T23:52:55.982565697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:52:55.983468 containerd[1462]: time="2025-07-06T23:52:55.983426158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207"
Jul 6 23:52:55.984941 containerd[1462]: time="2025-07-06T23:52:55.983974523Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:52:55.986012 containerd[1462]: time="2025-07-06T23:52:55.985981312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:52:55.986842 containerd[1462]: time="2025-07-06T23:52:55.986815131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.428875882s"
Jul 6 23:52:55.986942 containerd[1462]: time="2025-07-06T23:52:55.986926183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\""
Jul 6 23:52:55.991193 containerd[1462]: time="2025-07-06T23:52:55.991159228Z" level=info msg="CreateContainer within sandbox \"3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 6 23:52:56.006471 containerd[1462]: time="2025-07-06T23:52:56.006409766Z" level=info msg="CreateContainer within sandbox \"3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"80b69e63bd6ecfa3cf92fe8f1a275b774407a3ea9c5cfc15a3502118d95bf659\""
Jul 6 23:52:56.007881 containerd[1462]: time="2025-07-06T23:52:56.007690852Z" level=info msg="StartContainer for \"80b69e63bd6ecfa3cf92fe8f1a275b774407a3ea9c5cfc15a3502118d95bf659\""
Jul 6 23:52:56.053436 systemd[1]: Started cri-containerd-80b69e63bd6ecfa3cf92fe8f1a275b774407a3ea9c5cfc15a3502118d95bf659.scope - libcontainer container 80b69e63bd6ecfa3cf92fe8f1a275b774407a3ea9c5cfc15a3502118d95bf659.
Jul 6 23:52:56.107704 containerd[1462]: time="2025-07-06T23:52:56.107600265Z" level=info msg="StartContainer for \"80b69e63bd6ecfa3cf92fe8f1a275b774407a3ea9c5cfc15a3502118d95bf659\" returns successfully"
Jul 6 23:52:56.111154 containerd[1462]: time="2025-07-06T23:52:56.110796158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 6 23:52:56.182308 systemd-networkd[1355]: calie1be3857443: Gained IPv6LL
Jul 6 23:52:56.566912 systemd-networkd[1355]: vxlan.calico: Gained IPv6LL
Jul 6 23:52:57.362004 containerd[1462]: time="2025-07-06T23:52:57.361955454Z" level=info msg="StopPodSandbox for \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\""
Jul 6 23:52:57.363693 containerd[1462]: time="2025-07-06T23:52:57.363290549Z" level=info msg="StopPodSandbox for \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\""
Jul 6 23:52:57.364671 containerd[1462]: time="2025-07-06T23:52:57.364282105Z" level=info msg="StopPodSandbox for \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\""
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.460 [INFO][4197] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c"
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.464 [INFO][4197] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" iface="eth0" netns="/var/run/netns/cni-fbd8de05-7cd4-8452-bb9b-db42bcd4cad0"
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.464 [INFO][4197] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" iface="eth0" netns="/var/run/netns/cni-fbd8de05-7cd4-8452-bb9b-db42bcd4cad0"
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.465 [INFO][4197] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" iface="eth0" netns="/var/run/netns/cni-fbd8de05-7cd4-8452-bb9b-db42bcd4cad0"
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.465 [INFO][4197] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c"
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.465 [INFO][4197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c"
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.555 [INFO][4212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" HandleID="k8s-pod-network.ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0"
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.556 [INFO][4212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.556 [INFO][4212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.575 [WARNING][4212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" HandleID="k8s-pod-network.ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0"
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.575 [INFO][4212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" HandleID="k8s-pod-network.ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0"
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.579 [INFO][4212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:52:57.594087 containerd[1462]: 2025-07-06 23:52:57.583 [INFO][4197] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c"
Jul 6 23:52:57.594087 containerd[1462]: time="2025-07-06T23:52:57.593175362Z" level=info msg="TearDown network for sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\" successfully"
Jul 6 23:52:57.594087 containerd[1462]: time="2025-07-06T23:52:57.593207930Z" level=info msg="StopPodSandbox for \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\" returns successfully"
Jul 6 23:52:57.597405 systemd[1]: run-netns-cni\x2dfbd8de05\x2d7cd4\x2d8452\x2dbb9b\x2ddb42bcd4cad0.mount: Deactivated successfully.
Jul 6 23:52:57.600330 kubelet[2505]: E0706 23:52:57.600295 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 6 23:52:57.601614 containerd[1462]: time="2025-07-06T23:52:57.601582361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwpvc,Uid:47d0f9e3-1b02-4028-8726-1149e0c33163,Namespace:kube-system,Attempt:1,}"
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.536 [INFO][4196] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0"
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.536 [INFO][4196] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" iface="eth0" netns="/var/run/netns/cni-64b1993e-e82e-80c3-845e-7f3b03f7fa54"
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.537 [INFO][4196] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" iface="eth0" netns="/var/run/netns/cni-64b1993e-e82e-80c3-845e-7f3b03f7fa54"
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.540 [INFO][4196] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" iface="eth0" netns="/var/run/netns/cni-64b1993e-e82e-80c3-845e-7f3b03f7fa54"
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.540 [INFO][4196] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0"
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.540 [INFO][4196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0"
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.606 [INFO][4225] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" HandleID="k8s-pod-network.26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0"
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.606 [INFO][4225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.606 [INFO][4225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.625 [WARNING][4225] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" HandleID="k8s-pod-network.26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0"
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.625 [INFO][4225] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" HandleID="k8s-pod-network.26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0"
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.629 [INFO][4225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:52:57.647875 containerd[1462]: 2025-07-06 23:52:57.634 [INFO][4196] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0"
Jul 6 23:52:57.647875 containerd[1462]: time="2025-07-06T23:52:57.647727547Z" level=info msg="TearDown network for sandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\" successfully"
Jul 6 23:52:57.653115 containerd[1462]: time="2025-07-06T23:52:57.652134424Z" level=info msg="StopPodSandbox for \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\" returns successfully"
Jul 6 23:52:57.653777 systemd[1]: run-netns-cni\x2d64b1993e\x2de82e\x2d80c3\x2d845e\x2d7f3b03f7fa54.mount: Deactivated successfully.
Jul 6 23:52:57.655032 containerd[1462]: time="2025-07-06T23:52:57.654947475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fvz6x,Uid:f74bf16c-5f59-4774-ac59-c82b3e42ab4b,Namespace:calico-system,Attempt:1,}"
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.517 [INFO][4198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765"
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.522 [INFO][4198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" iface="eth0" netns="/var/run/netns/cni-64b098f3-72d9-c670-5a7c-b4f72a30d47a"
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.522 [INFO][4198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" iface="eth0" netns="/var/run/netns/cni-64b098f3-72d9-c670-5a7c-b4f72a30d47a"
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.524 [INFO][4198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" iface="eth0" netns="/var/run/netns/cni-64b098f3-72d9-c670-5a7c-b4f72a30d47a"
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.524 [INFO][4198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765"
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.524 [INFO][4198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765"
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.607 [INFO][4220] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" HandleID="k8s-pod-network.63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0"
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.608 [INFO][4220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.629 [INFO][4220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.651 [WARNING][4220] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" HandleID="k8s-pod-network.63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0"
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.651 [INFO][4220] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" HandleID="k8s-pod-network.63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0"
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.657 [INFO][4220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:52:57.687087 containerd[1462]: 2025-07-06 23:52:57.664 [INFO][4198] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765"
Jul 6 23:52:57.689138 containerd[1462]: time="2025-07-06T23:52:57.687516893Z" level=info msg="TearDown network for sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\" successfully"
Jul 6 23:52:57.689138 containerd[1462]: time="2025-07-06T23:52:57.687570591Z" level=info msg="StopPodSandbox for \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\" returns successfully"
Jul 6 23:52:57.706408 containerd[1462]: time="2025-07-06T23:52:57.705396600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f4bc6d54-f2gb7,Uid:70ac0620-95a8-4173-8948-facb2c4a4406,Namespace:calico-apiserver,Attempt:1,}"
Jul 6 23:52:57.932540 systemd-networkd[1355]: cali2ef5d209f6c: Link UP
Jul 6 23:52:57.937140 systemd-networkd[1355]: cali2ef5d209f6c: Gained carrier
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.701 [INFO][4235] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0 coredns-674b8bbfcf- kube-system 47d0f9e3-1b02-4028-8726-1149e0c33163 991 0 2025-07-06 23:52:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-d-7537ff12ef coredns-674b8bbfcf-xwpvc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ef5d209f6c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwpvc" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.701 [INFO][4235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwpvc" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.804 [INFO][4260] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" HandleID="k8s-pod-network.9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.806 [INFO][4260] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" HandleID="k8s-pod-network.9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e9970), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-d-7537ff12ef", "pod":"coredns-674b8bbfcf-xwpvc", "timestamp":"2025-07-06 23:52:57.804974816 +0000 UTC"}, Hostname:"ci-4081.3.4-d-7537ff12ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.806 [INFO][4260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.806 [INFO][4260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.806 [INFO][4260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-d-7537ff12ef'
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.826 [INFO][4260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.838 [INFO][4260] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.854 [INFO][4260] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.861 [INFO][4260] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.865 [INFO][4260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.865 [INFO][4260] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.871 [INFO][4260] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.877 [INFO][4260] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.901 [INFO][4260] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.194/26] block=192.168.82.192/26 handle="k8s-pod-network.9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.902 [INFO][4260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.194/26] handle="k8s-pod-network.9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.902 [INFO][4260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:52:58.016087 containerd[1462]: 2025-07-06 23:52:57.902 [INFO][4260] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.194/26] IPv6=[] ContainerID="9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" HandleID="k8s-pod-network.9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0"
Jul 6 23:52:58.018676 containerd[1462]: 2025-07-06 23:52:57.920 [INFO][4235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwpvc" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"47d0f9e3-1b02-4028-8726-1149e0c33163", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"", Pod:"coredns-674b8bbfcf-xwpvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ef5d209f6c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:52:58.018676 containerd[1462]: 2025-07-06 23:52:57.920 [INFO][4235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.194/32] ContainerID="9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwpvc" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0"
Jul 6 23:52:58.018676 containerd[1462]: 2025-07-06 23:52:57.920 [INFO][4235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ef5d209f6c ContainerID="9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwpvc" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0"
Jul 6 23:52:58.018676 containerd[1462]: 2025-07-06 23:52:57.940 [INFO][4235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwpvc" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0"
Jul 6 23:52:58.018676 containerd[1462]: 2025-07-06 23:52:57.942 [INFO][4235] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwpvc" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"47d0f9e3-1b02-4028-8726-1149e0c33163", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693", Pod:"coredns-674b8bbfcf-xwpvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ef5d209f6c", MAC:"9a:9c:92:bb:28:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:52:58.018676 containerd[1462]: 2025-07-06 23:52:57.994 [INFO][4235] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwpvc" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0"
Jul 6 23:52:58.110086 containerd[1462]: time="2025-07-06T23:52:58.103991602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:52:58.110086 containerd[1462]: time="2025-07-06T23:52:58.107117936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:52:58.110086 containerd[1462]: time="2025-07-06T23:52:58.108388414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:52:58.112387 containerd[1462]: time="2025-07-06T23:52:58.111183802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:52:58.153871 systemd-networkd[1355]: cali5244bb283cd: Link UP
Jul 6 23:52:58.164485 systemd-networkd[1355]: cali5244bb283cd: Gained carrier
Jul 6 23:52:58.181282 systemd[1]: Started cri-containerd-9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693.scope - libcontainer container 9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693.
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:57.776 [INFO][4248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0 csi-node-driver- calico-system f74bf16c-5f59-4774-ac59-c82b3e42ab4b 994 0 2025-07-06 23:52:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-d-7537ff12ef csi-node-driver-fvz6x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5244bb283cd [] [] }} ContainerID="165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" Namespace="calico-system" Pod="csi-node-driver-fvz6x" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:57.776 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" Namespace="calico-system" Pod="csi-node-driver-fvz6x" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:57.873 [INFO][4277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" HandleID="k8s-pod-network.165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:57.875 [INFO][4277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" HandleID="k8s-pod-network.165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031e8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-d-7537ff12ef", "pod":"csi-node-driver-fvz6x", "timestamp":"2025-07-06 23:52:57.873883271 +0000 UTC"}, Hostname:"ci-4081.3.4-d-7537ff12ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:57.875 [INFO][4277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:57.903 [INFO][4277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:57.903 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-d-7537ff12ef'
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:57.951 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:57.976 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:58.004 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:58.014 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:58.026 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:58.026 [INFO][4277] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:58.045 [INFO][4277] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:58.069 [INFO][4277] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:58.110 [INFO][4277] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.195/26] block=192.168.82.192/26 handle="k8s-pod-network.165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:58.114 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.195/26] handle="k8s-pod-network.165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:58.116 [INFO][4277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:52:58.218866 containerd[1462]: 2025-07-06 23:52:58.116 [INFO][4277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.195/26] IPv6=[] ContainerID="165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" HandleID="k8s-pod-network.165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0"
Jul 6 23:52:58.222754 containerd[1462]: 2025-07-06 23:52:58.128 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" Namespace="calico-system" Pod="csi-node-driver-fvz6x" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f74bf16c-5f59-4774-ac59-c82b3e42ab4b", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"", Pod:"csi-node-driver-fvz6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5244bb283cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:52:58.222754 containerd[1462]: 2025-07-06 23:52:58.128 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.195/32] ContainerID="165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" Namespace="calico-system" Pod="csi-node-driver-fvz6x" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0"
Jul 6 23:52:58.222754 containerd[1462]: 2025-07-06 23:52:58.128 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5244bb283cd ContainerID="165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" Namespace="calico-system" Pod="csi-node-driver-fvz6x" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0"
Jul 6 23:52:58.222754 containerd[1462]: 2025-07-06 23:52:58.166 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" Namespace="calico-system" Pod="csi-node-driver-fvz6x" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0"
Jul 6 23:52:58.222754 containerd[1462]: 2025-07-06 23:52:58.173 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" Namespace="calico-system" Pod="csi-node-driver-fvz6x" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f74bf16c-5f59-4774-ac59-c82b3e42ab4b", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8", Pod:"csi-node-driver-fvz6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5244bb283cd", MAC:"0a:9e:4d:55:e2:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:52:58.222754 containerd[1462]: 2025-07-06 23:52:58.201 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8" Namespace="calico-system" Pod="csi-node-driver-fvz6x" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0"
Jul 6 23:52:58.335684 systemd-networkd[1355]: cali07c5e8cac6e: Link UP
Jul 6 23:52:58.338503 systemd-networkd[1355]: cali07c5e8cac6e: Gained carrier
Jul 6 23:52:58.352519 containerd[1462]: time="2025-07-06T23:52:58.352347118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwpvc,Uid:47d0f9e3-1b02-4028-8726-1149e0c33163,Namespace:kube-system,Attempt:1,} returns sandbox id \"9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693\""
Jul 6 23:52:58.359597 containerd[1462]: time="2025-07-06T23:52:58.359128863Z" level=info msg="StopPodSandbox for \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\""
Jul 6 23:52:58.361971 kubelet[2505]: E0706 23:52:58.356942 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 6 23:52:58.363047 containerd[1462]: time="2025-07-06T23:52:58.362848099Z" level=info msg="StopPodSandbox for \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\""
Jul 6 23:52:58.367409 containerd[1462]: time="2025-07-06T23:52:58.366976804Z" level=info msg="StopPodSandbox for \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\""
Jul 6 23:52:58.373748 containerd[1462]: time="2025-07-06T23:52:58.368945029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:52:58.373748 containerd[1462]: time="2025-07-06T23:52:58.369131970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:52:58.373748 containerd[1462]: time="2025-07-06T23:52:58.369298373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:52:58.373748 containerd[1462]: time="2025-07-06T23:52:58.369435709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:52:58.395103 containerd[1462]: time="2025-07-06T23:52:58.394475352Z" level=info msg="CreateContainer within sandbox \"9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:57.837 [INFO][4264] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0 calico-apiserver-55f4bc6d54- calico-apiserver 70ac0620-95a8-4173-8948-facb2c4a4406 993 0 2025-07-06 23:52:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55f4bc6d54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-d-7537ff12ef calico-apiserver-55f4bc6d54-f2gb7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07c5e8cac6e [] [] }} ContainerID="525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-f2gb7" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:57.837 [INFO][4264] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-f2gb7" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.042 [INFO][4286] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" HandleID="k8s-pod-network.525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.043 [INFO][4286] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" HandleID="k8s-pod-network.525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000397df0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-d-7537ff12ef", "pod":"calico-apiserver-55f4bc6d54-f2gb7", "timestamp":"2025-07-06 23:52:58.042268476 +0000 UTC"}, Hostname:"ci-4081.3.4-d-7537ff12ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.043 [INFO][4286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.114 [INFO][4286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.116 [INFO][4286] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-d-7537ff12ef'
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.132 [INFO][4286] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.161 [INFO][4286] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.198 [INFO][4286] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.215 [INFO][4286] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.227 [INFO][4286] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.227 [INFO][4286] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.234 [INFO][4286] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.257 [INFO][4286] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.281 [INFO][4286] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.196/26] block=192.168.82.192/26 handle="k8s-pod-network.525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.281 [INFO][4286] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.196/26] handle="k8s-pod-network.525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" host="ci-4081.3.4-d-7537ff12ef"
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.281 [INFO][4286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:52:58.400646 containerd[1462]: 2025-07-06 23:52:58.281 [INFO][4286] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.196/26] IPv6=[] ContainerID="525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" HandleID="k8s-pod-network.525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0"
Jul 6 23:52:58.401550 containerd[1462]: 2025-07-06 23:52:58.302 [INFO][4264] cni-plugin/k8s.go 418: Populated endpoint ContainerID="525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-f2gb7" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0", GenerateName:"calico-apiserver-55f4bc6d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"70ac0620-95a8-4173-8948-facb2c4a4406", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f4bc6d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"", Pod:"calico-apiserver-55f4bc6d54-f2gb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07c5e8cac6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:52:58.401550 containerd[1462]: 2025-07-06 23:52:58.305 [INFO][4264] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.196/32] ContainerID="525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-f2gb7" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0"
Jul 6 23:52:58.401550 containerd[1462]: 2025-07-06 23:52:58.306 [INFO][4264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07c5e8cac6e ContainerID="525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-f2gb7" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0"
Jul 6 23:52:58.401550 containerd[1462]: 2025-07-06 23:52:58.341 [INFO][4264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-f2gb7" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0"
Jul 6 23:52:58.401550 containerd[1462]: 2025-07-06 23:52:58.342 [INFO][4264] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-f2gb7" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0", GenerateName:"calico-apiserver-55f4bc6d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"70ac0620-95a8-4173-8948-facb2c4a4406", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f4bc6d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581", Pod:"calico-apiserver-55f4bc6d54-f2gb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07c5e8cac6e", MAC:"46:47:69:43:02:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:52:58.401550 containerd[1462]: 2025-07-06 23:52:58.371 [INFO][4264] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-f2gb7" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0"
Jul 6 23:52:58.434816 systemd[1]: Started cri-containerd-165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8.scope - libcontainer container 165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8.
Jul 6 23:52:58.479540 containerd[1462]: time="2025-07-06T23:52:58.479275270Z" level=info msg="CreateContainer within sandbox \"9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a682f8520c64207b6c0cd6317af92f194864680ee8c42d49147ead31680a015e\""
Jul 6 23:52:58.483051 containerd[1462]: time="2025-07-06T23:52:58.482770925Z" level=info msg="StartContainer for \"a682f8520c64207b6c0cd6317af92f194864680ee8c42d49147ead31680a015e\""
Jul 6 23:52:58.535937 containerd[1462]: time="2025-07-06T23:52:58.535190633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:52:58.535937 containerd[1462]: time="2025-07-06T23:52:58.535272536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:52:58.535937 containerd[1462]: time="2025-07-06T23:52:58.535287522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:52:58.535937 containerd[1462]: time="2025-07-06T23:52:58.535382208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:52:58.536283 containerd[1462]: time="2025-07-06T23:52:58.536164908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fvz6x,Uid:f74bf16c-5f59-4774-ac59-c82b3e42ab4b,Namespace:calico-system,Attempt:1,} returns sandbox id \"165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8\""
Jul 6 23:52:58.582333 systemd[1]: Started cri-containerd-525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581.scope - libcontainer container 525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581.
Jul 6 23:52:58.606166 systemd[1]: run-netns-cni\x2d64b098f3\x2d72d9\x2dc670\x2d5a7c\x2db4f72a30d47a.mount: Deactivated successfully.
Jul 6 23:52:58.618283 systemd[1]: Started cri-containerd-a682f8520c64207b6c0cd6317af92f194864680ee8c42d49147ead31680a015e.scope - libcontainer container a682f8520c64207b6c0cd6317af92f194864680ee8c42d49147ead31680a015e.
Jul 6 23:52:58.740221 containerd[1462]: time="2025-07-06T23:52:58.739818281Z" level=info msg="StartContainer for \"a682f8520c64207b6c0cd6317af92f194864680ee8c42d49147ead31680a015e\" returns successfully"
Jul 6 23:52:58.794725 containerd[1462]: time="2025-07-06T23:52:58.794506533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f4bc6d54-f2gb7,Uid:70ac0620-95a8-4173-8948-facb2c4a4406,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581\""
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.650 [INFO][4422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d"
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.651 [INFO][4422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" iface="eth0" netns="/var/run/netns/cni-6d271c7c-2eb9-53d5-a0f1-f2d59586ab75"
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.651 [INFO][4422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" iface="eth0" netns="/var/run/netns/cni-6d271c7c-2eb9-53d5-a0f1-f2d59586ab75"
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.656 [INFO][4422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" iface="eth0" netns="/var/run/netns/cni-6d271c7c-2eb9-53d5-a0f1-f2d59586ab75"
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.656 [INFO][4422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d"
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.656 [INFO][4422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d"
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.748 [INFO][4518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" HandleID="k8s-pod-network.afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0"
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.749 [INFO][4518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.749 [INFO][4518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.767 [WARNING][4518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" HandleID="k8s-pod-network.afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0"
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.767 [INFO][4518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" HandleID="k8s-pod-network.afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0"
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.774 [INFO][4518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:52:58.798664 containerd[1462]: 2025-07-06 23:52:58.791 [INFO][4422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d"
Jul 6 23:52:58.805558 containerd[1462]: time="2025-07-06T23:52:58.803073550Z" level=info msg="TearDown network for sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\" successfully"
Jul 6 23:52:58.805558 containerd[1462]: time="2025-07-06T23:52:58.803103306Z" level=info msg="StopPodSandbox for \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\" returns successfully"
Jul 6 23:52:58.803718 systemd[1]: run-netns-cni\x2d6d271c7c\x2d2eb9\x2d53d5\x2da0f1\x2df2d59586ab75.mount: Deactivated successfully.
Jul 6 23:52:58.810466 containerd[1462]: time="2025-07-06T23:52:58.810335326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-w2rcs,Uid:dda5be44-0474-421e-81ed-886448d2d1f0,Namespace:calico-system,Attempt:1,}"
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.661 [INFO][4413] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43"
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.661 [INFO][4413] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" iface="eth0" netns="/var/run/netns/cni-3168d40e-75ec-5a01-5944-d18799dceadd"
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.663 [INFO][4413] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" iface="eth0" netns="/var/run/netns/cni-3168d40e-75ec-5a01-5944-d18799dceadd"
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.668 [INFO][4413] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" iface="eth0" netns="/var/run/netns/cni-3168d40e-75ec-5a01-5944-d18799dceadd"
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.668 [INFO][4413] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43"
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.668 [INFO][4413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43"
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.825 [INFO][4525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" HandleID="k8s-pod-network.851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0"
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.827 [INFO][4525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.827 [INFO][4525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.859 [WARNING][4525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" HandleID="k8s-pod-network.851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0"
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.859 [INFO][4525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" HandleID="k8s-pod-network.851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0"
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.863 [INFO][4525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:52:58.892185 containerd[1462]: 2025-07-06 23:52:58.880 [INFO][4413] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43"
Jul 6 23:52:58.894577 containerd[1462]: time="2025-07-06T23:52:58.894353938Z" level=info msg="TearDown network for sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\" successfully"
Jul 6 23:52:58.894577 containerd[1462]: time="2025-07-06T23:52:58.894526045Z" level=info msg="StopPodSandbox for \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\" returns successfully"
Jul 6 23:52:58.898006 systemd[1]: run-netns-cni\x2d3168d40e\x2d75ec\x2d5a01\x2d5944\x2dd18799dceadd.mount: Deactivated successfully.
Jul 6 23:52:58.901133 kubelet[2505]: E0706 23:52:58.899445 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:58.905424 containerd[1462]: time="2025-07-06T23:52:58.905172140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vfg28,Uid:bc3e67e5-3ae0-4db6-94bf-18460361bfcb,Namespace:kube-system,Attempt:1,}" Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.634 [INFO][4426] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.634 [INFO][4426] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" iface="eth0" netns="/var/run/netns/cni-17d97e07-4079-a5c5-9a4b-6c568c90bcfc" Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.635 [INFO][4426] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" iface="eth0" netns="/var/run/netns/cni-17d97e07-4079-a5c5-9a4b-6c568c90bcfc" Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.636 [INFO][4426] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" iface="eth0" netns="/var/run/netns/cni-17d97e07-4079-a5c5-9a4b-6c568c90bcfc" Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.636 [INFO][4426] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.636 [INFO][4426] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.847 [INFO][4508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" HandleID="k8s-pod-network.bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.847 [INFO][4508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.863 [INFO][4508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.890 [WARNING][4508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" HandleID="k8s-pod-network.bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.890 [INFO][4508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" HandleID="k8s-pod-network.bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.895 [INFO][4508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:52:58.915793 containerd[1462]: 2025-07-06 23:52:58.911 [INFO][4426] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:52:58.918425 containerd[1462]: time="2025-07-06T23:52:58.918358269Z" level=info msg="TearDown network for sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\" successfully" Jul 6 23:52:58.918425 containerd[1462]: time="2025-07-06T23:52:58.918415800Z" level=info msg="StopPodSandbox for \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\" returns successfully" Jul 6 23:52:58.920647 containerd[1462]: time="2025-07-06T23:52:58.920207907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f4bc6d54-zlkgl,Uid:afca6f34-d92a-4a5d-ad63-7c0fa937928f,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:52:59.176427 systemd-networkd[1355]: califbf062a4ac7: Link UP Jul 6 23:52:59.179004 systemd-networkd[1355]: califbf062a4ac7: Gained carrier Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:58.951 [INFO][4552] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0 goldmane-768f4c5c69- calico-system dda5be44-0474-421e-81ed-886448d2d1f0 1015 0 2025-07-06 23:52:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-d-7537ff12ef goldmane-768f4c5c69-w2rcs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califbf062a4ac7 [] [] }} ContainerID="159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" Namespace="calico-system" Pod="goldmane-768f4c5c69-w2rcs" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-" Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:58.954 [INFO][4552] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" Namespace="calico-system" Pod="goldmane-768f4c5c69-w2rcs" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.092 [INFO][4589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" HandleID="k8s-pod-network.159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 
23:52:59.094 [INFO][4589] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" HandleID="k8s-pod-network.159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-d-7537ff12ef", "pod":"goldmane-768f4c5c69-w2rcs", "timestamp":"2025-07-06 23:52:59.092712303 +0000 UTC"}, Hostname:"ci-4081.3.4-d-7537ff12ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.094 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.094 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.095 [INFO][4589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-d-7537ff12ef' Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.120 [INFO][4589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.130 [INFO][4589] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.140 [INFO][4589] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.143 [INFO][4589] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.146 [INFO][4589] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.146 [INFO][4589] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.149 [INFO][4589] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55 Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.156 [INFO][4589] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.167 [INFO][4589] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.197/26] block=192.168.82.192/26 handle="k8s-pod-network.159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.167 [INFO][4589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.197/26] handle="k8s-pod-network.159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.209610 
containerd[1462]: 2025-07-06 23:52:59.167 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:52:59.209610 containerd[1462]: 2025-07-06 23:52:59.167 [INFO][4589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.197/26] IPv6=[] ContainerID="159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" HandleID="k8s-pod-network.159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:52:59.211010 containerd[1462]: 2025-07-06 23:52:59.172 [INFO][4552] cni-plugin/k8s.go 418: Populated endpoint ContainerID="159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" Namespace="calico-system" Pod="goldmane-768f4c5c69-w2rcs" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"dda5be44-0474-421e-81ed-886448d2d1f0", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"", Pod:"goldmane-768f4c5c69-w2rcs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbf062a4ac7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:52:59.211010 containerd[1462]: 2025-07-06 23:52:59.172 [INFO][4552] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.197/32] ContainerID="159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" Namespace="calico-system" Pod="goldmane-768f4c5c69-w2rcs" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:52:59.211010 containerd[1462]: 2025-07-06 23:52:59.172 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbf062a4ac7 ContainerID="159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" Namespace="calico-system" Pod="goldmane-768f4c5c69-w2rcs" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:52:59.211010 containerd[1462]: 2025-07-06 23:52:59.181 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" Namespace="calico-system" Pod="goldmane-768f4c5c69-w2rcs" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:52:59.211010 containerd[1462]: 2025-07-06 23:52:59.183 [INFO][4552] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" Namespace="calico-system" Pod="goldmane-768f4c5c69-w2rcs" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"dda5be44-0474-421e-81ed-886448d2d1f0", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55", Pod:"goldmane-768f4c5c69-w2rcs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbf062a4ac7", MAC:"f6:a4:57:65:1e:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:52:59.211010 containerd[1462]: 2025-07-06 23:52:59.199 [INFO][4552] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55" Namespace="calico-system" Pod="goldmane-768f4c5c69-w2rcs" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:52:59.305317 systemd-networkd[1355]: calic7eef8ab358: Link UP Jul 6 23:52:59.314734 systemd-networkd[1355]: calic7eef8ab358: Gained carrier Jul 6 23:52:59.362870 containerd[1462]: time="2025-07-06T23:52:59.362112318Z" level=info msg="StopPodSandbox for \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\"" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.073 [INFO][4578] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0 calico-apiserver-55f4bc6d54- calico-apiserver afca6f34-d92a-4a5d-ad63-7c0fa937928f 1013 0 2025-07-06 23:52:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55f4bc6d54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-d-7537ff12ef calico-apiserver-55f4bc6d54-zlkgl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic7eef8ab358 [] [] }} ContainerID="ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-zlkgl" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.073 [INFO][4578] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-zlkgl" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.146 [INFO][4602] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" HandleID="k8s-pod-network.ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.146 [INFO][4602] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" HandleID="k8s-pod-network.ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-d-7537ff12ef", "pod":"calico-apiserver-55f4bc6d54-zlkgl", "timestamp":"2025-07-06 23:52:59.146238909 +0000 UTC"}, Hostname:"ci-4081.3.4-d-7537ff12ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.147 [INFO][4602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.168 [INFO][4602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.168 [INFO][4602] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-d-7537ff12ef' Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.221 [INFO][4602] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.233 [INFO][4602] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.244 [INFO][4602] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.247 [INFO][4602] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.251 [INFO][4602] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.251 [INFO][4602] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.254 [INFO][4602] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087 Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.268 [INFO][4602] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.281 [INFO][4602] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.198/26] block=192.168.82.192/26 handle="k8s-pod-network.ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.281 [INFO][4602] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.198/26] handle="k8s-pod-network.ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.281 [INFO][4602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
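Annotation: the [4602] entries above walk Calico's IPAM auto-assignment in order: acquire the host-wide lock, look up the host's block affinities, try the affine block 192.168.82.192/26, load it, assign the next free address (192.168.82.198 here), write the block back to claim the IP, then release the lock. A sketch of assigning one address from a /26 block in that style; the types are hypothetical stand-ins, not Calico's ipam package:

    // ipam_sketch.go: assigning one address from an affine /26 block, in the
    // order the [4602] entries show. Hypothetical stand-in, not Calico code.
    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    var hostWideLock sync.Mutex // "About to acquire host-wide IPAM lock."

    type block struct {
        cidr      *net.IPNet
        allocated map[string]string // IP -> handle
    }

    // assignOne takes the first free address in the block and records the
    // handle, mirroring "Attempting to assign 1 addresses from block" and
    // "Writing block in order to claim IPs".
    func (b *block) assignOne(handle string) (net.IP, bool) {
        base := b.cidr.IP.Mask(b.cidr.Mask)
        for i := 0; i < 64; i++ { // a /26 holds 64 addresses
            cand := make(net.IP, len(base))
            copy(cand, base)
            cand[len(cand)-1] += byte(i)
            if _, used := b.allocated[cand.String()]; !used {
                b.allocated[cand.String()] = handle
                return cand, true
            }
        }
        return nil, false // block full: the caller would try another block
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.82.192/26")
        b := &block{cidr: cidr, allocated: map[string]string{}}
        // Seed six prior allocations so the next claim lands on .198,
        // matching the log.
        for i := 0; i < 6; i++ {
            b.assignOne(fmt.Sprintf("earlier-%d", i))
        }
        hostWideLock.Lock()
        ip, _ := b.assignOne("k8s-pod-network.ffcbda00...")
        hostWideLock.Unlock() // "Released host-wide IPAM lock."
        fmt.Println("claimed:", ip) // 192.168.82.198
    }
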
Jul 6 23:52:59.362870 containerd[1462]: 2025-07-06 23:52:59.281 [INFO][4602] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.198/26] IPv6=[] ContainerID="ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" HandleID="k8s-pod-network.ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:52:59.367613 containerd[1462]: 2025-07-06 23:52:59.294 [INFO][4578] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-zlkgl" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0", GenerateName:"calico-apiserver-55f4bc6d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"afca6f34-d92a-4a5d-ad63-7c0fa937928f", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f4bc6d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"", Pod:"calico-apiserver-55f4bc6d54-zlkgl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7eef8ab358", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:52:59.367613 containerd[1462]: 2025-07-06 23:52:59.294 [INFO][4578] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.198/32] ContainerID="ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-zlkgl" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:52:59.367613 containerd[1462]: 2025-07-06 23:52:59.294 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7eef8ab358 ContainerID="ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-zlkgl" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:52:59.367613 containerd[1462]: 2025-07-06 23:52:59.327 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-zlkgl" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:52:59.367613 containerd[1462]: 2025-07-06 23:52:59.332 [INFO][4578] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-zlkgl" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0", GenerateName:"calico-apiserver-55f4bc6d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"afca6f34-d92a-4a5d-ad63-7c0fa937928f", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f4bc6d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087", Pod:"calico-apiserver-55f4bc6d54-zlkgl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7eef8ab358", MAC:"86:32:ba:9a:2a:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:52:59.367613 containerd[1462]: 2025-07-06 23:52:59.353 [INFO][4578] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087" Namespace="calico-apiserver" Pod="calico-apiserver-55f4bc6d54-zlkgl" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:52:59.367613 containerd[1462]: time="2025-07-06T23:52:59.362250172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:59.367613 containerd[1462]: time="2025-07-06T23:52:59.362499221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:59.367613 containerd[1462]: time="2025-07-06T23:52:59.362516614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:59.367613 containerd[1462]: time="2025-07-06T23:52:59.362674252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:59.435278 systemd[1]: Started cri-containerd-159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55.scope - libcontainer container 159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55. 
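Annotation: the endpoint=&v3.WorkloadEndpoint{...} dumps above are plain Go struct prints of the projectcalico.org/v3 WorkloadEndpoint, filled in two passes: k8s.go 418 populates IPNetworks and Profiles while ContainerID, InterfaceName, and MAC are still empty, and k8s.go 446 adds those three once the veth exists, just before k8s.go 532 writes the endpoint to the datastore. A trimmed sketch of that two-step population, with a reduced struct standing in for the real v3 type:

    // wep_sketch.go: the populate-then-complete pattern from k8s.go 418/446.
    // The struct below is a reduced stand-in for v3.WorkloadEndpoint.
    package main

    import "fmt"

    type workloadEndpoint struct {
        Node, Pod, Endpoint string
        IPNetworks          []string
        Profiles            []string
        InterfaceName       string // empty until the veth exists
        MAC                 string // empty until the veth exists
        ContainerID         string // empty until the sandbox is active
    }

    // populate mirrors "Populated endpoint": IPs and profiles are known,
    // dataplane fields stay empty.
    func populate(node, pod string, ips, profiles []string) *workloadEndpoint {
        return &workloadEndpoint{Node: node, Pod: pod, Endpoint: "eth0",
            IPNetworks: ips, Profiles: profiles}
    }

    // complete mirrors "Added Mac, interface name, and active container ID".
    func complete(w *workloadEndpoint, iface, mac, containerID string) {
        w.InterfaceName, w.MAC, w.ContainerID = iface, mac, containerID
    }

    func main() {
        w := populate("ci-4081.3.4-d-7537ff12ef",
            "calico-apiserver-55f4bc6d54-zlkgl",
            []string{"192.168.82.198/32"},
            []string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"})
        complete(w, "calic7eef8ab358", "86:32:ba:9a:2a:69", "ffcbda00f2a3e550...")
        fmt.Printf("%+v\n", w) // then "Wrote updated endpoint to datastore"
    }
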
Jul 6 23:52:59.446800 systemd-networkd[1355]: cali5244bb283cd: Gained IPv6LL Jul 6 23:52:59.473304 containerd[1462]: time="2025-07-06T23:52:59.467828618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:59.473304 containerd[1462]: time="2025-07-06T23:52:59.472845816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:59.473304 containerd[1462]: time="2025-07-06T23:52:59.472867066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:59.473304 containerd[1462]: time="2025-07-06T23:52:59.472968420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:59.477460 systemd-networkd[1355]: cali0e7fea581d6: Link UP Jul 6 23:52:59.479707 systemd-networkd[1355]: cali0e7fea581d6: Gained carrier Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.069 [INFO][4568] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0 coredns-674b8bbfcf- kube-system bc3e67e5-3ae0-4db6-94bf-18460361bfcb 1014 0 2025-07-06 23:52:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-d-7537ff12ef coredns-674b8bbfcf-vfg28 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0e7fea581d6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfg28" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.070 [INFO][4568] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfg28" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.166 [INFO][4598] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" HandleID="k8s-pod-network.9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.166 [INFO][4598] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" HandleID="k8s-pod-network.9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-d-7537ff12ef", "pod":"coredns-674b8bbfcf-vfg28", "timestamp":"2025-07-06 23:52:59.166135656 +0000 UTC"}, Hostname:"ci-4081.3.4-d-7537ff12ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.166 [INFO][4598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.282 [INFO][4598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.283 [INFO][4598] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-d-7537ff12ef' Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.328 [INFO][4598] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.341 [INFO][4598] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.377 [INFO][4598] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.387 [INFO][4598] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.400 [INFO][4598] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.400 [INFO][4598] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.408 [INFO][4598] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2 Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.421 [INFO][4598] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.444 [INFO][4598] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.199/26] block=192.168.82.192/26 handle="k8s-pod-network.9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.444 [INFO][4598] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.199/26] handle="k8s-pod-network.9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.444 [INFO][4598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
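Annotation: the timestamps show the host-wide IPAM lock serializing these concurrent CNI ADDs. The coredns request [4598] logged "About to acquire" at 23:52:59.166 but "Acquired" only at 23:52:59.282, immediately after [4602] released the lock at 23:52:59.281; [4602] itself had waited behind [4589]. A minimal illustration of that contention pattern:

    // lockwait_sketch.go: two concurrent CNI ADDs contending for one
    // host-wide lock, as [4598] did while [4602] held it.
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var ipamLock sync.Mutex
        var wg sync.WaitGroup
        for _, id := range []string{"4602", "4598"} {
            wg.Add(1)
            go func(id string) {
                defer wg.Done()
                fmt.Println(id, "About to acquire host-wide IPAM lock.")
                ipamLock.Lock()
                fmt.Println(id, "Acquired host-wide IPAM lock.")
                time.Sleep(100 * time.Millisecond) // read block, assign, write block
                ipamLock.Unlock()
                fmt.Println(id, "Released host-wide IPAM lock.")
            }(id)
        }
        wg.Wait()
    }
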
Jul 6 23:52:59.517245 containerd[1462]: 2025-07-06 23:52:59.444 [INFO][4598] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.199/26] IPv6=[] ContainerID="9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" HandleID="k8s-pod-network.9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0" Jul 6 23:52:59.517906 containerd[1462]: 2025-07-06 23:52:59.466 [INFO][4568] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfg28" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc3e67e5-3ae0-4db6-94bf-18460361bfcb", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"", Pod:"coredns-674b8bbfcf-vfg28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e7fea581d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:52:59.517906 containerd[1462]: 2025-07-06 23:52:59.467 [INFO][4568] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.199/32] ContainerID="9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfg28" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0" Jul 6 23:52:59.517906 containerd[1462]: 2025-07-06 23:52:59.467 [INFO][4568] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e7fea581d6 ContainerID="9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfg28" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0" Jul 6 23:52:59.517906 containerd[1462]: 2025-07-06 23:52:59.479 [INFO][4568] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-vfg28" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0" Jul 6 23:52:59.517906 containerd[1462]: 2025-07-06 23:52:59.480 [INFO][4568] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfg28" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc3e67e5-3ae0-4db6-94bf-18460361bfcb", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2", Pod:"coredns-674b8bbfcf-vfg28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e7fea581d6", MAC:"7e:5c:65:be:f4:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:52:59.517906 containerd[1462]: 2025-07-06 23:52:59.504 [INFO][4568] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfg28" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0" Jul 6 23:52:59.547103 systemd[1]: Started cri-containerd-ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087.scope - libcontainer container ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087. Jul 6 23:52:59.585182 containerd[1462]: time="2025-07-06T23:52:59.582824344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:59.585182 containerd[1462]: time="2025-07-06T23:52:59.582925218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:59.585182 containerd[1462]: time="2025-07-06T23:52:59.582942906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:59.588599 containerd[1462]: time="2025-07-06T23:52:59.587207334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:59.605625 systemd[1]: run-netns-cni\x2d17d97e07\x2d4079\x2da5c5\x2d9a4b\x2d6c568c90bcfc.mount: Deactivated successfully. Jul 6 23:52:59.639855 systemd-networkd[1355]: cali2ef5d209f6c: Gained IPv6LL Jul 6 23:52:59.665206 systemd[1]: run-containerd-runc-k8s.io-9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2-runc.c3PAfP.mount: Deactivated successfully. Jul 6 23:52:59.698225 systemd[1]: Started cri-containerd-9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2.scope - libcontainer container 9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2. Jul 6 23:52:59.714097 containerd[1462]: time="2025-07-06T23:52:59.713893186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-w2rcs,Uid:dda5be44-0474-421e-81ed-886448d2d1f0,Namespace:calico-system,Attempt:1,} returns sandbox id \"159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55\"" Jul 6 23:52:59.746082 kubelet[2505]: E0706 23:52:59.745708 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:59.803758 containerd[1462]: time="2025-07-06T23:52:59.803712060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vfg28,Uid:bc3e67e5-3ae0-4db6-94bf-18460361bfcb,Namespace:kube-system,Attempt:1,} returns sandbox id \"9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2\"" Jul 6 23:52:59.811075 kubelet[2505]: I0706 23:52:59.810518 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xwpvc" podStartSLOduration=40.810395294 podStartE2EDuration="40.810395294s" podCreationTimestamp="2025-07-06 23:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:52:59.783089562 +0000 UTC m=+46.589289139" watchObservedRunningTime="2025-07-06 23:52:59.810395294 +0000 UTC m=+46.616594877" Jul 6 23:52:59.813172 kubelet[2505]: E0706 23:52:59.812382 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:52:59.825378 containerd[1462]: time="2025-07-06T23:52:59.824480256Z" level=info msg="CreateContainer within sandbox \"9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.631 [INFO][4665] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.632 [INFO][4665] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" iface="eth0" netns="/var/run/netns/cni-8ef657c8-2d29-5b19-f923-58cc79581a54" Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.632 [INFO][4665] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" iface="eth0" netns="/var/run/netns/cni-8ef657c8-2d29-5b19-f923-58cc79581a54" Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.632 [INFO][4665] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" iface="eth0" netns="/var/run/netns/cni-8ef657c8-2d29-5b19-f923-58cc79581a54" Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.632 [INFO][4665] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.632 [INFO][4665] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.766 [INFO][4752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" HandleID="k8s-pod-network.4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.767 [INFO][4752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.767 [INFO][4752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.797 [WARNING][4752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" HandleID="k8s-pod-network.4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.797 [INFO][4752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" HandleID="k8s-pod-network.4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.812 [INFO][4752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:52:59.832168 containerd[1462]: 2025-07-06 23:52:59.826 [INFO][4665] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:52:59.836110 containerd[1462]: time="2025-07-06T23:52:59.835703376Z" level=info msg="TearDown network for sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\" successfully" Jul 6 23:52:59.836110 containerd[1462]: time="2025-07-06T23:52:59.835738790Z" level=info msg="StopPodSandbox for \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\" returns successfully" Jul 6 23:52:59.839311 systemd[1]: run-netns-cni\x2d8ef657c8\x2d2d29\x2d5b19\x2df923\x2d58cc79581a54.mount: Deactivated successfully. 
Jul 6 23:52:59.851637 containerd[1462]: time="2025-07-06T23:52:59.851596769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf87b6475-95fh9,Uid:dd2dd146-3eba-44dc-85ba-a095b70face3,Namespace:calico-system,Attempt:1,}" Jul 6 23:52:59.856975 containerd[1462]: time="2025-07-06T23:52:59.856810117Z" level=info msg="CreateContainer within sandbox \"9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a3afd2902b4c078aa9324a9cf84541a17b645bed4644ba88806b6861a9e2ea5\"" Jul 6 23:52:59.859669 containerd[1462]: time="2025-07-06T23:52:59.859620029Z" level=info msg="StartContainer for \"4a3afd2902b4c078aa9324a9cf84541a17b645bed4644ba88806b6861a9e2ea5\"" Jul 6 23:52:59.938594 containerd[1462]: time="2025-07-06T23:52:59.938471312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f4bc6d54-zlkgl,Uid:afca6f34-d92a-4a5d-ad63-7c0fa937928f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087\"" Jul 6 23:52:59.953334 systemd[1]: Started cri-containerd-4a3afd2902b4c078aa9324a9cf84541a17b645bed4644ba88806b6861a9e2ea5.scope - libcontainer container 4a3afd2902b4c078aa9324a9cf84541a17b645bed4644ba88806b6861a9e2ea5. Jul 6 23:53:00.033489 containerd[1462]: time="2025-07-06T23:53:00.032901245Z" level=info msg="StartContainer for \"4a3afd2902b4c078aa9324a9cf84541a17b645bed4644ba88806b6861a9e2ea5\" returns successfully" Jul 6 23:53:00.214334 systemd-networkd[1355]: cali07c5e8cac6e: Gained IPv6LL Jul 6 23:53:00.304223 systemd-networkd[1355]: cali8605e2f683b: Link UP Jul 6 23:53:00.305698 systemd-networkd[1355]: cali8605e2f683b: Gained carrier Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.053 [INFO][4790] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0 calico-kube-controllers-cf87b6475- calico-system dd2dd146-3eba-44dc-85ba-a095b70face3 1031 0 2025-07-06 23:52:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cf87b6475 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-d-7537ff12ef calico-kube-controllers-cf87b6475-95fh9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8605e2f683b [] [] }} ContainerID="9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" Namespace="calico-system" Pod="calico-kube-controllers-cf87b6475-95fh9" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.054 [INFO][4790] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" Namespace="calico-system" Pod="calico-kube-controllers-cf87b6475-95fh9" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.172 [INFO][4842] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" HandleID="k8s-pod-network.9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" 
Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.175 [INFO][4842] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" HandleID="k8s-pod-network.9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-d-7537ff12ef", "pod":"calico-kube-controllers-cf87b6475-95fh9", "timestamp":"2025-07-06 23:53:00.172184692 +0000 UTC"}, Hostname:"ci-4081.3.4-d-7537ff12ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.176 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.176 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.176 [INFO][4842] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-d-7537ff12ef' Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.196 [INFO][4842] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.211 [INFO][4842] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.235 [INFO][4842] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.240 [INFO][4842] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.247 [INFO][4842] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.247 [INFO][4842] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.252 [INFO][4842] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.264 [INFO][4842] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.288 [INFO][4842] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.200/26] block=192.168.82.192/26 handle="k8s-pod-network.9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.288 [INFO][4842] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.82.200/26] handle="k8s-pod-network.9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" host="ci-4081.3.4-d-7537ff12ef" Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.288 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:00.341553 containerd[1462]: 2025-07-06 23:53:00.288 [INFO][4842] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.200/26] IPv6=[] ContainerID="9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" HandleID="k8s-pod-network.9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:00.345864 containerd[1462]: 2025-07-06 23:53:00.295 [INFO][4790] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" Namespace="calico-system" Pod="calico-kube-controllers-cf87b6475-95fh9" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0", GenerateName:"calico-kube-controllers-cf87b6475-", Namespace:"calico-system", SelfLink:"", UID:"dd2dd146-3eba-44dc-85ba-a095b70face3", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cf87b6475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"", Pod:"calico-kube-controllers-cf87b6475-95fh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8605e2f683b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:00.345864 containerd[1462]: 2025-07-06 23:53:00.297 [INFO][4790] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.200/32] ContainerID="9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" Namespace="calico-system" Pod="calico-kube-controllers-cf87b6475-95fh9" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:00.345864 containerd[1462]: 2025-07-06 23:53:00.297 [INFO][4790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8605e2f683b ContainerID="9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" Namespace="calico-system" Pod="calico-kube-controllers-cf87b6475-95fh9" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:00.345864 containerd[1462]: 2025-07-06 23:53:00.302 [INFO][4790] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" Namespace="calico-system" Pod="calico-kube-controllers-cf87b6475-95fh9" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:00.345864 containerd[1462]: 2025-07-06 23:53:00.306 [INFO][4790] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" Namespace="calico-system" Pod="calico-kube-controllers-cf87b6475-95fh9" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0", GenerateName:"calico-kube-controllers-cf87b6475-", Namespace:"calico-system", SelfLink:"", UID:"dd2dd146-3eba-44dc-85ba-a095b70face3", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cf87b6475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c", Pod:"calico-kube-controllers-cf87b6475-95fh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8605e2f683b", MAC:"5e:e8:95:56:45:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:00.345864 containerd[1462]: 2025-07-06 23:53:00.335 [INFO][4790] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c" Namespace="calico-system" Pod="calico-kube-controllers-cf87b6475-95fh9" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:00.387047 containerd[1462]: time="2025-07-06T23:53:00.386669478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:53:00.387047 containerd[1462]: time="2025-07-06T23:53:00.386758593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:53:00.387047 containerd[1462]: time="2025-07-06T23:53:00.386903041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:00.390312 containerd[1462]: time="2025-07-06T23:53:00.389427223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:53:00.408366 systemd-networkd[1355]: califbf062a4ac7: Gained IPv6LL Jul 6 23:53:00.448406 systemd[1]: Started cri-containerd-9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c.scope - libcontainer container 9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c. Jul 6 23:53:00.574247 containerd[1462]: time="2025-07-06T23:53:00.574166402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf87b6475-95fh9,Uid:dd2dd146-3eba-44dc-85ba-a095b70face3,Namespace:calico-system,Attempt:1,} returns sandbox id \"9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c\"" Jul 6 23:53:00.600227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921906809.mount: Deactivated successfully. Jul 6 23:53:00.613749 containerd[1462]: time="2025-07-06T23:53:00.613681530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:00.614480 containerd[1462]: time="2025-07-06T23:53:00.614418278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 6 23:53:00.615183 containerd[1462]: time="2025-07-06T23:53:00.615144736Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:00.620232 containerd[1462]: time="2025-07-06T23:53:00.620180753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:00.622227 containerd[1462]: time="2025-07-06T23:53:00.622023482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.511098585s" Jul 6 23:53:00.622227 containerd[1462]: time="2025-07-06T23:53:00.622159902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 6 23:53:00.628964 containerd[1462]: time="2025-07-06T23:53:00.628929025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:53:00.632988 containerd[1462]: time="2025-07-06T23:53:00.632921401Z" level=info msg="CreateContainer within sandbox \"3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:53:00.660276 containerd[1462]: time="2025-07-06T23:53:00.660213433Z" level=info msg="CreateContainer within sandbox \"3c3498e1dc31968e0f169f1cb11a5a1cbe0021b8045c455f94a1d74deb0635a7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7bdce663d23dd42b12094a3a927edc5710d0c3d30a790edac6ea65171b59ec58\"" Jul 6 23:53:00.661313 containerd[1462]: time="2025-07-06T23:53:00.661278615Z" level=info msg="StartContainer for \"7bdce663d23dd42b12094a3a927edc5710d0c3d30a790edac6ea65171b59ec58\"" Jul 6 23:53:00.663222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1426762238.mount: 
Deactivated successfully. Jul 6 23:53:00.664732 systemd-networkd[1355]: calic7eef8ab358: Gained IPv6LL Jul 6 23:53:00.708499 systemd[1]: Started cri-containerd-7bdce663d23dd42b12094a3a927edc5710d0c3d30a790edac6ea65171b59ec58.scope - libcontainer container 7bdce663d23dd42b12094a3a927edc5710d0c3d30a790edac6ea65171b59ec58. Jul 6 23:53:00.786472 kubelet[2505]: E0706 23:53:00.785868 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:00.799487 containerd[1462]: time="2025-07-06T23:53:00.799431045Z" level=info msg="StartContainer for \"7bdce663d23dd42b12094a3a927edc5710d0c3d30a790edac6ea65171b59ec58\" returns successfully" Jul 6 23:53:00.815656 kubelet[2505]: E0706 23:53:00.815607 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:00.851537 kubelet[2505]: I0706 23:53:00.850252 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vfg28" podStartSLOduration=41.850232714 podStartE2EDuration="41.850232714s" podCreationTimestamp="2025-07-06 23:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:53:00.819000131 +0000 UTC m=+47.625199718" watchObservedRunningTime="2025-07-06 23:53:00.850232714 +0000 UTC m=+47.656432294" Jul 6 23:53:01.302433 systemd-networkd[1355]: cali0e7fea581d6: Gained IPv6LL Jul 6 23:53:01.495037 systemd-networkd[1355]: cali8605e2f683b: Gained IPv6LL Jul 6 23:53:01.821726 kubelet[2505]: E0706 23:53:01.821682 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:01.827962 kubelet[2505]: E0706 23:53:01.827784 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:02.324556 containerd[1462]: time="2025-07-06T23:53:02.323650026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:02.324556 containerd[1462]: time="2025-07-06T23:53:02.324515889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 6 23:53:02.325649 containerd[1462]: time="2025-07-06T23:53:02.325527958Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:02.328799 containerd[1462]: time="2025-07-06T23:53:02.328726666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:02.330607 containerd[1462]: time="2025-07-06T23:53:02.330541394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.70139899s" Jul 6 23:53:02.330607 containerd[1462]: time="2025-07-06T23:53:02.330589370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 6 23:53:02.332692 containerd[1462]: time="2025-07-06T23:53:02.332477767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:53:02.339903 containerd[1462]: time="2025-07-06T23:53:02.339837211Z" level=info msg="CreateContainer within sandbox \"165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:53:02.370070 containerd[1462]: time="2025-07-06T23:53:02.369975762Z" level=info msg="CreateContainer within sandbox \"165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0a6ada792aaf33de8ce3f344541c13f7b10b3ed407d94dafc85054cb1cad03eb\"" Jul 6 23:53:02.372989 containerd[1462]: time="2025-07-06T23:53:02.372317376Z" level=info msg="StartContainer for \"0a6ada792aaf33de8ce3f344541c13f7b10b3ed407d94dafc85054cb1cad03eb\"" Jul 6 23:53:02.442704 systemd[1]: Started cri-containerd-0a6ada792aaf33de8ce3f344541c13f7b10b3ed407d94dafc85054cb1cad03eb.scope - libcontainer container 0a6ada792aaf33de8ce3f344541c13f7b10b3ed407d94dafc85054cb1cad03eb. Jul 6 23:53:02.502527 containerd[1462]: time="2025-07-06T23:53:02.502481419Z" level=info msg="StartContainer for \"0a6ada792aaf33de8ce3f344541c13f7b10b3ed407d94dafc85054cb1cad03eb\" returns successfully" Jul 6 23:53:02.833125 kubelet[2505]: E0706 23:53:02.832752 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 6 23:53:05.344961 containerd[1462]: time="2025-07-06T23:53:05.344310166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:05.346711 containerd[1462]: time="2025-07-06T23:53:05.346560261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 6 23:53:05.347826 containerd[1462]: time="2025-07-06T23:53:05.347510365Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:05.351805 containerd[1462]: time="2025-07-06T23:53:05.351727573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:05.354677 containerd[1462]: time="2025-07-06T23:53:05.352701158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.020176647s" Jul 6 23:53:05.354677 containerd[1462]: time="2025-07-06T23:53:05.352748873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image 
reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:53:05.374507 containerd[1462]: time="2025-07-06T23:53:05.373946377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:53:05.378176 containerd[1462]: time="2025-07-06T23:53:05.378050611Z" level=info msg="CreateContainer within sandbox \"525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:53:05.407295 containerd[1462]: time="2025-07-06T23:53:05.407246103Z" level=info msg="CreateContainer within sandbox \"525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eb441c4afe91d199c89194aefc2fc773a55913cb880558acf5f5ed2169077ff0\"" Jul 6 23:53:05.409974 containerd[1462]: time="2025-07-06T23:53:05.409929081Z" level=info msg="StartContainer for \"eb441c4afe91d199c89194aefc2fc773a55913cb880558acf5f5ed2169077ff0\"" Jul 6 23:53:05.499290 systemd[1]: Started cri-containerd-eb441c4afe91d199c89194aefc2fc773a55913cb880558acf5f5ed2169077ff0.scope - libcontainer container eb441c4afe91d199c89194aefc2fc773a55913cb880558acf5f5ed2169077ff0. Jul 6 23:53:05.587465 containerd[1462]: time="2025-07-06T23:53:05.587392310Z" level=info msg="StartContainer for \"eb441c4afe91d199c89194aefc2fc773a55913cb880558acf5f5ed2169077ff0\" returns successfully" Jul 6 23:53:05.860643 kubelet[2505]: I0706 23:53:05.860553 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-769f4b899f-n7zhk" podStartSLOduration=6.784552948 podStartE2EDuration="12.860527319s" podCreationTimestamp="2025-07-06 23:52:53 +0000 UTC" firstStartedPulling="2025-07-06 23:52:54.552493624 +0000 UTC m=+41.358693185" lastFinishedPulling="2025-07-06 23:53:00.628467974 +0000 UTC m=+47.434667556" observedRunningTime="2025-07-06 23:53:01.851314323 +0000 UTC m=+48.657513906" watchObservedRunningTime="2025-07-06 23:53:05.860527319 +0000 UTC m=+52.666726898" Jul 6 23:53:07.308991 kubelet[2505]: I0706 23:53:07.308898 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55f4bc6d54-f2gb7" podStartSLOduration=30.744390423 podStartE2EDuration="37.30883795s" podCreationTimestamp="2025-07-06 23:52:30 +0000 UTC" firstStartedPulling="2025-07-06 23:52:58.805255767 +0000 UTC m=+45.611455329" lastFinishedPulling="2025-07-06 23:53:05.369703281 +0000 UTC m=+52.175902856" observedRunningTime="2025-07-06 23:53:05.862529415 +0000 UTC m=+52.668729024" watchObservedRunningTime="2025-07-06 23:53:07.30883795 +0000 UTC m=+54.115037535" Jul 6 23:53:07.937588 systemd[1]: Started sshd@7-134.199.239.131:22-139.178.89.65:33712.service - OpenSSH per-connection server daemon (139.178.89.65:33712). Jul 6 23:53:08.060574 sshd[5053]: Accepted publickey for core from 139.178.89.65 port 33712 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:08.063476 sshd[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:08.072817 systemd-logind[1441]: New session 8 of user core. Jul 6 23:53:08.079306 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:53:08.959164 sshd[5053]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:08.973563 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. 
Jul 6 23:53:08.974406 systemd[1]: sshd@7-134.199.239.131:22-139.178.89.65:33712.service: Deactivated successfully. Jul 6 23:53:08.978171 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:53:08.982794 systemd-logind[1441]: Removed session 8. Jul 6 23:53:09.723020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562527827.mount: Deactivated successfully. Jul 6 23:53:10.372823 containerd[1462]: time="2025-07-06T23:53:10.372763863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:10.431020 containerd[1462]: time="2025-07-06T23:53:10.430460060Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:10.448556 containerd[1462]: time="2025-07-06T23:53:10.432985600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 6 23:53:10.461429 containerd[1462]: time="2025-07-06T23:53:10.461377751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:10.463836 containerd[1462]: time="2025-07-06T23:53:10.463695067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.087843468s" Jul 6 23:53:10.465101 containerd[1462]: time="2025-07-06T23:53:10.464024920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 6 23:53:10.468882 containerd[1462]: time="2025-07-06T23:53:10.468633888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:53:10.520490 containerd[1462]: time="2025-07-06T23:53:10.520437446Z" level=info msg="CreateContainer within sandbox \"159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:53:10.575753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1832830240.mount: Deactivated successfully. Jul 6 23:53:10.594449 containerd[1462]: time="2025-07-06T23:53:10.594266758Z" level=info msg="CreateContainer within sandbox \"159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"73109097a8ad04c2bb61a3213891f183402807ef41f7232bc69ea5dd1718b882\"" Jul 6 23:53:10.595321 containerd[1462]: time="2025-07-06T23:53:10.595292667Z" level=info msg="StartContainer for \"73109097a8ad04c2bb61a3213891f183402807ef41f7232bc69ea5dd1718b882\"" Jul 6 23:53:10.804672 systemd[1]: Started cri-containerd-73109097a8ad04c2bb61a3213891f183402807ef41f7232bc69ea5dd1718b882.scope - libcontainer container 73109097a8ad04c2bb61a3213891f183402807ef41f7232bc69ea5dd1718b882. 
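Treating "bytes read=66352308" from the goldmane pull above as the approximate transfer size gives a rough rate for the 5.087843468s pull. This is only an estimate; the counter reported at "stop pulling" may not reflect every byte moved on the wire:

    # Rough transfer-rate estimate for the goldmane image pull above.
    # Assumption: "bytes read=66352308" approximates the bytes fetched.
    bytes_read = 66_352_308
    seconds = 5.087843468     # from "Pulled image ... in 5.087843468s"
    print(f"{bytes_read / seconds / 1_048_576:.1f} MiB/s")   # ~12.4 MiB/s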
Jul 6 23:53:10.874889 containerd[1462]: time="2025-07-06T23:53:10.874749354Z" level=info msg="StartContainer for \"73109097a8ad04c2bb61a3213891f183402807ef41f7232bc69ea5dd1718b882\" returns successfully" Jul 6 23:53:10.996546 containerd[1462]: time="2025-07-06T23:53:10.996489643Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:11.000592 containerd[1462]: time="2025-07-06T23:53:11.000527278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:53:11.010365 containerd[1462]: time="2025-07-06T23:53:11.007476461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 538.797749ms" Jul 6 23:53:11.010365 containerd[1462]: time="2025-07-06T23:53:11.007604768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:53:11.014535 containerd[1462]: time="2025-07-06T23:53:11.013273300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:53:11.019578 containerd[1462]: time="2025-07-06T23:53:11.016388637Z" level=info msg="CreateContainer within sandbox \"ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:53:11.049094 kubelet[2505]: I0706 23:53:11.048745 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-w2rcs" podStartSLOduration=27.302215872 podStartE2EDuration="38.048714591s" podCreationTimestamp="2025-07-06 23:52:33 +0000 UTC" firstStartedPulling="2025-07-06 23:52:59.718908716 +0000 UTC m=+46.525108291" lastFinishedPulling="2025-07-06 23:53:10.46540743 +0000 UTC m=+57.271607010" observedRunningTime="2025-07-06 23:53:11.011023546 +0000 UTC m=+57.817223134" watchObservedRunningTime="2025-07-06 23:53:11.048714591 +0000 UTC m=+57.854914173" Jul 6 23:53:11.049633 containerd[1462]: time="2025-07-06T23:53:11.049071858Z" level=info msg="CreateContainer within sandbox \"ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a4c19e170d83f9a13b9965bdd3b58b35a40e7e089499a8a7f309ee33460c2cf5\"" Jul 6 23:53:11.051829 containerd[1462]: time="2025-07-06T23:53:11.050454282Z" level=info msg="StartContainer for \"a4c19e170d83f9a13b9965bdd3b58b35a40e7e089499a8a7f309ee33460c2cf5\"" Jul 6 23:53:11.090699 systemd[1]: Started cri-containerd-a4c19e170d83f9a13b9965bdd3b58b35a40e7e089499a8a7f309ee33460c2cf5.scope - libcontainer container a4c19e170d83f9a13b9965bdd3b58b35a40e7e089499a8a7f309ee33460c2cf5. Jul 6 23:53:11.155366 containerd[1462]: time="2025-07-06T23:53:11.155322518Z" level=info msg="StartContainer for \"a4c19e170d83f9a13b9965bdd3b58b35a40e7e089499a8a7f309ee33460c2cf5\" returns successfully" Jul 6 23:53:11.573679 systemd[1]: run-containerd-runc-k8s.io-73109097a8ad04c2bb61a3213891f183402807ef41f7232bc69ea5dd1718b882-runc.auu6yL.mount: Deactivated successfully. 
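The kubelet pod_startup_latency_tracker entries in this section are internally consistent if podStartSLOduration is read as the end-to-end startup time minus the image-pull window (an inference from the numbers, not a statement about kubelet's source). Checking the goldmane pod with the monotonic "m=+" offsets:

    # Sanity check of the goldmane-768f4c5c69-w2rcs numbers logged above.
    # Inferred relation: podStartSLOduration =
    #     podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
    pull_window = 57.271607010 - 46.525108291   # m=+ offsets of the two pull timestamps
    e2e = 38.048714591                          # podStartE2EDuration from the log
    print(f"{e2e - pull_window:.9f}")           # 27.302215872, the logged podStartSLOduration

The whisker-769f4b899f-n7zhk entry further up checks out the same way (12.860527319 - 6.075974371 = 6.784552948).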
Jul 6 23:53:12.981628 kubelet[2505]: I0706 23:53:12.981544 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55f4bc6d54-zlkgl" podStartSLOduration=31.920461704 podStartE2EDuration="42.981506081s" podCreationTimestamp="2025-07-06 23:52:30 +0000 UTC" firstStartedPulling="2025-07-06 23:52:59.950835423 +0000 UTC m=+46.757034985" lastFinishedPulling="2025-07-06 23:53:11.0118798 +0000 UTC m=+57.818079362" observedRunningTime="2025-07-06 23:53:11.919650671 +0000 UTC m=+58.725850254" watchObservedRunningTime="2025-07-06 23:53:12.981506081 +0000 UTC m=+59.787705662" Jul 6 23:53:13.981654 systemd[1]: Started sshd@8-134.199.239.131:22-139.178.89.65:45818.service - OpenSSH per-connection server daemon (139.178.89.65:45818). Jul 6 23:53:14.133541 containerd[1462]: time="2025-07-06T23:53:14.132837683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:14.137726 containerd[1462]: time="2025-07-06T23:53:14.136226144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 6 23:53:14.142163 containerd[1462]: time="2025-07-06T23:53:14.140155234Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:14.142320 containerd[1462]: time="2025-07-06T23:53:14.142199365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:14.143860 containerd[1462]: time="2025-07-06T23:53:14.143472752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.130160736s" Jul 6 23:53:14.143860 containerd[1462]: time="2025-07-06T23:53:14.143511908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 6 23:53:14.252415 sshd[5234]: Accepted publickey for core from 139.178.89.65 port 45818 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:53:14.255813 sshd[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:14.273468 containerd[1462]: time="2025-07-06T23:53:14.271384670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:53:14.271724 systemd-logind[1441]: New session 9 of user core. Jul 6 23:53:14.277375 systemd[1]: Started session-9.scope - Session 9 of User core. 
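The kubelet lines use klog's native header: a severity letter (I/W/E/F), month and day, wall-clock time, the emitting PID (2505, matching the kubelet[2505] journald tag), then source file and line. A small parser sketch; the regex and field names here are mine, not a kubelet API:

    import re

    # klog header layout: L mmdd hh:mm:ss.uuuuuu pid file:line] message
    KLOG = re.compile(
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +"
        r"(?P<pid>\d+) (?P<src>[\w.-]+:\d+)\] (?P<msg>.*)"
    )

    line = ('I0706 23:53:12.981544 2505 pod_startup_latency_tracker.go:104] '
            '"Observed pod startup duration" '
            'pod="calico-apiserver/calico-apiserver-55f4bc6d54-zlkgl"')
    m = KLOG.match(line)
    print(m.group("sev"), m.group("time"), m.group("src"))
    # I 23:53:12.981544 pod_startup_latency_tracker.go:104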
Jul 6 23:53:14.300001 containerd[1462]: time="2025-07-06T23:53:14.299957847Z" level=info msg="StopPodSandbox for \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\"" Jul 6 23:53:14.390206 containerd[1462]: time="2025-07-06T23:53:14.389628542Z" level=info msg="CreateContainer within sandbox \"9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:53:14.535318 containerd[1462]: time="2025-07-06T23:53:14.534699806Z" level=info msg="CreateContainer within sandbox \"9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5ebf165eabf6eb7b270e38aa6f6d398bf8884027722bb7bcaef060307b029871\"" Jul 6 23:53:14.566178 containerd[1462]: time="2025-07-06T23:53:14.565188360Z" level=info msg="StartContainer for \"5ebf165eabf6eb7b270e38aa6f6d398bf8884027722bb7bcaef060307b029871\"" Jul 6 23:53:14.776126 systemd[1]: Started cri-containerd-5ebf165eabf6eb7b270e38aa6f6d398bf8884027722bb7bcaef060307b029871.scope - libcontainer container 5ebf165eabf6eb7b270e38aa6f6d398bf8884027722bb7bcaef060307b029871. Jul 6 23:53:14.955941 containerd[1462]: time="2025-07-06T23:53:14.955450474Z" level=info msg="StartContainer for \"5ebf165eabf6eb7b270e38aa6f6d398bf8884027722bb7bcaef060307b029871\" returns successfully" Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:14.805 [WARNING][5254] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"dda5be44-0474-421e-81ed-886448d2d1f0", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55", Pod:"goldmane-768f4c5c69-w2rcs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbf062a4ac7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:14.814 [INFO][5254] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:14.814 [INFO][5254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" iface="eth0" netns="" Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:14.814 [INFO][5254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:14.814 [INFO][5254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:15.299 [INFO][5288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" HandleID="k8s-pod-network.afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:15.324 [INFO][5288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:15.327 [INFO][5288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:15.371 [WARNING][5288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" HandleID="k8s-pod-network.afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:15.372 [INFO][5288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" HandleID="k8s-pod-network.afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:15.383 [INFO][5288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:15.418112 containerd[1462]: 2025-07-06 23:53:15.396 [INFO][5254] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Jul 6 23:53:15.418112 containerd[1462]: time="2025-07-06T23:53:15.417684450Z" level=info msg="TearDown network for sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\" successfully" Jul 6 23:53:15.418112 containerd[1462]: time="2025-07-06T23:53:15.417710218Z" level=info msg="StopPodSandbox for \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\" returns successfully" Jul 6 23:53:15.656148 containerd[1462]: time="2025-07-06T23:53:15.655853688Z" level=info msg="RemovePodSandbox for \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\"" Jul 6 23:53:15.664668 containerd[1462]: time="2025-07-06T23:53:15.663122893Z" level=info msg="Forcibly stopping sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\"" Jul 6 23:53:15.706753 sshd[5234]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:15.713308 systemd[1]: sshd@8-134.199.239.131:22-139.178.89.65:45818.service: Deactivated successfully. Jul 6 23:53:15.717814 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:53:15.723952 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:53:15.726579 systemd-logind[1441]: Removed session 9. 
Jul 6 23:53:15.765295 systemd[1]: run-containerd-runc-k8s.io-5ebf165eabf6eb7b270e38aa6f6d398bf8884027722bb7bcaef060307b029871-runc.LcOZsg.mount: Deactivated successfully. Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.800 [WARNING][5331] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"dda5be44-0474-421e-81ed-886448d2d1f0", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"159d5894facd6bd8ed37afba8d11f57ba7caca781341fa6cdaf46eb7078a7b55", Pod:"goldmane-768f4c5c69-w2rcs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbf062a4ac7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.800 [INFO][5331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.800 [INFO][5331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" iface="eth0" netns="" Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.800 [INFO][5331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.800 [INFO][5331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.848 [INFO][5357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" HandleID="k8s-pod-network.afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.851 [INFO][5357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.851 [INFO][5357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.863 [WARNING][5357] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" HandleID="k8s-pod-network.afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.863 [INFO][5357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" HandleID="k8s-pod-network.afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Workload="ci--4081.3.4--d--7537ff12ef-k8s-goldmane--768f4c5c69--w2rcs-eth0" Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.867 [INFO][5357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:15.878883 containerd[1462]: 2025-07-06 23:53:15.872 [INFO][5331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d" Jul 6 23:53:15.882858 containerd[1462]: time="2025-07-06T23:53:15.878924155Z" level=info msg="TearDown network for sandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\" successfully" Jul 6 23:53:15.895583 kubelet[2505]: I0706 23:53:15.893861 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cf87b6475-95fh9" podStartSLOduration=28.175448064 podStartE2EDuration="41.841190729s" podCreationTimestamp="2025-07-06 23:52:34 +0000 UTC" firstStartedPulling="2025-07-06 23:53:00.577173451 +0000 UTC m=+47.383373012" lastFinishedPulling="2025-07-06 23:53:14.242916101 +0000 UTC m=+61.049115677" observedRunningTime="2025-07-06 23:53:15.675966304 +0000 UTC m=+62.482165886" watchObservedRunningTime="2025-07-06 23:53:15.841190729 +0000 UTC m=+62.647390341" Jul 6 23:53:15.900239 containerd[1462]: time="2025-07-06T23:53:15.900172843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:15.924425 containerd[1462]: time="2025-07-06T23:53:15.924357852Z" level=info msg="RemovePodSandbox \"afe2b8c8bf70d9648612e77127e26daa9c8c7289fca304be6eabd0e152be748d\" returns successfully" Jul 6 23:53:15.929722 containerd[1462]: time="2025-07-06T23:53:15.929357558Z" level=info msg="StopPodSandbox for \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\"" Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:15.991 [WARNING][5375] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0", GenerateName:"calico-kube-controllers-cf87b6475-", Namespace:"calico-system", SelfLink:"", UID:"dd2dd146-3eba-44dc-85ba-a095b70face3", ResourceVersion:"1212", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cf87b6475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c", Pod:"calico-kube-controllers-cf87b6475-95fh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8605e2f683b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:15.992 [INFO][5375] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:15.992 [INFO][5375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" iface="eth0" netns="" Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:15.992 [INFO][5375] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:15.992 [INFO][5375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:16.019 [INFO][5382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" HandleID="k8s-pod-network.4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:16.019 [INFO][5382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:16.019 [INFO][5382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:16.030 [WARNING][5382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" HandleID="k8s-pod-network.4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:16.031 [INFO][5382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" HandleID="k8s-pod-network.4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:16.033 [INFO][5382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:16.038457 containerd[1462]: 2025-07-06 23:53:16.035 [INFO][5375] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:53:16.040511 containerd[1462]: time="2025-07-06T23:53:16.039370560Z" level=info msg="TearDown network for sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\" successfully" Jul 6 23:53:16.040511 containerd[1462]: time="2025-07-06T23:53:16.039425017Z" level=info msg="StopPodSandbox for \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\" returns successfully" Jul 6 23:53:16.040875 containerd[1462]: time="2025-07-06T23:53:16.040850375Z" level=info msg="RemovePodSandbox for \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\"" Jul 6 23:53:16.040966 containerd[1462]: time="2025-07-06T23:53:16.040936989Z" level=info msg="Forcibly stopping sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\"" Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.093 [WARNING][5396] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0", GenerateName:"calico-kube-controllers-cf87b6475-", Namespace:"calico-system", SelfLink:"", UID:"dd2dd146-3eba-44dc-85ba-a095b70face3", ResourceVersion:"1212", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cf87b6475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"9eb05b1a17fca6be8432eab0d92d53281cdd9e70bdc51b13ed793e515fe12b7c", Pod:"calico-kube-controllers-cf87b6475-95fh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8605e2f683b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.093 [INFO][5396] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.093 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" iface="eth0" netns="" Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.093 [INFO][5396] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.093 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.124 [INFO][5403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" HandleID="k8s-pod-network.4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.124 [INFO][5403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.124 [INFO][5403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.132 [WARNING][5403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" HandleID="k8s-pod-network.4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.132 [INFO][5403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" HandleID="k8s-pod-network.4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--kube--controllers--cf87b6475--95fh9-eth0" Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.135 [INFO][5403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:16.139470 containerd[1462]: 2025-07-06 23:53:16.137 [INFO][5396] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2" Jul 6 23:53:16.142513 containerd[1462]: time="2025-07-06T23:53:16.139687927Z" level=info msg="TearDown network for sandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\" successfully" Jul 6 23:53:16.149592 containerd[1462]: time="2025-07-06T23:53:16.149500858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:16.149812 containerd[1462]: time="2025-07-06T23:53:16.149597812Z" level=info msg="RemovePodSandbox \"4ca5f01eb5ef55a58adfacfe8f2c4963eaa9a0321f34498fd6de6915fc25abc2\" returns successfully" Jul 6 23:53:16.150235 containerd[1462]: time="2025-07-06T23:53:16.150210534Z" level=info msg="StopPodSandbox for \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\"" Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.209 [WARNING][5417] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0", GenerateName:"calico-apiserver-55f4bc6d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"afca6f34-d92a-4a5d-ad63-7c0fa937928f", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f4bc6d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087", Pod:"calico-apiserver-55f4bc6d54-zlkgl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7eef8ab358", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.209 [INFO][5417] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.209 [INFO][5417] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" iface="eth0" netns="" Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.209 [INFO][5417] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.209 [INFO][5417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.239 [INFO][5424] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" HandleID="k8s-pod-network.bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.239 [INFO][5424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.239 [INFO][5424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.247 [WARNING][5424] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" HandleID="k8s-pod-network.bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.247 [INFO][5424] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" HandleID="k8s-pod-network.bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.249 [INFO][5424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:16.255472 containerd[1462]: 2025-07-06 23:53:16.252 [INFO][5417] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:53:16.255472 containerd[1462]: time="2025-07-06T23:53:16.255293606Z" level=info msg="TearDown network for sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\" successfully" Jul 6 23:53:16.255472 containerd[1462]: time="2025-07-06T23:53:16.255342126Z" level=info msg="StopPodSandbox for \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\" returns successfully" Jul 6 23:53:16.262298 containerd[1462]: time="2025-07-06T23:53:16.262255253Z" level=info msg="RemovePodSandbox for \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\"" Jul 6 23:53:16.262298 containerd[1462]: time="2025-07-06T23:53:16.262297342Z" level=info msg="Forcibly stopping sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\"" Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.322 [WARNING][5438] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0", GenerateName:"calico-apiserver-55f4bc6d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"afca6f34-d92a-4a5d-ad63-7c0fa937928f", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f4bc6d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"ffcbda00f2a3e55030ec76242f92edcb37bc90eaf7d85f9a1cee75ebde3bc087", Pod:"calico-apiserver-55f4bc6d54-zlkgl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7eef8ab358", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.324 [INFO][5438] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.324 [INFO][5438] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" iface="eth0" netns="" Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.324 [INFO][5438] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.324 [INFO][5438] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.358 [INFO][5446] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" HandleID="k8s-pod-network.bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.358 [INFO][5446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.358 [INFO][5446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.367 [WARNING][5446] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" HandleID="k8s-pod-network.bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.367 [INFO][5446] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" HandleID="k8s-pod-network.bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--zlkgl-eth0" Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.369 [INFO][5446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:16.379750 containerd[1462]: 2025-07-06 23:53:16.373 [INFO][5438] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9" Jul 6 23:53:16.379750 containerd[1462]: time="2025-07-06T23:53:16.379663028Z" level=info msg="TearDown network for sandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\" successfully" Jul 6 23:53:16.384184 containerd[1462]: time="2025-07-06T23:53:16.383052833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:16.384184 containerd[1462]: time="2025-07-06T23:53:16.383188347Z" level=info msg="RemovePodSandbox \"bf0e5daf92c8402a1df914b034c9e16692dffa09dcf71742307f8bd705e50aa9\" returns successfully" Jul 6 23:53:16.384184 containerd[1462]: time="2025-07-06T23:53:16.383715871Z" level=info msg="StopPodSandbox for \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\"" Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.457 [WARNING][5460] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f74bf16c-5f59-4774-ac59-c82b3e42ab4b", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8", Pod:"csi-node-driver-fvz6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5244bb283cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.457 [INFO][5460] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.457 [INFO][5460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" iface="eth0" netns="" Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.457 [INFO][5460] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.457 [INFO][5460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.510 [INFO][5467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" HandleID="k8s-pod-network.26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0" Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.510 [INFO][5467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.510 [INFO][5467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.522 [WARNING][5467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" HandleID="k8s-pod-network.26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0" Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.522 [INFO][5467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" HandleID="k8s-pod-network.26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0" Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.529 [INFO][5467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:16.547645 containerd[1462]: 2025-07-06 23:53:16.538 [INFO][5460] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Jul 6 23:53:16.549639 containerd[1462]: time="2025-07-06T23:53:16.547675556Z" level=info msg="TearDown network for sandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\" successfully" Jul 6 23:53:16.549639 containerd[1462]: time="2025-07-06T23:53:16.547699904Z" level=info msg="StopPodSandbox for \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\" returns successfully" Jul 6 23:53:16.549639 containerd[1462]: time="2025-07-06T23:53:16.548788706Z" level=info msg="RemovePodSandbox for \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\"" Jul 6 23:53:16.549639 containerd[1462]: time="2025-07-06T23:53:16.548815222Z" level=info msg="Forcibly stopping sandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\"" Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.628 [WARNING][5485] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f74bf16c-5f59-4774-ac59-c82b3e42ab4b", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8", Pod:"csi-node-driver-fvz6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5244bb283cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.628 [INFO][5485] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.628 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" iface="eth0" netns="" Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.628 [INFO][5485] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.629 [INFO][5485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.673 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" HandleID="k8s-pod-network.26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0" Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.673 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.674 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.688 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" HandleID="k8s-pod-network.26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0" Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.688 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" HandleID="k8s-pod-network.26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Workload="ci--4081.3.4--d--7537ff12ef-k8s-csi--node--driver--fvz6x-eth0" Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.699 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:16.707954 containerd[1462]: 2025-07-06 23:53:16.702 [INFO][5485] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0" Jul 6 23:53:16.707954 containerd[1462]: time="2025-07-06T23:53:16.707902496Z" level=info msg="TearDown network for sandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\" successfully" Jul 6 23:53:16.712375 containerd[1462]: time="2025-07-06T23:53:16.712126721Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:16.712375 containerd[1462]: time="2025-07-06T23:53:16.712243465Z" level=info msg="RemovePodSandbox \"26f7aa1f5ec269917c58c281e032427bbbd108d45970c6b490946c754c9b1ed0\" returns successfully" Jul 6 23:53:16.712924 containerd[1462]: time="2025-07-06T23:53:16.712700477Z" level=info msg="StopPodSandbox for \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\"" Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.772 [WARNING][5506] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-whisker--6744cfb56f--t5f4t-eth0" Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.772 [INFO][5506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.772 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" iface="eth0" netns="" Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.772 [INFO][5506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.772 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.810 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" HandleID="k8s-pod-network.c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--6744cfb56f--t5f4t-eth0" Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.810 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.810 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.825 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" HandleID="k8s-pod-network.c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--6744cfb56f--t5f4t-eth0" Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.825 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" HandleID="k8s-pod-network.c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--6744cfb56f--t5f4t-eth0" Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.828 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:16.836330 containerd[1462]: 2025-07-06 23:53:16.833 [INFO][5506] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:53:16.837000 containerd[1462]: time="2025-07-06T23:53:16.836383060Z" level=info msg="TearDown network for sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\" successfully" Jul 6 23:53:16.837000 containerd[1462]: time="2025-07-06T23:53:16.836407221Z" level=info msg="StopPodSandbox for \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\" returns successfully" Jul 6 23:53:16.837906 containerd[1462]: time="2025-07-06T23:53:16.837849080Z" level=info msg="RemovePodSandbox for \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\"" Jul 6 23:53:16.837906 containerd[1462]: time="2025-07-06T23:53:16.837885412Z" level=info msg="Forcibly stopping sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\"" Jul 6 23:53:16.959918 containerd[1462]: time="2025-07-06T23:53:16.959457422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:16.963474 containerd[1462]: time="2025-07-06T23:53:16.963414063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 6 23:53:16.965236 containerd[1462]: time="2025-07-06T23:53:16.965199777Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:16.967942 containerd[1462]: time="2025-07-06T23:53:16.967896520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:16.971756 containerd[1462]: time="2025-07-06T23:53:16.971715988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.700287346s" Jul 6 23:53:16.971756 containerd[1462]: time="2025-07-06T23:53:16.971756520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:16.925 [WARNING][5528] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" WorkloadEndpoint="ci--4081.3.4--d--7537ff12ef-k8s-whisker--6744cfb56f--t5f4t-eth0" Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:16.925 [INFO][5528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:16.925 [INFO][5528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" iface="eth0" netns="" Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:16.925 [INFO][5528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:16.925 [INFO][5528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:16.987 [INFO][5536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" HandleID="k8s-pod-network.c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--6744cfb56f--t5f4t-eth0" Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:16.987 [INFO][5536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:16.987 [INFO][5536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:17.001 [WARNING][5536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" HandleID="k8s-pod-network.c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--6744cfb56f--t5f4t-eth0" Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:17.001 [INFO][5536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" HandleID="k8s-pod-network.c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-whisker--6744cfb56f--t5f4t-eth0" Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:17.007 [INFO][5536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:17.014616 containerd[1462]: 2025-07-06 23:53:17.010 [INFO][5528] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c" Jul 6 23:53:17.015249 containerd[1462]: time="2025-07-06T23:53:17.015215064Z" level=info msg="TearDown network for sandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\" successfully" Jul 6 23:53:17.025220 containerd[1462]: time="2025-07-06T23:53:17.025164808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:53:17.025472 containerd[1462]: time="2025-07-06T23:53:17.025451582Z" level=info msg="RemovePodSandbox \"c6cbd0e9561420488fa9685a367ae9a12343adaa05574e401fe14bd3948c4d5c\" returns successfully" Jul 6 23:53:17.026763 containerd[1462]: time="2025-07-06T23:53:17.026723971Z" level=info msg="StopPodSandbox for \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\"" Jul 6 23:53:17.190720 containerd[1462]: time="2025-07-06T23:53:17.190659559Z" level=info msg="CreateContainer within sandbox \"165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.116 [WARNING][5550] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"47d0f9e3-1b02-4028-8726-1149e0c33163", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693", Pod:"coredns-674b8bbfcf-xwpvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ef5d209f6c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.116 [INFO][5550] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.116 [INFO][5550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" iface="eth0" netns="" Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.116 [INFO][5550] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.116 [INFO][5550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.180 [INFO][5557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" HandleID="k8s-pod-network.ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0" Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.180 [INFO][5557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.180 [INFO][5557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.189 [WARNING][5557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" HandleID="k8s-pod-network.ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0" Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.189 [INFO][5557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" HandleID="k8s-pod-network.ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0" Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.193 [INFO][5557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:17.201013 containerd[1462]: 2025-07-06 23:53:17.197 [INFO][5550] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Jul 6 23:53:17.203919 containerd[1462]: time="2025-07-06T23:53:17.201450895Z" level=info msg="TearDown network for sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\" successfully" Jul 6 23:53:17.203919 containerd[1462]: time="2025-07-06T23:53:17.201487637Z" level=info msg="StopPodSandbox for \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\" returns successfully" Jul 6 23:53:17.203919 containerd[1462]: time="2025-07-06T23:53:17.203414475Z" level=info msg="RemovePodSandbox for \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\"" Jul 6 23:53:17.203919 containerd[1462]: time="2025-07-06T23:53:17.203473395Z" level=info msg="Forcibly stopping sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\"" Jul 6 23:53:17.339491 containerd[1462]: time="2025-07-06T23:53:17.339447403Z" level=info msg="CreateContainer within sandbox \"165699649b5780f9fd414f6b1c624138ba8212819298b0eed136ae23f458ddf8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"47ca8a74ff52a7c13a4e94523b05f6ceb757d407d6712a9b42c837a6dab965bc\"" Jul 6 23:53:17.350777 containerd[1462]: time="2025-07-06T23:53:17.350733071Z" level=info msg="StartContainer for \"47ca8a74ff52a7c13a4e94523b05f6ceb757d407d6712a9b42c837a6dab965bc\"" Jul 6 23:53:17.485428 systemd[1]: run-containerd-runc-k8s.io-47ca8a74ff52a7c13a4e94523b05f6ceb757d407d6712a9b42c837a6dab965bc-runc.oif5Qw.mount: Deactivated successfully. Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.340 [WARNING][5571] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"47d0f9e3-1b02-4028-8726-1149e0c33163", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"9deca17612d8cb59a8cdbdb8d59339de88c1db1a824acdb2995eac879af1b693", Pod:"coredns-674b8bbfcf-xwpvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ef5d209f6c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.340 [INFO][5571] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.340 [INFO][5571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" iface="eth0" netns="" Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.340 [INFO][5571] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.340 [INFO][5571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.441 [INFO][5579] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" HandleID="k8s-pod-network.ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0" Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.442 [INFO][5579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.442 [INFO][5579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.492 [WARNING][5579] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" HandleID="k8s-pod-network.ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0" Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.492 [INFO][5579] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" HandleID="k8s-pod-network.ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--xwpvc-eth0" Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.496 [INFO][5579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:17.506215 containerd[1462]: 2025-07-06 23:53:17.500 [INFO][5571] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c" Jul 6 23:53:17.506215 containerd[1462]: time="2025-07-06T23:53:17.505431479Z" level=info msg="TearDown network for sandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\" successfully" Jul 6 23:53:17.513272 systemd[1]: Started cri-containerd-47ca8a74ff52a7c13a4e94523b05f6ceb757d407d6712a9b42c837a6dab965bc.scope - libcontainer container 47ca8a74ff52a7c13a4e94523b05f6ceb757d407d6712a9b42c837a6dab965bc. Jul 6 23:53:17.516292 containerd[1462]: time="2025-07-06T23:53:17.515820543Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Jul 6 23:53:17.516292 containerd[1462]: time="2025-07-06T23:53:17.515911070Z" level=info msg="RemovePodSandbox \"ead14b59f4bd061259ac3bfc1b22ea3260b697e37d24f44511ef2d38e39c669c\" returns successfully" Jul 6 23:53:17.521734 containerd[1462]: time="2025-07-06T23:53:17.521693337Z" level=info msg="StopPodSandbox for \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\"" Jul 6 23:53:17.631280 containerd[1462]: time="2025-07-06T23:53:17.631167099Z" level=info msg="StartContainer for \"47ca8a74ff52a7c13a4e94523b05f6ceb757d407d6712a9b42c837a6dab965bc\" returns successfully" Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.645 [WARNING][5617] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0", GenerateName:"calico-apiserver-55f4bc6d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"70ac0620-95a8-4173-8948-facb2c4a4406", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f4bc6d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581", Pod:"calico-apiserver-55f4bc6d54-f2gb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07c5e8cac6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.646 [INFO][5617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.646 [INFO][5617] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" iface="eth0" netns="" Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.646 [INFO][5617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.646 [INFO][5617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.684 [INFO][5635] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" HandleID="k8s-pod-network.63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0" Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.685 [INFO][5635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.685 [INFO][5635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.692 [WARNING][5635] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" HandleID="k8s-pod-network.63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0" Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.693 [INFO][5635] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" HandleID="k8s-pod-network.63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0" Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.694 [INFO][5635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:17.699242 containerd[1462]: 2025-07-06 23:53:17.696 [INFO][5617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Jul 6 23:53:17.700663 containerd[1462]: time="2025-07-06T23:53:17.699286028Z" level=info msg="TearDown network for sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\" successfully" Jul 6 23:53:17.700663 containerd[1462]: time="2025-07-06T23:53:17.699315904Z" level=info msg="StopPodSandbox for \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\" returns successfully" Jul 6 23:53:17.700663 containerd[1462]: time="2025-07-06T23:53:17.699969513Z" level=info msg="RemovePodSandbox for \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\"" Jul 6 23:53:17.700663 containerd[1462]: time="2025-07-06T23:53:17.700007253Z" level=info msg="Forcibly stopping sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\"" Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.783 [WARNING][5650] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0", GenerateName:"calico-apiserver-55f4bc6d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"70ac0620-95a8-4173-8948-facb2c4a4406", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f4bc6d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"525c9f1d7f1de4f29a42fd7a8ea650b48872922fb9229c6aba29009a4594b581", Pod:"calico-apiserver-55f4bc6d54-f2gb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07c5e8cac6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.784 [INFO][5650] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.784 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" iface="eth0" netns="" Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.784 [INFO][5650] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.784 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.824 [INFO][5659] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" HandleID="k8s-pod-network.63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0" Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.825 [INFO][5659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.825 [INFO][5659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.833 [WARNING][5659] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" HandleID="k8s-pod-network.63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0" Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.833 [INFO][5659] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" HandleID="k8s-pod-network.63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Workload="ci--4081.3.4--d--7537ff12ef-k8s-calico--apiserver--55f4bc6d54--f2gb7-eth0" Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.835 [INFO][5659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:53:17.842161 containerd[1462]: 2025-07-06 23:53:17.838 [INFO][5650] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765" Jul 6 23:53:17.843769 containerd[1462]: time="2025-07-06T23:53:17.842018570Z" level=info msg="TearDown network for sandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\" successfully" Jul 6 23:53:17.847028 containerd[1462]: time="2025-07-06T23:53:17.846971197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:53:17.847214 containerd[1462]: time="2025-07-06T23:53:17.847138380Z" level=info msg="RemovePodSandbox \"63ea30bdfc939fb30138fb77a94c2b6abfc2346b5e613ea8f8491b942c731765\" returns successfully" Jul 6 23:53:17.848274 containerd[1462]: time="2025-07-06T23:53:17.848239272Z" level=info msg="StopPodSandbox for \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\"" Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.917 [WARNING][5673] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc3e67e5-3ae0-4db6-94bf-18460361bfcb", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2", Pod:"coredns-674b8bbfcf-vfg28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e7fea581d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.917 [INFO][5673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.917 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" iface="eth0" netns="" Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.917 [INFO][5673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.917 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.957 [INFO][5680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" HandleID="k8s-pod-network.851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0" Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.958 [INFO][5680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.958 [INFO][5680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.972 [WARNING][5680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" HandleID="k8s-pod-network.851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0"
Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.972 [INFO][5680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" HandleID="k8s-pod-network.851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0"
Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.974 [INFO][5680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:53:17.980081 containerd[1462]: 2025-07-06 23:53:17.976 [INFO][5673] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43"
Jul 6 23:53:17.980081 containerd[1462]: time="2025-07-06T23:53:17.980023240Z" level=info msg="TearDown network for sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\" successfully"
Jul 6 23:53:17.981417 containerd[1462]: time="2025-07-06T23:53:17.981377635Z" level=info msg="StopPodSandbox for \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\" returns successfully"
Jul 6 23:53:17.982578 containerd[1462]: time="2025-07-06T23:53:17.982552517Z" level=info msg="RemovePodSandbox for \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\""
Jul 6 23:53:17.982677 containerd[1462]: time="2025-07-06T23:53:17.982585819Z" level=info msg="Forcibly stopping sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\""
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.037 [WARNING][5694] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bc3e67e5-3ae0-4db6-94bf-18460361bfcb", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 52, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-d-7537ff12ef", ContainerID:"9a5a1803f8291a2eadd64175290f4d6ecc6751bce29a8553358e615542dda2a2", Pod:"coredns-674b8bbfcf-vfg28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e7fea581d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.038 [INFO][5694] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43"
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.038 [INFO][5694] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" iface="eth0" netns=""
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.038 [INFO][5694] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43"
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.038 [INFO][5694] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43"
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.071 [INFO][5701] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" HandleID="k8s-pod-network.851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0"
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.071 [INFO][5701] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.071 [INFO][5701] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.080 [WARNING][5701] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" HandleID="k8s-pod-network.851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0"
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.080 [INFO][5701] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" HandleID="k8s-pod-network.851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43" Workload="ci--4081.3.4--d--7537ff12ef-k8s-coredns--674b8bbfcf--vfg28-eth0"
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.082 [INFO][5701] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:53:18.088124 containerd[1462]: 2025-07-06 23:53:18.084 [INFO][5694] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43"
Jul 6 23:53:18.088124 containerd[1462]: time="2025-07-06T23:53:18.087495156Z" level=info msg="TearDown network for sandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\" successfully"
Jul 6 23:53:18.091441 containerd[1462]: time="2025-07-06T23:53:18.090280491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:53:18.091441 containerd[1462]: time="2025-07-06T23:53:18.090360901Z" level=info msg="RemovePodSandbox \"851394640cef3c0a2007cb7180e17ab8d0daad324815844a5a04349ce126db43\" returns successfully"
Jul 6 23:53:18.699840 kubelet[2505]: I0706 23:53:18.690615 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fvz6x" podStartSLOduration=26.17182953 podStartE2EDuration="44.690589348s" podCreationTimestamp="2025-07-06 23:52:34 +0000 UTC" firstStartedPulling="2025-07-06 23:52:58.54200112 +0000 UTC m=+45.348200681" lastFinishedPulling="2025-07-06 23:53:17.060760925 +0000 UTC m=+63.866960499" observedRunningTime="2025-07-06 23:53:18.659682479 +0000 UTC m=+65.465882061" watchObservedRunningTime="2025-07-06 23:53:18.690589348 +0000 UTC m=+65.496788927"
Jul 6 23:53:18.854410 kubelet[2505]: I0706 23:53:18.851886 2505 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 6 23:53:18.854410 kubelet[2505]: I0706 23:53:18.854267 2505 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 6 23:53:20.723452 systemd[1]: Started sshd@9-134.199.239.131:22-139.178.89.65:60114.service - OpenSSH per-connection server daemon (139.178.89.65:60114).
Jul 6 23:53:20.873076 sshd[5713]: Accepted publickey for core from 139.178.89.65 port 60114 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:20.874736 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:20.880438 systemd-logind[1441]: New session 10 of user core.
Jul 6 23:53:20.882281 systemd[1]: Started session-10.scope - Session 10 of User core.
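The pod_startup_latency_tracker entry above carries enough data to reconstruct both figures kubelet reports: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. The minimal Go sketch below redoes that arithmetic from the timestamps in the entry; it is a reconstruction for illustration, not kubelet's code.

package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps in the "2025-07-06 23:52:34 +0000 UTC" form
// that appears in the kubelet entry above.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-06 23:52:34 +0000 UTC")                // podCreationTimestamp
	firstPull := mustParse("2025-07-06 23:52:58.54200112 +0000 UTC")     // firstStartedPulling
	lastPull := mustParse("2025-07-06 23:53:17.060760925 +0000 UTC")     // lastFinishedPulling
	running := mustParse("2025-07-06 23:53:18.690589348 +0000 UTC")      // watchObservedRunningTime

	e2e := running.Sub(created)          // wall clock from creation to observed running
	slo := e2e - lastPull.Sub(firstPull) // same interval, excluding image-pull time

	fmt.Println(e2e) // 44.690589348s == podStartE2EDuration
	fmt.Println(slo) // 26.17182953s  == podStartSLOduration
}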
Jul 6 23:53:21.515254 sshd[5713]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:21.528182 systemd[1]: sshd@9-134.199.239.131:22-139.178.89.65:60114.service: Deactivated successfully.
Jul 6 23:53:21.533922 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:53:21.537846 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:53:21.544494 systemd[1]: Started sshd@10-134.199.239.131:22-139.178.89.65:60126.service - OpenSSH per-connection server daemon (139.178.89.65:60126).
Jul 6 23:53:21.546531 systemd-logind[1441]: Removed session 10.
Jul 6 23:53:21.629387 sshd[5727]: Accepted publickey for core from 139.178.89.65 port 60126 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:21.631295 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:21.636170 systemd-logind[1441]: New session 11 of user core.
Jul 6 23:53:21.644257 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:53:21.908342 sshd[5727]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:21.919651 systemd[1]: sshd@10-134.199.239.131:22-139.178.89.65:60126.service: Deactivated successfully.
Jul 6 23:53:21.922950 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:53:21.926596 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:53:21.931967 systemd[1]: Started sshd@11-134.199.239.131:22-139.178.89.65:60132.service - OpenSSH per-connection server daemon (139.178.89.65:60132).
Jul 6 23:53:21.936548 systemd-logind[1441]: Removed session 11.
Jul 6 23:53:21.994235 sshd[5739]: Accepted publickey for core from 139.178.89.65 port 60132 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:21.994979 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:22.003459 systemd-logind[1441]: New session 12 of user core.
Jul 6 23:53:22.009362 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:53:22.150879 sshd[5739]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:22.154605 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:53:22.156673 systemd[1]: sshd@11-134.199.239.131:22-139.178.89.65:60132.service: Deactivated successfully.
Jul 6 23:53:22.159882 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:53:22.161944 systemd-logind[1441]: Removed session 12.
Jul 6 23:53:27.179584 systemd[1]: Started sshd@12-134.199.239.131:22-139.178.89.65:60142.service - OpenSSH per-connection server daemon (139.178.89.65:60142).
Jul 6 23:53:27.294279 sshd[5777]: Accepted publickey for core from 139.178.89.65 port 60142 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:27.297010 sshd[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:27.302389 systemd-logind[1441]: New session 13 of user core.
Jul 6 23:53:27.310380 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:53:27.644560 sshd[5777]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:27.647838 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:53:27.648877 systemd[1]: sshd@12-134.199.239.131:22-139.178.89.65:60142.service: Deactivated successfully.
Jul 6 23:53:27.651127 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:53:27.653819 systemd-logind[1441]: Removed session 13.
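The sshd and systemd-logind entries above repeat one fixed shape per connection: a per-connection sshd@N unit starts, the public key is accepted, pam_unix opens the session, logind assigns a numbered session, and the whole chain is torn down on logout. The short Go sketch below pairs each "New session N" line with its "Removed session N" line to time the sessions in a journal excerpt like this one; the regexes and timestamp layout are assumptions fitted to this transcript.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	// Patterns matched to the logind lines above, e.g.
	// "Jul 6 23:53:20.880438 systemd-logind[1441]: New session 10 of user core."
	newSess = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.`)
	delSess = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

// stamp parses the journal's "Jul 6 23:53:20.880438" prefix (year-less, so
// only good for differences within one capture).
func stamp(s string) time.Time {
	t, _ := time.Parse("Jan 2 15:04:05.000000", s)
	return t
}

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some containerd entries are very long
	for sc.Scan() {
		line := sc.Text()
		if m := newSess.FindStringSubmatch(line); m != nil {
			opened[m[2]] = stamp(m[1])
		} else if m := delSess.FindStringSubmatch(line); m != nil {
			if t0, ok := opened[m[2]]; ok {
				fmt.Printf("session %s: %s\n", m[2], stamp(m[1]).Sub(t0))
				delete(opened, m[2])
			}
		}
	}
}

Run against the entries above, session 10 pairs 23:53:20.880438 with 23:53:21.546531, a lifetime of roughly 0.67 s, which matches a CI health probe logging in and straight back out.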
Jul 6 23:53:28.360014 kubelet[2505]: E0706 23:53:28.358976 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 6 23:53:32.661321 systemd[1]: Started sshd@13-134.199.239.131:22-139.178.89.65:57628.service - OpenSSH per-connection server daemon (139.178.89.65:57628).
Jul 6 23:53:32.739903 sshd[5789]: Accepted publickey for core from 139.178.89.65 port 57628 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:32.742434 sshd[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:32.749576 systemd-logind[1441]: New session 14 of user core.
Jul 6 23:53:32.752259 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:53:32.919925 sshd[5789]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:32.925971 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:53:32.926139 systemd[1]: sshd@13-134.199.239.131:22-139.178.89.65:57628.service: Deactivated successfully.
Jul 6 23:53:32.928358 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:53:32.929810 systemd-logind[1441]: Removed session 14.
Jul 6 23:53:35.359477 kubelet[2505]: E0706 23:53:35.358978 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 6 23:53:36.358383 kubelet[2505]: E0706 23:53:36.358349 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 6 23:53:37.938637 systemd[1]: Started sshd@14-134.199.239.131:22-139.178.89.65:57632.service - OpenSSH per-connection server daemon (139.178.89.65:57632).
Jul 6 23:53:38.080167 sshd[5827]: Accepted publickey for core from 139.178.89.65 port 57632 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:38.083339 sshd[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:38.089670 systemd-logind[1441]: New session 15 of user core.
Jul 6 23:53:38.094289 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:53:38.879300 sshd[5827]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:38.884645 systemd[1]: sshd@14-134.199.239.131:22-139.178.89.65:57632.service: Deactivated successfully.
Jul 6 23:53:38.887685 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:53:38.889137 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:53:38.890188 systemd-logind[1441]: Removed session 15.
Jul 6 23:53:43.897410 systemd[1]: Started sshd@15-134.199.239.131:22-139.178.89.65:49302.service - OpenSSH per-connection server daemon (139.178.89.65:49302).
Jul 6 23:53:43.954758 sshd[5863]: Accepted publickey for core from 139.178.89.65 port 49302 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:43.957213 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:43.961972 systemd-logind[1441]: New session 16 of user core.
Jul 6 23:53:43.969353 systemd[1]: Started session-16.scope - Session 16 of User core.
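The dns.go:153 errors that keep recurring above (23:53:28, 23:53:35, 23:53:36, and again later) come from kubelet enforcing the glibc resolver limit of three nameserver entries per resolv.conf: when a pod's merged nameserver list exceeds three, kubelet drops the excess and logs the line it actually applied, which is why exactly three entries, including the duplicated 67.207.67.3, appear here. A sketch of that clamping follows; the pre-truncation list is an assumption, since the log only shows the applied result.

package main

import "fmt"

// maxNameservers mirrors the glibc MAXNS limit of 3 that kubelet honors.
const maxNameservers = 3

// clampNameservers returns the entries that would be applied and those
// omitted. A sketch of the behavior behind the dns.go:153 entries above,
// not kubelet's implementation.
func clampNameservers(all []string) (applied, omitted []string) {
	if len(all) <= maxNameservers {
		return all, nil
	}
	return all[:maxNameservers], all[maxNameservers:]
}

func main() {
	// Hypothetical merged list: the droplet's resolvers plus a cluster DNS
	// entry could push the total past the limit.
	merged := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.96.0.10"}
	applied, omitted := clampNameservers(merged)
	fmt.Println("applied:", applied) // first three entries; duplicates are kept
	fmt.Println("omitted:", omitted)
}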
Jul 6 23:53:44.249420 sshd[5863]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:44.259809 systemd[1]: sshd@15-134.199.239.131:22-139.178.89.65:49302.service: Deactivated successfully.
Jul 6 23:53:44.262673 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:53:44.263866 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:53:44.271567 systemd[1]: Started sshd@16-134.199.239.131:22-139.178.89.65:49304.service - OpenSSH per-connection server daemon (139.178.89.65:49304).
Jul 6 23:53:44.274125 systemd-logind[1441]: Removed session 16.
Jul 6 23:53:44.318107 sshd[5876]: Accepted publickey for core from 139.178.89.65 port 49304 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:44.320072 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:44.325943 systemd-logind[1441]: New session 17 of user core.
Jul 6 23:53:44.343746 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:53:44.648826 sshd[5876]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:44.662421 systemd[1]: sshd@16-134.199.239.131:22-139.178.89.65:49304.service: Deactivated successfully.
Jul 6 23:53:44.664737 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:53:44.666870 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:53:44.673501 systemd[1]: Started sshd@17-134.199.239.131:22-139.178.89.65:49310.service - OpenSSH per-connection server daemon (139.178.89.65:49310).
Jul 6 23:53:44.676200 systemd-logind[1441]: Removed session 17.
Jul 6 23:53:44.746332 sshd[5887]: Accepted publickey for core from 139.178.89.65 port 49310 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:44.748995 sshd[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:44.755341 systemd-logind[1441]: New session 18 of user core.
Jul 6 23:53:44.765349 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:53:45.603700 systemd[1]: run-containerd-runc-k8s.io-5ebf165eabf6eb7b270e38aa6f6d398bf8884027722bb7bcaef060307b029871-runc.6EnVcA.mount: Deactivated successfully.
Jul 6 23:53:45.860316 sshd[5887]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:45.872293 systemd[1]: sshd@17-134.199.239.131:22-139.178.89.65:49310.service: Deactivated successfully.
Jul 6 23:53:45.877017 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:53:45.881189 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:53:45.890472 systemd[1]: Started sshd@18-134.199.239.131:22-139.178.89.65:49320.service - OpenSSH per-connection server daemon (139.178.89.65:49320).
Jul 6 23:53:45.893805 systemd-logind[1441]: Removed session 18.
Jul 6 23:53:45.975033 sshd[5923]: Accepted publickey for core from 139.178.89.65 port 49320 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:45.976958 sshd[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:45.982966 systemd-logind[1441]: New session 19 of user core.
Jul 6 23:53:45.994373 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:53:46.721757 sshd[5923]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:46.733882 systemd[1]: sshd@18-134.199.239.131:22-139.178.89.65:49320.service: Deactivated successfully.
Jul 6 23:53:46.737869 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:53:46.742876 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:53:46.750132 systemd[1]: Started sshd@19-134.199.239.131:22-139.178.89.65:49336.service - OpenSSH per-connection server daemon (139.178.89.65:49336).
Jul 6 23:53:46.753856 systemd-logind[1441]: Removed session 19.
Jul 6 23:53:46.841020 sshd[5936]: Accepted publickey for core from 139.178.89.65 port 49336 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:46.842772 sshd[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:46.848350 systemd-logind[1441]: New session 20 of user core.
Jul 6 23:53:46.856355 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:53:47.022654 sshd[5936]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:47.028790 systemd[1]: sshd@19-134.199.239.131:22-139.178.89.65:49336.service: Deactivated successfully.
Jul 6 23:53:47.030691 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:53:47.033270 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:53:47.037133 systemd-logind[1441]: Removed session 20.
Jul 6 23:53:47.360662 kubelet[2505]: E0706 23:53:47.360186 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 6 23:53:52.048334 systemd[1]: Started sshd@20-134.199.239.131:22-139.178.89.65:43292.service - OpenSSH per-connection server daemon (139.178.89.65:43292).
Jul 6 23:53:52.155816 sshd[5954]: Accepted publickey for core from 139.178.89.65 port 43292 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:52.159111 sshd[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:52.166639 systemd-logind[1441]: New session 21 of user core.
Jul 6 23:53:52.169331 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:53:52.499565 sshd[5954]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:52.505039 systemd[1]: sshd@20-134.199.239.131:22-139.178.89.65:43292.service: Deactivated successfully.
Jul 6 23:53:52.507257 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:53:52.508289 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:53:52.510334 systemd-logind[1441]: Removed session 21.
Jul 6 23:53:57.519496 systemd[1]: Started sshd@21-134.199.239.131:22-139.178.89.65:43306.service - OpenSSH per-connection server daemon (139.178.89.65:43306).
Jul 6 23:53:57.669533 sshd[5989]: Accepted publickey for core from 139.178.89.65 port 43306 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:53:57.672459 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:57.678281 systemd-logind[1441]: New session 22 of user core.
Jul 6 23:53:57.683287 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:53:58.128135 sshd[5989]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:58.132595 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:53:58.132736 systemd[1]: sshd@21-134.199.239.131:22-139.178.89.65:43306.service: Deactivated successfully.
Jul 6 23:53:58.134801 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:53:58.136473 systemd-logind[1441]: Removed session 22.
Jul 6 23:54:02.360702 kubelet[2505]: E0706 23:54:02.360650 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 6 23:54:03.146255 systemd[1]: Started sshd@22-134.199.239.131:22-139.178.89.65:56860.service - OpenSSH per-connection server daemon (139.178.89.65:56860).
Jul 6 23:54:03.214459 sshd[6001]: Accepted publickey for core from 139.178.89.65 port 56860 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:54:03.216513 sshd[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:54:03.223367 systemd-logind[1441]: New session 23 of user core.
Jul 6 23:54:03.231340 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:54:03.915817 sshd[6001]: pam_unix(sshd:session): session closed for user core
Jul 6 23:54:03.921303 systemd[1]: sshd@22-134.199.239.131:22-139.178.89.65:56860.service: Deactivated successfully.
Jul 6 23:54:03.924564 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:54:03.929147 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:54:03.932047 systemd-logind[1441]: Removed session 23.
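For reference, the ipam/ipam_plugin.go sequence in the containerd entries at 23:53:17-23:53:18 above always runs in the same order: acquire the host-wide IPAM lock, try to release the allocation by handle ID, fall back to releasing by workload ID when the handle holds nothing ("Asked to release address but it doesn't exist. Ignoring"), then drop the lock and finish teardown. A compressed Go sketch of that control flow, using hypothetical in-memory types rather than Calico's datastore:

package main

import (
	"fmt"
	"sync"
)

// store stands in for Calico's IPAM datastore; types and names here are
// hypothetical, chosen only to mirror the logged behavior.
type store struct {
	mu         sync.Mutex          // plays the role of the host-wide IPAM lock
	byHandle   map[string][]string // allocations indexed by handle ID
	byWorkload map[string][]string // allocations indexed by workload ID
}

// releaseForTeardown mirrors the logged order of operations: take the lock,
// release by handleID, fall back to workloadID if the handle has nothing,
// then release the lock.
func (s *store) releaseForTeardown(handleID, workloadID string) []string {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if ips, ok := s.byHandle[handleID]; ok { // "Releasing address using handleID"
		delete(s.byHandle, handleID)
		return ips
	}
	// "Asked to release address but it doesn't exist. Ignoring" -> fall back:
	// "Releasing address using workloadID"
	ips := s.byWorkload[workloadID]
	delete(s.byWorkload, workloadID)
	return ips
}

func main() {
	s := &store{
		byHandle:   map[string][]string{},
		byWorkload: map[string][]string{"coredns-674b8bbfcf-vfg28": {"192.168.82.199/32"}},
	}
	fmt.Println(s.releaseForTeardown("k8s-pod-network.example", "coredns-674b8bbfcf-vfg28"))
}

The fallback is what makes the repeated "Forcibly stopping sandbox" passes above idempotent: the second teardown of the same sandbox finds nothing under the handle, logs the warning, and still completes cleanly.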