Jul 6 23:54:38.926445 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:54:38.926472 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:38.926485 kernel: BIOS-provided physical RAM map:
Jul 6 23:54:38.926492 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 6 23:54:38.926499 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 6 23:54:38.926509 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 6 23:54:38.926525 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jul 6 23:54:38.926536 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jul 6 23:54:38.926545 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:54:38.926558 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 6 23:54:38.926565 kernel: NX (Execute Disable) protection: active
Jul 6 23:54:38.926572 kernel: APIC: Static calls initialized
Jul 6 23:54:38.926582 kernel: SMBIOS 2.8 present.
Jul 6 23:54:38.926590 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jul 6 23:54:38.926598 kernel: Hypervisor detected: KVM
Jul 6 23:54:38.926609 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:54:38.926619 kernel: kvm-clock: using sched offset of 3246158959 cycles
Jul 6 23:54:38.926627 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:54:38.926635 kernel: tsc: Detected 2494.134 MHz processor
Jul 6 23:54:38.926643 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:54:38.926652 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:54:38.926660 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jul 6 23:54:38.926667 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 6 23:54:38.926675 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:54:38.928949 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:54:38.928958 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jul 6 23:54:38.928966 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:54:38.928974 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:54:38.928982 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:54:38.928991 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 6 23:54:38.928999 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:54:38.929006 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:54:38.929014 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:54:38.929025 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:54:38.929033 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jul 6 23:54:38.929041 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jul 6 23:54:38.929049 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 6 23:54:38.929057 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jul 6 23:54:38.929065 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jul 6 23:54:38.929073 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jul 6 23:54:38.929087 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jul 6 23:54:38.929095 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 6 23:54:38.929104 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 6 23:54:38.929112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 6 23:54:38.929120 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 6 23:54:38.929133 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jul 6 23:54:38.929142 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jul 6 23:54:38.929154 kernel: Zone ranges:
Jul 6 23:54:38.929162 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:54:38.929170 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jul 6 23:54:38.929178 kernel: Normal empty
Jul 6 23:54:38.929187 kernel: Movable zone start for each node
Jul 6 23:54:38.929195 kernel: Early memory node ranges
Jul 6 23:54:38.929203 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 6 23:54:38.929211 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jul 6 23:54:38.929220 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jul 6 23:54:38.929231 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:54:38.929239 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 6 23:54:38.929250 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jul 6 23:54:38.929258 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:54:38.929266 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:54:38.929275 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:54:38.929283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:54:38.929291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:54:38.929300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:54:38.929311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:54:38.929319 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:54:38.929327 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:54:38.929335 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:54:38.929343 kernel: TSC deadline timer available
Jul 6 23:54:38.929352 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:54:38.929360 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:54:38.929368 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 6 23:54:38.929379 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:54:38.929391 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:54:38.929399 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:54:38.929407 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:54:38.929416 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:54:38.929424 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:54:38.929432 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 6 23:54:38.929441 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:38.929450 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:54:38.929460 kernel: random: crng init done
Jul 6 23:54:38.929468 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:54:38.929477 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:54:38.929485 kernel: Fallback order for Node 0: 0
Jul 6 23:54:38.929493 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jul 6 23:54:38.929502 kernel: Policy zone: DMA32
Jul 6 23:54:38.929510 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:54:38.929519 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 125148K reserved, 0K cma-reserved)
Jul 6 23:54:38.929527 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:54:38.929538 kernel: Kernel/User page tables isolation: enabled
Jul 6 23:54:38.929546 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:54:38.929554 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:54:38.929562 kernel: Dynamic Preempt: voluntary
Jul 6 23:54:38.929571 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:54:38.929580 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:54:38.929588 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:54:38.929597 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:54:38.929605 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:54:38.929616 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:54:38.929625 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:54:38.929633 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:54:38.929641 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 6 23:54:38.929649 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:54:38.929660 kernel: Console: colour VGA+ 80x25
Jul 6 23:54:38.929668 kernel: printk: console [tty0] enabled
Jul 6 23:54:38.929684 kernel: printk: console [ttyS0] enabled
Jul 6 23:54:38.929694 kernel: ACPI: Core revision 20230628
Jul 6 23:54:38.929706 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:54:38.929714 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:54:38.929722 kernel: x2apic enabled
Jul 6 23:54:38.929730 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:54:38.929739 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:54:38.929747 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Jul 6 23:54:38.929756 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
Jul 6 23:54:38.929764 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 6 23:54:38.929773 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 6 23:54:38.929793 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:54:38.929802 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:54:38.929813 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:54:38.929824 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 6 23:54:38.929833 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:54:38.929842 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:54:38.929851 kernel: MDS: Mitigation: Clear CPU buffers
Jul 6 23:54:38.929860 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:54:38.929868 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 6 23:54:38.929883 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:54:38.929892 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:54:38.929901 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:54:38.929909 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:54:38.929918 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 6 23:54:38.929927 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:54:38.929936 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:54:38.929947 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:54:38.929956 kernel: landlock: Up and running.
Jul 6 23:54:38.929965 kernel: SELinux: Initializing.
Jul 6 23:54:38.929974 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:54:38.929982 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:54:38.929991 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jul 6 23:54:38.930000 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:38.930009 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:38.930018 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:38.930029 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jul 6 23:54:38.930038 kernel: signal: max sigframe size: 1776
Jul 6 23:54:38.930047 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:54:38.930056 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:54:38.930065 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 6 23:54:38.930073 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:54:38.930082 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:54:38.930091 kernel: .... node #0, CPUs: #1
Jul 6 23:54:38.930100 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:54:38.930113 kernel: smpboot: Max logical packages: 1
Jul 6 23:54:38.930124 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
Jul 6 23:54:38.930133 kernel: devtmpfs: initialized
Jul 6 23:54:38.930142 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:54:38.930150 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:54:38.930159 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:54:38.930168 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:54:38.930177 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:54:38.930186 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:54:38.930198 kernel: audit: type=2000 audit(1751846077.799:1): state=initialized audit_enabled=0 res=1
Jul 6 23:54:38.930206 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:54:38.930215 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:54:38.930224 kernel: cpuidle: using governor menu
Jul 6 23:54:38.930233 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:54:38.930241 kernel: dca service started, version 1.12.1
Jul 6 23:54:38.930250 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:54:38.930259 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:54:38.930267 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:54:38.930279 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:54:38.930288 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:54:38.930296 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:54:38.930305 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:54:38.930314 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:54:38.930322 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:54:38.930331 kernel: ACPI: Interpreter enabled
Jul 6 23:54:38.930340 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:54:38.930349 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:54:38.930358 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:54:38.930369 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:54:38.930378 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 6 23:54:38.932389 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:54:38.932629 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:54:38.932762 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 6 23:54:38.932863 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 6 23:54:38.932876 kernel: acpiphp: Slot [3] registered
Jul 6 23:54:38.932891 kernel: acpiphp: Slot [4] registered
Jul 6 23:54:38.932900 kernel: acpiphp: Slot [5] registered
Jul 6 23:54:38.932909 kernel: acpiphp: Slot [6] registered
Jul 6 23:54:38.932917 kernel: acpiphp: Slot [7] registered
Jul 6 23:54:38.932926 kernel: acpiphp: Slot [8] registered
Jul 6 23:54:38.932935 kernel: acpiphp: Slot [9] registered
Jul 6 23:54:38.932943 kernel: acpiphp: Slot [10] registered
Jul 6 23:54:38.932952 kernel: acpiphp: Slot [11] registered
Jul 6 23:54:38.932961 kernel: acpiphp: Slot [12] registered
Jul 6 23:54:38.932973 kernel: acpiphp: Slot [13] registered
Jul 6 23:54:38.932981 kernel: acpiphp: Slot [14] registered
Jul 6 23:54:38.932990 kernel: acpiphp: Slot [15] registered
Jul 6 23:54:38.932999 kernel: acpiphp: Slot [16] registered
Jul 6 23:54:38.933008 kernel: acpiphp: Slot [17] registered
Jul 6 23:54:38.933017 kernel: acpiphp: Slot [18] registered
Jul 6 23:54:38.933025 kernel: acpiphp: Slot [19] registered
Jul 6 23:54:38.933034 kernel: acpiphp: Slot [20] registered
Jul 6 23:54:38.933043 kernel: acpiphp: Slot [21] registered
Jul 6 23:54:38.933055 kernel: acpiphp: Slot [22] registered
Jul 6 23:54:38.933064 kernel: acpiphp: Slot [23] registered
Jul 6 23:54:38.933072 kernel: acpiphp: Slot [24] registered
Jul 6 23:54:38.933081 kernel: acpiphp: Slot [25] registered
Jul 6 23:54:38.933090 kernel: acpiphp: Slot [26] registered
Jul 6 23:54:38.933099 kernel: acpiphp: Slot [27] registered
Jul 6 23:54:38.933108 kernel: acpiphp: Slot [28] registered
Jul 6 23:54:38.933116 kernel: acpiphp: Slot [29] registered
Jul 6 23:54:38.933125 kernel: acpiphp: Slot [30] registered
Jul 6 23:54:38.933134 kernel: acpiphp: Slot [31] registered
Jul 6 23:54:38.933145 kernel: PCI host bridge to bus 0000:00
Jul 6 23:54:38.933257 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:54:38.933346 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:54:38.933432 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:54:38.933518 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 6 23:54:38.933603 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 6 23:54:38.933697 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:54:38.933824 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 6 23:54:38.933945 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 6 23:54:38.934053 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 6 23:54:38.934150 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jul 6 23:54:38.934296 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 6 23:54:38.934399 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 6 23:54:38.934512 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 6 23:54:38.934608 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 6 23:54:38.937617 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jul 6 23:54:38.937827 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jul 6 23:54:38.937986 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 6 23:54:38.938132 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 6 23:54:38.938283 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 6 23:54:38.938458 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 6 23:54:38.938566 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 6 23:54:38.938740 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 6 23:54:38.938863 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jul 6 23:54:38.938959 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jul 6 23:54:38.939054 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:54:38.939228 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:54:38.939374 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jul 6 23:54:38.939474 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jul 6 23:54:38.939597 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 6 23:54:38.942814 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:54:38.942941 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jul 6 23:54:38.943042 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jul 6 23:54:38.943147 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 6 23:54:38.943265 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jul 6 23:54:38.943378 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jul 6 23:54:38.943537 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jul 6 23:54:38.943708 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 6 23:54:38.943862 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:54:38.943961 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jul 6 23:54:38.944080 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jul 6 23:54:38.944226 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 6 23:54:38.944364 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:54:38.944492 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jul 6 23:54:38.944589 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jul 6 23:54:38.946728 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jul 6 23:54:38.946940 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jul 6 23:54:38.947104 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jul 6 23:54:38.947263 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jul 6 23:54:38.947286 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:54:38.947304 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:54:38.947320 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:54:38.947336 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:54:38.947347 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 6 23:54:38.947360 kernel: iommu: Default domain type: Translated
Jul 6 23:54:38.947370 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:54:38.947379 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:54:38.947388 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:54:38.947397 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 6 23:54:38.947406 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jul 6 23:54:38.947509 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 6 23:54:38.947635 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 6 23:54:38.947752 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:54:38.947770 kernel: vgaarb: loaded
Jul 6 23:54:38.947779 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:54:38.947789 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:54:38.947798 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:54:38.947807 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:54:38.947816 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:54:38.947825 kernel: pnp: PnP ACPI init
Jul 6 23:54:38.947834 kernel: pnp: PnP ACPI: found 4 devices
Jul 6 23:54:38.947843 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:54:38.947855 kernel: NET: Registered PF_INET protocol family
Jul 6 23:54:38.947864 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:54:38.947873 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 6 23:54:38.947882 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:54:38.947892 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:54:38.947901 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 6 23:54:38.947910 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 6 23:54:38.947919 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:54:38.947931 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:54:38.947940 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:54:38.947949 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:54:38.948045 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:54:38.948134 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:54:38.948219 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:54:38.948324 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 6 23:54:38.948461 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 6 23:54:38.948607 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 6 23:54:38.950863 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 6 23:54:38.950891 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 6 23:54:38.950998 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 49295 usecs
Jul 6 23:54:38.951011 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:54:38.951021 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 6 23:54:38.951031 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Jul 6 23:54:38.951040 kernel: Initialise system trusted keyrings
Jul 6 23:54:38.951050 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 6 23:54:38.951066 kernel: Key type asymmetric registered
Jul 6 23:54:38.951075 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:54:38.951084 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:54:38.951093 kernel: io scheduler mq-deadline registered
Jul 6 23:54:38.951102 kernel: io scheduler kyber registered
Jul 6 23:54:38.951111 kernel: io scheduler bfq registered
Jul 6 23:54:38.951120 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:54:38.951130 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 6 23:54:38.951139 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 6 23:54:38.951151 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 6 23:54:38.951160 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:54:38.951169 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:54:38.951179 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:54:38.951187 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:54:38.951196 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:54:38.951205 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:54:38.951325 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 6 23:54:38.951421 kernel: rtc_cmos 00:03: registered as rtc0
Jul 6 23:54:38.951509 kernel: rtc_cmos 00:03: setting system clock to 2025-07-06T23:54:38 UTC (1751846078)
Jul 6 23:54:38.951625 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jul 6 23:54:38.951637 kernel: intel_pstate: CPU model not supported
Jul 6 23:54:38.951646 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:54:38.951656 kernel: Segment Routing with IPv6
Jul 6 23:54:38.951665 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:54:38.951674 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:54:38.951693 kernel: Key type dns_resolver registered
Jul 6 23:54:38.951706 kernel: IPI shorthand broadcast: enabled
Jul 6 23:54:38.951715 kernel: sched_clock: Marking stable (1165004092, 120199585)->(1383316466, -98112789)
Jul 6 23:54:38.951728 kernel: registered taskstats version 1
Jul 6 23:54:38.951741 kernel: Loading compiled-in X.509 certificates
Jul 6 23:54:38.951755 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:54:38.951770 kernel: Key type .fscrypt registered
Jul 6 23:54:38.951779 kernel: Key type fscrypt-provisioning registered
Jul 6 23:54:38.951788 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:54:38.951797 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:54:38.951810 kernel: ima: No architecture policies found
Jul 6 23:54:38.951819 kernel: clk: Disabling unused clocks
Jul 6 23:54:38.951828 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:54:38.951837 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:54:38.951846 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:54:38.951877 kernel: Run /init as init process
Jul 6 23:54:38.951889 kernel: with arguments:
Jul 6 23:54:38.951899 kernel: /init
Jul 6 23:54:38.951908 kernel: with environment:
Jul 6 23:54:38.951920 kernel: HOME=/
Jul 6 23:54:38.951929 kernel: TERM=linux
Jul 6 23:54:38.951938 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:54:38.951950 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:54:38.951963 systemd[1]: Detected virtualization kvm.
Jul 6 23:54:38.951973 systemd[1]: Detected architecture x86-64.
Jul 6 23:54:38.951983 systemd[1]: Running in initrd.
Jul 6 23:54:38.951996 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:54:38.952014 systemd[1]: Hostname set to .
Jul 6 23:54:38.952029 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:54:38.952043 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:54:38.952058 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:54:38.952074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:54:38.952089 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:54:38.952105 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:54:38.952125 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:54:38.952140 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:54:38.952158 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:54:38.952169 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:54:38.952180 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:54:38.952190 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:54:38.952200 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:54:38.952213 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:54:38.952226 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:54:38.952244 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:54:38.952263 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:54:38.952279 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:54:38.952294 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:54:38.952309 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:54:38.952320 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:54:38.952330 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:54:38.952340 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:54:38.952350 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:54:38.952360 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:54:38.952370 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:54:38.952380 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:54:38.952396 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:54:38.952406 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:54:38.952415 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:54:38.952426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:38.952436 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:54:38.952486 systemd-journald[183]: Collecting audit messages is disabled.
Jul 6 23:54:38.952523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:54:38.952537 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:54:38.952553 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:54:38.952573 systemd-journald[183]: Journal started
Jul 6 23:54:38.952595 systemd-journald[183]: Runtime Journal (/run/log/journal/b56a2c9bbaf74dbbb45e4b77c79cce96) is 4.9M, max 39.3M, 34.4M free.
Jul 6 23:54:38.934730 systemd-modules-load[184]: Inserted module 'overlay'
Jul 6 23:54:38.960897 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:54:38.968738 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:54:39.005422 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:54:39.005460 kernel: Bridge firewalling registered
Jul 6 23:54:38.972817 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jul 6 23:54:39.006085 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:54:39.009747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:39.016906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:39.019867 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:54:39.021043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:54:39.024599 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:54:39.039554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:54:39.046239 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:54:39.056664 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:54:39.058005 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:39.062883 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:54:39.067875 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:54:39.076909 dracut-cmdline[216]: dracut-dracut-053
Jul 6 23:54:39.081558 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:39.116946 systemd-resolved[218]: Positive Trust Anchors:
Jul 6 23:54:39.117548 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:54:39.117590 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:54:39.123454 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jul 6 23:54:39.125263 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:54:39.126241 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:54:39.172764 kernel: SCSI subsystem initialized
Jul 6 23:54:39.182715 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:54:39.193710 kernel: iscsi: registered transport (tcp)
Jul 6 23:54:39.215897 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:54:39.215977 kernel: QLogic iSCSI HBA Driver
Jul 6 23:54:39.268672 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:54:39.274959 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:54:39.301730 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:54:39.301806 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:54:39.303407 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:54:39.350731 kernel: raid6: avx2x4 gen() 16886 MB/s
Jul 6 23:54:39.365752 kernel: raid6: avx2x2 gen() 15076 MB/s
Jul 6 23:54:39.382790 kernel: raid6: avx2x1 gen() 11564 MB/s
Jul 6 23:54:39.382863 kernel: raid6: using algorithm avx2x4 gen() 16886 MB/s
Jul 6 23:54:39.400833 kernel: raid6: .... xor() 6762 MB/s, rmw enabled
Jul 6 23:54:39.400918 kernel: raid6: using avx2x2 recovery algorithm
Jul 6 23:54:39.433725 kernel: xor: automatically using best checksumming function avx
Jul 6 23:54:39.664716 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:54:39.677883 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:54:39.683926 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:54:39.702559 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jul 6 23:54:39.707953 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:54:39.716170 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:54:39.733274 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jul 6 23:54:39.770104 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:54:39.782046 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:54:39.846458 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:54:39.858047 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:54:39.880354 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:54:39.881833 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:54:39.883080 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:54:39.885257 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:54:39.897322 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:54:39.921328 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:54:39.945255 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jul 6 23:54:39.945495 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:54:39.952725 kernel: scsi host0: Virtio SCSI HBA
Jul 6 23:54:39.960041 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:54:39.960098 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:54:39.965718 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 6 23:54:39.991155 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:54:39.991319 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:39.994943 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:39.995284 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:54:39.995437 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:39.995871 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:40.006705 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:54:40.006761 kernel: GPT:9289727 != 125829119
Jul 6 23:54:40.006775 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:54:40.006795 kernel: GPT:9289727 != 125829119
Jul 6 23:54:40.006807 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:54:40.006819 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:54:40.013043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:40.014110 kernel: ACPI: bus type USB registered
Jul 6 23:54:40.015818 kernel: usbcore: registered new interface driver usbfs
Jul 6 23:54:40.025763 kernel: usbcore: registered new interface driver hub
Jul 6 23:54:40.040711 kernel: usbcore: registered new device driver usb
Jul 6 23:54:40.042247 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jul 6 23:54:40.042483 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jul 6 23:54:40.048704 kernel: libata version 3.00 loaded.
Jul 6 23:54:40.054907 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 6 23:54:40.062716 kernel: scsi host1: ata_piix
Jul 6 23:54:40.064699 kernel: scsi host2: ata_piix
Jul 6 23:54:40.064933 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jul 6 23:54:40.064957 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jul 6 23:54:40.083129 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jul 6 23:54:40.083367 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jul 6 23:54:40.083513 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jul 6 23:54:40.083641 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jul 6 23:54:40.083775 kernel: hub 1-0:1.0: USB hub found
Jul 6 23:54:40.083946 kernel: hub 1-0:1.0: 2 ports detected
Jul 6 23:54:40.094565 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:40.105587 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:40.114707 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (445)
Jul 6 23:54:40.121052 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:54:40.126861 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (457)
Jul 6 23:54:40.127005 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:54:40.142520 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:54:40.143067 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:54:40.143837 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:40.156455 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:54:40.161961 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:54:40.164488 disk-uuid[548]: Primary Header is updated.
Jul 6 23:54:40.164488 disk-uuid[548]: Secondary Entries is updated.
Jul 6 23:54:40.164488 disk-uuid[548]: Secondary Header is updated.
Jul 6 23:54:40.168753 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:54:40.177722 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:54:41.177751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:54:41.177994 disk-uuid[549]: The operation has completed successfully.
Jul 6 23:54:41.231165 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:54:41.231306 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:54:41.254022 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:54:41.258328 sh[560]: Success
Jul 6 23:54:41.273715 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 6 23:54:41.342935 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:54:41.356831 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:54:41.358494 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:54:41.388732 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:54:41.391552 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:54:41.391627 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:54:41.391642 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:54:41.392914 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:54:41.402308 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:54:41.403665 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:54:41.416012 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:54:41.419933 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:54:41.437025 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:41.437127 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:54:41.437142 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:54:41.442091 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:54:41.452985 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:54:41.453720 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:41.459807 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:54:41.467956 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:54:41.599007 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:54:41.607008 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:54:41.622881 ignition[646]: Ignition 2.19.0
Jul 6 23:54:41.623601 ignition[646]: Stage: fetch-offline
Jul 6 23:54:41.623663 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:41.623673 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:54:41.623803 ignition[646]: parsed url from cmdline: ""
Jul 6 23:54:41.623811 ignition[646]: no config URL provided
Jul 6 23:54:41.623819 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:54:41.626064 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:54:41.623828 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:54:41.623835 ignition[646]: failed to fetch config: resource requires networking
Jul 6 23:54:41.624137 ignition[646]: Ignition finished successfully
Jul 6 23:54:41.639575 systemd-networkd[748]: lo: Link UP
Jul 6 23:54:41.639586 systemd-networkd[748]: lo: Gained carrier
Jul 6 23:54:41.641961 systemd-networkd[748]: Enumeration completed
Jul 6 23:54:41.642402 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 6 23:54:41.642410 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jul 6 23:54:41.643564 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:54:41.643569 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:54:41.643791 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:54:41.644299 systemd[1]: Reached target network.target - Network.
Jul 6 23:54:41.646796 systemd-networkd[748]: eth0: Link UP
Jul 6 23:54:41.646801 systemd-networkd[748]: eth0: Gained carrier
Jul 6 23:54:41.646811 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 6 23:54:41.652003 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:54:41.653063 systemd-networkd[748]: eth1: Link UP
Jul 6 23:54:41.653069 systemd-networkd[748]: eth1: Gained carrier
Jul 6 23:54:41.653087 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:54:41.665761 systemd-networkd[748]: eth0: DHCPv4 address 146.190.157.121/20, gateway 146.190.144.1 acquired from 169.254.169.253
Jul 6 23:54:41.669845 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.28/20 acquired from 169.254.169.253
Jul 6 23:54:41.677012 ignition[752]: Ignition 2.19.0
Jul 6 23:54:41.677023 ignition[752]: Stage: fetch
Jul 6 23:54:41.677212 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:41.677224 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:54:41.677342 ignition[752]: parsed url from cmdline: ""
Jul 6 23:54:41.677346 ignition[752]: no config URL provided
Jul 6 23:54:41.677354 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:54:41.677363 ignition[752]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:54:41.677383 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jul 6 23:54:41.704672 ignition[752]: GET result: OK
Jul 6 23:54:41.705424 ignition[752]: parsing config with SHA512: cdf2cab4c182be971b2f5509bfbfc158e600a665f8e16ea89a44bf3c17217c1143a19337629da2d45e935529aebcb92d91cb762d93f549cad6d187646c927783
Jul 6 23:54:41.710126 unknown[752]: fetched base config from "system"
Jul 6 23:54:41.710137 unknown[752]: fetched base config from "system"
Jul 6 23:54:41.710700 ignition[752]: fetch: fetch complete
Jul 6 23:54:41.710151 unknown[752]: fetched user config from "digitalocean"
Jul 6 23:54:41.710706 ignition[752]: fetch: fetch passed
Jul 6 23:54:41.712375 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:54:41.710760 ignition[752]: Ignition finished successfully
Jul 6 23:54:41.718964 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:54:41.745067 ignition[759]: Ignition 2.19.0
Jul 6 23:54:41.745078 ignition[759]: Stage: kargs
Jul 6 23:54:41.745280 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:41.745292 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:54:41.748788 ignition[759]: kargs: kargs passed
Jul 6 23:54:41.748908 ignition[759]: Ignition finished successfully
Jul 6 23:54:41.750534 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:54:41.758018 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:54:41.796699 ignition[765]: Ignition 2.19.0
Jul 6 23:54:41.796712 ignition[765]: Stage: disks
Jul 6 23:54:41.797028 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:41.797044 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:54:41.800914 ignition[765]: disks: disks passed
Jul 6 23:54:41.801022 ignition[765]: Ignition finished successfully
Jul 6 23:54:41.802580 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:54:41.806022 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:54:41.806480 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:54:41.807167 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:54:41.808029 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:54:41.808752 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:54:41.813946 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:54:41.832098 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:54:41.836386 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:54:41.841874 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:54:41.953007 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:54:41.954144 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:54:41.955368 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:54:41.960851 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:54:41.962807 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:54:41.967475 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jul 6 23:54:41.979024 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (782)
Jul 6 23:54:41.978109 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:54:41.978569 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:54:41.978616 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:54:41.985998 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:41.986066 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:54:41.986084 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:54:41.990051 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:54:41.997290 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:54:42.000759 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:54:42.005544 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:54:42.063298 coreos-metadata[784]: Jul 06 23:54:42.063 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 6 23:54:42.073080 coreos-metadata[785]: Jul 06 23:54:42.072 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 6 23:54:42.075969 coreos-metadata[784]: Jul 06 23:54:42.075 INFO Fetch successful
Jul 6 23:54:42.086789 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jul 6 23:54:42.086904 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jul 6 23:54:42.091792 coreos-metadata[785]: Jul 06 23:54:42.087 INFO Fetch successful
Jul 6 23:54:42.094834 coreos-metadata[785]: Jul 06 23:54:42.094 INFO wrote hostname ci-4081.3.4-9-29085cf50e to /sysroot/etc/hostname
Jul 6 23:54:42.096565 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:54:42.097056 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:54:42.104954 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:54:42.112150 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:54:42.118052 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:54:42.236568 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:54:42.243894 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:54:42.246911 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:54:42.259725 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:42.288519 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:54:42.309719 ignition[904]: INFO : Ignition 2.19.0
Jul 6 23:54:42.309719 ignition[904]: INFO : Stage: mount
Jul 6 23:54:42.309719 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:42.309719 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:54:42.313737 ignition[904]: INFO : mount: mount passed
Jul 6 23:54:42.313737 ignition[904]: INFO : Ignition finished successfully
Jul 6 23:54:42.314290 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:54:42.320912 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:54:42.388890 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:54:42.396147 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:54:42.416721 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (915) Jul 6 23:54:42.416781 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:54:42.418811 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:54:42.418894 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:54:42.422866 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:54:42.424613 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:54:42.470216 ignition[932]: INFO : Ignition 2.19.0 Jul 6 23:54:42.472850 ignition[932]: INFO : Stage: files Jul 6 23:54:42.472850 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:54:42.472850 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:54:42.475534 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:54:42.476417 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:54:42.476417 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:54:42.479961 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:54:42.480983 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:54:42.480983 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:54:42.480610 unknown[932]: wrote ssh authorized keys file for user: core Jul 6 23:54:42.483696 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 6 23:54:42.483696 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 6 23:54:42.528028 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:54:42.928905 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 6 23:54:42.930022 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:54:42.930022 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:54:42.930022 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:54:42.930022 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:54:42.930022 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:54:42.930022 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:54:42.930022 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:54:42.930022 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:54:42.936647 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] 
writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:54:42.936647 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:54:42.936647 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:54:42.936647 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:54:42.936647 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:54:42.936647 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 6 23:54:43.001066 systemd-networkd[748]: eth0: Gained IPv6LL Jul 6 23:54:43.001568 systemd-networkd[748]: eth1: Gained IPv6LL Jul 6 23:54:43.759315 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:54:44.055426 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:54:44.055426 ignition[932]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:54:44.059004 ignition[932]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:54:44.059004 ignition[932]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:54:44.059004 ignition[932]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:54:44.059004 ignition[932]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:54:44.059004 ignition[932]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:54:44.059004 ignition[932]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:54:44.059004 ignition[932]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:54:44.059004 ignition[932]: INFO : files: files passed Jul 6 23:54:44.059004 ignition[932]: INFO : Ignition finished successfully Jul 6 23:54:44.060140 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:54:44.070013 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:54:44.085131 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:54:44.089328 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:54:44.089502 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
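Note: the files stage above fetches remote payloads (the helm tarball, the kubernetes sysext image) with numbered attempts ("GET ...: attempt #1") and writes them under /sysroot. A rough sketch of a retrying fetch-to-file helper in that spirit; the function name, attempt limit, and backoff policy are illustrative, not Ignition's actual implementation:

    import time
    import urllib.request

    def fetch_with_retries(url: str, dest: str, attempts: int = 5) -> None:
        """Download url to dest, logging numbered attempts like the journal shows."""
        for attempt in range(1, attempts + 1):
            print(f"GET {url}: attempt #{attempt}")
            try:
                urllib.request.urlretrieve(url, dest)
                print("GET result: OK")
                return
            except OSError:
                # Simple linear backoff; Ignition's real retry policy may differ.
                time.sleep(attempt)
        raise RuntimeError(f"giving up on {url}")

    # e.g. fetch_with_retries(
    #     "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
    #     "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz")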
Jul 6 23:54:44.101301 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:54:44.101301 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:54:44.103926 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:54:44.106400 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:54:44.107284 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:54:44.112951 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:54:44.161537 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:54:44.161742 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:54:44.163577 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:54:44.164203 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:54:44.165204 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:54:44.169982 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:54:44.199828 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:54:44.206973 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:54:44.232228 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:54:44.232816 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:54:44.233256 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:54:44.233607 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:54:44.234926 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:54:44.236413 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:54:44.237119 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:54:44.237991 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:54:44.238813 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:54:44.239884 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:54:44.240673 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:54:44.241586 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:54:44.242521 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:54:44.243464 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:54:44.244226 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:54:44.244888 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:54:44.245082 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:54:44.246093 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:54:44.247246 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:54:44.248367 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:54:44.249272 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 6 23:54:44.249976 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:54:44.250252 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:54:44.251505 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:54:44.251734 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:54:44.252573 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:54:44.252775 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:54:44.253496 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:54:44.253658 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:54:44.263262 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:54:44.267961 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:54:44.268330 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:54:44.268486 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:54:44.268979 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:54:44.269107 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:54:44.279707 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:54:44.279826 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:54:44.297581 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:54:44.300624 ignition[985]: INFO : Ignition 2.19.0 Jul 6 23:54:44.300624 ignition[985]: INFO : Stage: umount Jul 6 23:54:44.300624 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:54:44.300624 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:54:44.300624 ignition[985]: INFO : umount: umount passed Jul 6 23:54:44.300624 ignition[985]: INFO : Ignition finished successfully Jul 6 23:54:44.302584 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:54:44.303984 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:54:44.305314 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:54:44.305480 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:54:44.307921 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:54:44.308006 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:54:44.312046 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:54:44.312132 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:54:44.312740 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:54:44.312808 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:54:44.313485 systemd[1]: Stopped target network.target - Network. Jul 6 23:54:44.314221 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:54:44.314294 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:54:44.315167 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:54:44.315937 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:54:44.316060 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:54:44.316722 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 6 23:54:44.317506 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:54:44.318397 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:54:44.318465 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:54:44.319225 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:54:44.319280 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:54:44.320082 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:54:44.320161 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:54:44.321001 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:54:44.321064 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:54:44.321742 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:54:44.321826 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:54:44.322651 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:54:44.323480 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:54:44.328766 systemd-networkd[748]: eth1: DHCPv6 lease lost Jul 6 23:54:44.329366 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:54:44.329966 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:54:44.331889 systemd-networkd[748]: eth0: DHCPv6 lease lost Jul 6 23:54:44.333670 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:54:44.334164 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:54:44.335982 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:54:44.336162 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:54:44.349915 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:54:44.350402 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:54:44.350481 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:54:44.353337 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:54:44.353423 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:54:44.354323 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:54:44.354388 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:54:44.355310 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:54:44.355422 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:54:44.356312 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:54:44.370420 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:54:44.370599 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:54:44.372925 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:54:44.373146 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:54:44.374575 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:54:44.374708 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:54:44.375153 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:54:44.375211 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 6 23:54:44.376190 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:54:44.376240 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:54:44.377585 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:54:44.377633 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:54:44.378465 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:54:44.378511 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:54:44.386990 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:54:44.389077 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:54:44.389185 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:54:44.389804 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:54:44.389889 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:54:44.390465 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:54:44.390533 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:54:44.393109 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:54:44.393184 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:54:44.395515 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:54:44.395660 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:54:44.397764 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:54:44.407018 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:54:44.417308 systemd[1]: Switching root. Jul 6 23:54:44.449467 systemd-journald[183]: Journal stopped Jul 6 23:54:45.734080 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jul 6 23:54:45.734215 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:54:45.734258 kernel: SELinux: policy capability open_perms=1 Jul 6 23:54:45.734280 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:54:45.734309 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:54:45.734329 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:54:45.734350 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:54:45.734369 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:54:45.734397 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:54:45.734417 kernel: audit: type=1403 audit(1751846084.663:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:54:45.734444 systemd[1]: Successfully loaded SELinux policy in 46.956ms. Jul 6 23:54:45.734478 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.363ms. Jul 6 23:54:45.734511 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:54:45.734535 systemd[1]: Detected virtualization kvm. Jul 6 23:54:45.734559 systemd[1]: Detected architecture x86-64. Jul 6 23:54:45.734581 systemd[1]: Detected first boot. 
Jul 6 23:54:45.734611 systemd[1]: Hostname set to <ci-4081.3.4-9-29085cf50e>. Jul 6 23:54:45.734631 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:54:45.734653 zram_generator::config[1027]: No configuration found. Jul 6 23:54:45.738859 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:54:45.738920 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:54:45.738945 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:54:45.738970 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:54:45.738992 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:54:45.739016 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:54:45.739038 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:54:45.739062 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:54:45.739096 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:54:45.739119 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:54:45.739142 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:54:45.739164 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:54:45.739187 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:54:45.739211 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:54:45.739235 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:54:45.739257 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:54:45.739284 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:54:45.739327 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:54:45.739350 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:54:45.739373 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:54:45.739394 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:54:45.739417 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:54:45.739444 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:54:45.739467 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:54:45.739491 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:54:45.739513 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:54:45.739536 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:54:45.739558 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:54:45.739580 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:54:45.739604 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:54:45.739628 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:54:45.739650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:54:45.751037 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:54:45.751110 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:54:45.751132 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:54:45.751163 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:54:45.751183 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:54:45.751203 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:54:45.751222 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:54:45.751239 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:54:45.751257 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:54:45.751281 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:54:45.751318 systemd[1]: Reached target machines.target - Containers. Jul 6 23:54:45.751337 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:54:45.751355 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:54:45.751374 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:54:45.751393 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:54:45.751413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:54:45.751432 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:54:45.751457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:54:45.751477 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:54:45.751497 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:54:45.751517 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:54:45.751537 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:54:45.751556 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:54:45.751585 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:54:45.751605 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:54:45.751629 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:54:45.751649 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:54:45.751669 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:54:45.752526 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:54:45.752561 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:54:45.752580 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:54:45.752600 systemd[1]: Stopped verity-setup.service. Jul 6 23:54:45.752622 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:54:45.752642 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jul 6 23:54:45.752674 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:54:45.756442 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:54:45.756470 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:54:45.756494 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:54:45.756564 systemd-journald[1101]: Collecting audit messages is disabled. Jul 6 23:54:45.756616 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:54:45.756635 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:54:45.756662 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:54:45.756701 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:54:45.756727 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:54:45.756747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:54:45.756768 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:54:45.756787 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:54:45.756806 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:54:45.756824 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:54:45.756846 systemd-journald[1101]: Journal started Jul 6 23:54:45.756891 systemd-journald[1101]: Runtime Journal (/run/log/journal/b56a2c9bbaf74dbbb45e4b77c79cce96) is 4.9M, max 39.3M, 34.4M free. Jul 6 23:54:45.380601 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:54:45.403096 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 6 23:54:45.769264 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:54:45.403840 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:54:45.761780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:54:45.764308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:54:45.765394 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:54:45.768293 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:54:45.779040 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:54:45.782757 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:54:45.804665 kernel: fuse: init (API version 7.39) Jul 6 23:54:45.803809 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:54:45.804122 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:54:45.834826 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:54:45.835520 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:54:45.835568 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:54:45.839426 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:54:45.852518 kernel: loop: module loaded Jul 6 23:54:45.851963 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jul 6 23:54:45.866357 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:54:45.869254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:54:45.874066 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:54:45.877501 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:54:45.878839 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:54:45.883981 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:54:45.890997 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:54:45.895216 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:54:45.897339 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:54:45.897591 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:54:45.898391 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:54:45.932848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:54:45.951001 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:54:45.959930 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:54:45.988758 systemd-journald[1101]: Time spent on flushing to /var/log/journal/b56a2c9bbaf74dbbb45e4b77c79cce96 is 87.899ms for 985 entries. Jul 6 23:54:45.988758 systemd-journald[1101]: System Journal (/var/log/journal/b56a2c9bbaf74dbbb45e4b77c79cce96) is 8.0M, max 195.6M, 187.6M free. Jul 6 23:54:46.132031 systemd-journald[1101]: Received client request to flush runtime journal. Jul 6 23:54:46.132113 kernel: ACPI: bus type drm_connector registered Jul 6 23:54:46.132144 kernel: loop0: detected capacity change from 0 to 142488 Jul 6 23:54:46.132201 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:54:46.008060 systemd-tmpfiles[1117]: ACLs are not supported, ignoring. Jul 6 23:54:46.142834 kernel: loop1: detected capacity change from 0 to 8 Jul 6 23:54:46.008076 systemd-tmpfiles[1117]: ACLs are not supported, ignoring. Jul 6 23:54:46.019250 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:54:46.019492 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:54:46.025066 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:54:46.028675 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:54:46.040871 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:54:46.064533 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:54:46.078999 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:54:46.082739 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:54:46.092947 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:54:46.094225 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
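Note: the journald flush statistics above (87.899 ms for 985 entries into the 8.0M system journal) work out to well under a tenth of a millisecond per entry; a one-liner to confirm:

    # From the systemd-journald message above: 87.899 ms flushing 985 entries.
    ms_total, entries = 87.899, 985
    print(f"{ms_total / entries:.3f} ms per entry")  # ~0.089 ms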
Jul 6 23:54:46.112011 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:54:46.134273 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:54:46.165976 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 6 23:54:46.187745 kernel: loop2: detected capacity change from 0 to 140768 Jul 6 23:54:46.234807 kernel: loop3: detected capacity change from 0 to 221472 Jul 6 23:54:46.252861 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:54:46.258425 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:54:46.275782 kernel: loop4: detected capacity change from 0 to 142488 Jul 6 23:54:46.304705 kernel: loop5: detected capacity change from 0 to 8 Jul 6 23:54:46.307704 kernel: loop6: detected capacity change from 0 to 140768 Jul 6 23:54:46.324622 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jul 6 23:54:46.324652 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jul 6 23:54:46.336564 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:54:46.340872 kernel: loop7: detected capacity change from 0 to 221472 Jul 6 23:54:46.364694 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jul 6 23:54:46.365322 (sd-merge)[1175]: Merged extensions into '/usr'. Jul 6 23:54:46.372127 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:54:46.372153 systemd[1]: Reloading... Jul 6 23:54:46.544723 zram_generator::config[1203]: No configuration found. Jul 6 23:54:46.697642 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:54:46.850462 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:54:46.923174 systemd[1]: Reloading finished in 549 ms. Jul 6 23:54:46.960442 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:54:46.961421 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:54:46.973027 systemd[1]: Starting ensure-sysext.service... Jul 6 23:54:46.981862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:54:47.004082 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:54:47.004111 systemd[1]: Reloading... Jul 6 23:54:47.024288 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:54:47.024878 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:54:47.026833 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:54:47.027482 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jul 6 23:54:47.027746 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jul 6 23:54:47.033609 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. 
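Note: the (sd-merge) lines above show systemd-sysext discovering the four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean') and overlaying them onto /usr, which is what triggers the reload. The same merge can be inspected or redone by hand through the systemd-sysext verbs; a hedged subprocess sketch (status/refresh are real verbs in systemd 255, but calling them like this is an illustration, not what sd-merge itself executes):

    import subprocess

    def sysext_status() -> str:
        # Lists merged hierarchies and the extension images backing them.
        return subprocess.run(["systemd-sysext", "status"],
                              capture_output=True, text=True, check=True).stdout

    def sysext_refresh() -> None:
        # Re-merge extension images (e.g. from /etc/extensions) into /usr and /opt.
        subprocess.run(["systemd-sysext", "refresh"], check=True)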
Jul 6 23:54:47.034006 systemd-tmpfiles[1247]: Skipping /boot Jul 6 23:54:47.050927 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:54:47.051116 systemd-tmpfiles[1247]: Skipping /boot Jul 6 23:54:47.159606 zram_generator::config[1274]: No configuration found. Jul 6 23:54:47.361719 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:54:47.451108 systemd[1]: Reloading finished in 446 ms. Jul 6 23:54:47.470059 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:54:47.475445 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:54:47.492972 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:54:47.496971 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:54:47.502100 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:54:47.515069 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:54:47.519439 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:54:47.522564 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:54:47.533139 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:54:47.533458 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:54:47.540144 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:54:47.544104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:54:47.548112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:54:47.549953 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:54:47.550167 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:54:47.562519 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:54:47.569301 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:54:47.569627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:54:47.569927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:54:47.570086 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:54:47.575744 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:54:47.576120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:54:47.586116 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 6 23:54:47.588191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:54:47.588465 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:54:47.594902 systemd[1]: Finished ensure-sysext.service. Jul 6 23:54:47.623089 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:54:47.626771 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:54:47.639258 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:54:47.639771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:54:47.641323 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:54:47.642786 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:54:47.644921 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:54:47.645168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:54:47.646382 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:54:47.647753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:54:47.659384 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:54:47.659659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:54:47.659856 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:54:47.664809 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:54:47.687783 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:54:47.695007 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:54:47.703788 augenrules[1355]: No rules Jul 6 23:54:47.705080 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Jul 6 23:54:47.705963 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:54:47.716288 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:54:47.745316 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:54:47.756599 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:54:47.767033 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:54:47.845526 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:54:47.846159 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:54:47.915795 systemd-resolved[1324]: Positive Trust Anchors: Jul 6 23:54:47.915817 systemd-resolved[1324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:54:47.915866 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:54:47.927288 systemd-resolved[1324]: Using system hostname 'ci-4081.3.4-9-29085cf50e'. Jul 6 23:54:47.930862 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:54:47.931865 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:54:47.937273 systemd-networkd[1374]: lo: Link UP Jul 6 23:54:47.937635 systemd-networkd[1374]: lo: Gained carrier Jul 6 23:54:47.940068 systemd-networkd[1374]: Enumeration completed Jul 6 23:54:47.940456 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:54:47.940848 systemd-networkd[1374]: eth0: Configuring with /run/systemd/network/10-22:15:60:5b:67:56.network. Jul 6 23:54:47.941691 systemd-networkd[1374]: eth0: Link UP Jul 6 23:54:47.941766 systemd-networkd[1374]: eth0: Gained carrier Jul 6 23:54:47.942662 systemd[1]: Reached target network.target - Network. Jul 6 23:54:47.947608 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:54:47.948946 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:54:47.949919 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:54:47.998821 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jul 6 23:54:47.999545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:54:47.999745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:54:48.006010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:54:48.010088 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:54:48.014672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:54:48.015907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:54:48.015963 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:54:48.015985 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:54:48.028766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:54:48.029798 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:54:48.041783 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1370) Jul 6 23:54:48.043844 systemd[1]: modprobe@loop.service: Deactivated successfully. 
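Note: systemd-networkd above matches eth0 against a generated runtime unit named after the interface's MAC address (/run/systemd/network/10-22:15:60:5b:67:56.network). A sketch of emitting such a unit; only the path pattern comes from the log, and the [Match]/[Network] contents are an assumption about what the generator writes:

    from pathlib import Path

    def write_network_unit(mac: str, runtime_dir: str = "/run/systemd/network") -> Path:
        # Unit name mirrors the pattern seen in the journal: 10-<mac>.network.
        unit = Path(runtime_dir) / f"10-{mac}.network"
        unit.write_text(
            "[Match]\n"
            f"MACAddress={mac}\n"
            "\n"
            "[Network]\n"
            "DHCP=yes\n"  # assumed; the real generated unit may configure addresses statically
        )
        return unit

    # write_network_unit("22:15:60:5b:67:56")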
Jul 6 23:54:48.044026 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:54:48.053703 kernel: ISO 9660 Extensions: RRIP_1991A Jul 6 23:54:48.056438 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jul 6 23:54:48.059990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:54:48.060241 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:54:48.062198 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:54:48.062275 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:54:48.110704 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 6 23:54:48.128735 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 6 23:54:48.137705 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:54:48.181767 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 6 23:54:48.200083 systemd-networkd[1374]: eth1: Configuring with /run/systemd/network/10-2e:dd:af:91:fa:bf.network. Jul 6 23:54:48.201819 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:54:48.202395 systemd-networkd[1374]: eth1: Link UP Jul 6 23:54:48.202541 systemd-networkd[1374]: eth1: Gained carrier Jul 6 23:54:48.205472 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:54:48.209365 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:54:48.211704 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:54:48.217970 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:54:48.242705 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:54:48.265810 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:54:48.275031 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 6 23:54:48.275127 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 6 23:54:48.281722 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:54:48.283738 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 6 23:54:48.283903 kernel: [drm] features: -context_init Jul 6 23:54:48.283965 kernel: [drm] number of scanouts: 1 Jul 6 23:54:48.283982 kernel: [drm] number of cap sets: 0 Jul 6 23:54:48.288707 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jul 6 23:54:48.292892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:54:48.307701 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jul 6 23:54:48.309630 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:54:48.310007 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:54:48.310225 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:54:48.320704 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 6 23:54:48.333282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:54:48.342194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 6 23:54:48.342437 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:54:48.363968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:54:48.493615 kernel: EDAC MC: Ver: 3.0.0 Jul 6 23:54:48.510003 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:54:48.518119 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:54:48.524068 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:54:48.543716 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:54:48.582997 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:54:48.584395 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:54:48.584507 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:54:48.584699 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:54:48.584805 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:54:48.585079 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:54:48.585231 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:54:48.585303 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:54:48.585363 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:54:48.585386 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:54:48.585438 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:54:48.587102 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:54:48.589241 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:54:48.596885 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:54:48.600820 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:54:48.603455 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:54:48.606467 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:54:48.608581 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:54:48.609415 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:54:48.609463 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:54:48.615463 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:54:48.616965 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:54:48.631072 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:54:48.654339 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:54:48.659894 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:54:48.671958 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 6 23:54:48.672811 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:54:48.682084 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:54:48.689917 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:54:48.695649 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:54:48.704509 dbus-daemon[1433]: [system] SELinux support is enabled Jul 6 23:54:48.710936 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:54:48.718170 coreos-metadata[1432]: Jul 06 23:54:48.717 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 6 23:54:48.721322 jq[1434]: false Jul 6 23:54:48.728023 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:54:48.732215 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:54:48.735862 coreos-metadata[1432]: Jul 06 23:54:48.734 INFO Fetch successful Jul 6 23:54:48.734038 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:54:48.741743 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:54:48.748272 extend-filesystems[1437]: Found loop4 Jul 6 23:54:48.764326 extend-filesystems[1437]: Found loop5 Jul 6 23:54:48.764326 extend-filesystems[1437]: Found loop6 Jul 6 23:54:48.764326 extend-filesystems[1437]: Found loop7 Jul 6 23:54:48.764326 extend-filesystems[1437]: Found vda Jul 6 23:54:48.764326 extend-filesystems[1437]: Found vda1 Jul 6 23:54:48.764326 extend-filesystems[1437]: Found vda2 Jul 6 23:54:48.764326 extend-filesystems[1437]: Found vda3 Jul 6 23:54:48.764326 extend-filesystems[1437]: Found usr Jul 6 23:54:48.764326 extend-filesystems[1437]: Found vda4 Jul 6 23:54:48.764326 extend-filesystems[1437]: Found vda6 Jul 6 23:54:48.764326 extend-filesystems[1437]: Found vda7 Jul 6 23:54:48.764326 extend-filesystems[1437]: Found vda9 Jul 6 23:54:48.764326 extend-filesystems[1437]: Checking size of /dev/vda9 Jul 6 23:54:48.753868 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:54:48.756568 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:54:48.803623 jq[1445]: true Jul 6 23:54:48.767227 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:54:48.786955 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:54:48.787799 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:54:48.794238 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:54:48.794469 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:54:48.827232 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:54:48.827305 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jul 6 23:54:48.833597 update_engine[1444]: I20250706 23:54:48.833482 1444 main.cc:92] Flatcar Update Engine starting Jul 6 23:54:48.836697 update_engine[1444]: I20250706 23:54:48.836436 1444 update_check_scheduler.cc:74] Next update check in 5m33s Jul 6 23:54:48.844934 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:54:48.845088 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jul 6 23:54:48.845123 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:54:48.851616 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1372) Jul 6 23:54:48.849438 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:54:48.862993 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:54:48.878234 extend-filesystems[1437]: Resized partition /dev/vda9 Jul 6 23:54:48.900882 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:54:48.909256 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jul 6 23:54:48.916180 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:54:48.925789 tar[1451]: linux-amd64/helm Jul 6 23:54:48.935312 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:54:48.935598 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:54:48.953538 jq[1452]: true Jul 6 23:54:48.984824 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:54:49.007640 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:54:49.061486 systemd-logind[1443]: New seat seat0. Jul 6 23:54:49.071504 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:54:49.076533 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:54:49.076931 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:54:49.090715 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 6 23:54:49.101450 extend-filesystems[1472]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:54:49.101450 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 6 23:54:49.101450 extend-filesystems[1472]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 6 23:54:49.112958 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Jul 6 23:54:49.112958 extend-filesystems[1437]: Found vdb Jul 6 23:54:49.114607 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:54:49.114855 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:54:49.169122 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:54:49.171436 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:54:49.187112 systemd[1]: Starting sshkeys.service... Jul 6 23:54:49.217447 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
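extend-filesystems grew the mounted root filesystem on /dev/vda9 online from 553472 to 15121403 4 KiB blocks, i.e. from roughly 2.1 GiB to about 58 GiB, which is why resize2fs reports "on-line resizing required". A sketch of the equivalent manual steps, assuming an ext4 root on /dev/vda9 and that growpart (from cloud-utils, an assumption, it is not named in this log) is available:

    # Grow partition 9 of /dev/vda into the free space, if the partition itself is small
    growpart /dev/vda 9
    # Resize the mounted ext4 filesystem online to fill the partition
    resize2fs /dev/vda9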
Jul 6 23:54:49.230332 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 6 23:54:49.338254 coreos-metadata[1501]: Jul 06 23:54:49.338 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 6 23:54:49.353483 coreos-metadata[1501]: Jul 06 23:54:49.353 INFO Fetch successful Jul 6 23:54:49.370477 unknown[1501]: wrote ssh authorized keys file for user: core Jul 6 23:54:49.408655 update-ssh-keys[1511]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:54:49.410749 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:54:49.414957 systemd[1]: Finished sshkeys.service. Jul 6 23:54:49.427476 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:54:49.525715 containerd[1469]: time="2025-07-06T23:54:49.523644507Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:54:49.559923 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:54:49.587203 containerd[1469]: time="2025-07-06T23:54:49.587138092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:54:49.591776 containerd[1469]: time="2025-07-06T23:54:49.589975515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:54:49.591776 containerd[1469]: time="2025-07-06T23:54:49.590021393Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:54:49.591776 containerd[1469]: time="2025-07-06T23:54:49.590039997Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:54:49.591776 containerd[1469]: time="2025-07-06T23:54:49.590219161Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:54:49.591776 containerd[1469]: time="2025-07-06T23:54:49.590238487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:54:49.591776 containerd[1469]: time="2025-07-06T23:54:49.590310738Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:54:49.591776 containerd[1469]: time="2025-07-06T23:54:49.590329977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:54:49.591776 containerd[1469]: time="2025-07-06T23:54:49.590651770Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:54:49.591776 containerd[1469]: time="2025-07-06T23:54:49.590696547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:54:49.591776 containerd[1469]: time="2025-07-06T23:54:49.590756246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:54:49.591776 containerd[1469]: time="2025-07-06T23:54:49.590788173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:54:49.592244 containerd[1469]: time="2025-07-06T23:54:49.590920878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:54:49.592244 containerd[1469]: time="2025-07-06T23:54:49.591243845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:54:49.592244 containerd[1469]: time="2025-07-06T23:54:49.591460791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:54:49.592244 containerd[1469]: time="2025-07-06T23:54:49.591488161Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:54:49.592244 containerd[1469]: time="2025-07-06T23:54:49.591618893Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:54:49.592602 containerd[1469]: time="2025-07-06T23:54:49.592567642Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:54:49.598334 containerd[1469]: time="2025-07-06T23:54:49.598004232Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:54:49.598334 containerd[1469]: time="2025-07-06T23:54:49.598099890Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:54:49.598334 containerd[1469]: time="2025-07-06T23:54:49.598125440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:54:49.598334 containerd[1469]: time="2025-07-06T23:54:49.598147221Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:54:49.598334 containerd[1469]: time="2025-07-06T23:54:49.598200890Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:54:49.598972 containerd[1469]: time="2025-07-06T23:54:49.598924372Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:54:49.599693 containerd[1469]: time="2025-07-06T23:54:49.599643152Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:54:49.599869 containerd[1469]: time="2025-07-06T23:54:49.599844037Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:54:49.599869 containerd[1469]: time="2025-07-06T23:54:49.599867847Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:54:49.599950 containerd[1469]: time="2025-07-06T23:54:49.599882732Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:54:49.599950 containerd[1469]: time="2025-07-06T23:54:49.599896352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 6 23:54:49.599950 containerd[1469]: time="2025-07-06T23:54:49.599908647Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:54:49.599950 containerd[1469]: time="2025-07-06T23:54:49.599925777Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:54:49.599950 containerd[1469]: time="2025-07-06T23:54:49.599942701Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:54:49.600085 containerd[1469]: time="2025-07-06T23:54:49.599961499Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:54:49.600085 containerd[1469]: time="2025-07-06T23:54:49.599976710Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:54:49.600085 containerd[1469]: time="2025-07-06T23:54:49.599988282Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:54:49.600085 containerd[1469]: time="2025-07-06T23:54:49.600003139Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:54:49.600085 containerd[1469]: time="2025-07-06T23:54:49.600030483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.600085 containerd[1469]: time="2025-07-06T23:54:49.600047817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.600085 containerd[1469]: time="2025-07-06T23:54:49.600059283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.600085 containerd[1469]: time="2025-07-06T23:54:49.600073810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.600085 containerd[1469]: time="2025-07-06T23:54:49.600084892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600106028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600125182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600147784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600162819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600177246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600187639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600213374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600236376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600267320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600307722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600327699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600339080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600400174Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:54:49.602054 containerd[1469]: time="2025-07-06T23:54:49.600420358Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:54:49.602506 containerd[1469]: time="2025-07-06T23:54:49.600431259Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:54:49.602506 containerd[1469]: time="2025-07-06T23:54:49.600447772Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:54:49.602506 containerd[1469]: time="2025-07-06T23:54:49.600464839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:54:49.602506 containerd[1469]: time="2025-07-06T23:54:49.600482848Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:54:49.602506 containerd[1469]: time="2025-07-06T23:54:49.600503153Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:54:49.602506 containerd[1469]: time="2025-07-06T23:54:49.600514498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:54:49.602751 containerd[1469]: time="2025-07-06T23:54:49.601174404Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:54:49.602751 containerd[1469]: time="2025-07-06T23:54:49.601240540Z" level=info msg="Connect containerd service" Jul 6 23:54:49.602751 containerd[1469]: time="2025-07-06T23:54:49.601286729Z" level=info msg="using legacy CRI server" Jul 6 23:54:49.602751 containerd[1469]: time="2025-07-06T23:54:49.601295509Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:54:49.602751 containerd[1469]: time="2025-07-06T23:54:49.601422663Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:54:49.608094 containerd[1469]: time="2025-07-06T23:54:49.604521660Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:54:49.608094 
containerd[1469]: time="2025-07-06T23:54:49.604654897Z" level=info msg="Start subscribing containerd event" Jul 6 23:54:49.608094 containerd[1469]: time="2025-07-06T23:54:49.604750603Z" level=info msg="Start recovering state" Jul 6 23:54:49.608094 containerd[1469]: time="2025-07-06T23:54:49.604853539Z" level=info msg="Start event monitor" Jul 6 23:54:49.608094 containerd[1469]: time="2025-07-06T23:54:49.604870225Z" level=info msg="Start snapshots syncer" Jul 6 23:54:49.608094 containerd[1469]: time="2025-07-06T23:54:49.604885416Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:54:49.608094 containerd[1469]: time="2025-07-06T23:54:49.604896621Z" level=info msg="Start streaming server" Jul 6 23:54:49.608094 containerd[1469]: time="2025-07-06T23:54:49.604939493Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:54:49.608094 containerd[1469]: time="2025-07-06T23:54:49.604987730Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:54:49.605171 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:54:49.610067 containerd[1469]: time="2025-07-06T23:54:49.610010344Z" level=info msg="containerd successfully booted in 0.090743s" Jul 6 23:54:49.630194 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:54:49.641056 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:54:49.656872 systemd-networkd[1374]: eth0: Gained IPv6LL Jul 6 23:54:49.657358 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:54:49.663163 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:54:49.665867 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:54:49.673617 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:54:49.678252 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:54:49.689030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:54:49.693005 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:54:49.698021 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:54:49.720824 systemd-networkd[1374]: eth1: Gained IPv6LL Jul 6 23:54:49.721322 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:54:49.724560 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:54:49.737318 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:54:49.749267 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:54:49.749942 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:54:49.766183 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:54:49.912782 tar[1451]: linux-amd64/LICENSE Jul 6 23:54:49.912782 tar[1451]: linux-amd64/README.md Jul 6 23:54:49.925190 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:54:50.977808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:54:50.979518 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:54:50.984023 systemd[1]: Startup finished in 1.308s (kernel) + 5.948s (initrd) + 6.366s (userspace) = 13.623s. 
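containerd booted successfully but its CRI plugin logged "no network config found in /etc/cni/net.d" earlier in this sequence; that is normal before a pod network add-on is installed. A minimal sketch of a CNI config that would satisfy the loader, assuming the standard bridge and host-local plugins exist under /opt/cni/bin (the directory the CRI config above points at); this file is illustrative only, since a real cluster gets its config from its network add-on:

    # Write a minimal bridge network definition where the CRI plugin looks for one
    cat >/etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.88.0.0/16" }]] }
        }
      ]
    }
    EOF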
Jul 6 23:54:50.989706 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:54:51.707950 kubelet[1556]: E0706 23:54:51.707841 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:54:51.709743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:54:51.709904 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:54:51.710310 systemd[1]: kubelet.service: Consumed 1.359s CPU time. Jul 6 23:54:52.355113 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:54:52.369203 systemd[1]: Started sshd@0-146.190.157.121:22-139.178.89.65:34404.service - OpenSSH per-connection server daemon (139.178.89.65:34404). Jul 6 23:54:52.437690 sshd[1568]: Accepted publickey for core from 139.178.89.65 port 34404 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:54:52.440546 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:52.453660 systemd-logind[1443]: New session 1 of user core. Jul 6 23:54:52.455445 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:54:52.462138 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:54:52.479436 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:54:52.486173 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:54:52.503627 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:54:52.616328 systemd[1572]: Queued start job for default target default.target. Jul 6 23:54:52.629080 systemd[1572]: Created slice app.slice - User Application Slice. Jul 6 23:54:52.629117 systemd[1572]: Reached target paths.target - Paths. Jul 6 23:54:52.629133 systemd[1572]: Reached target timers.target - Timers. Jul 6 23:54:52.630752 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:54:52.645899 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:54:52.646032 systemd[1572]: Reached target sockets.target - Sockets. Jul 6 23:54:52.646048 systemd[1572]: Reached target basic.target - Basic System. Jul 6 23:54:52.646099 systemd[1572]: Reached target default.target - Main User Target. Jul 6 23:54:52.646133 systemd[1572]: Startup finished in 133ms. Jul 6 23:54:52.646251 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:54:52.656014 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:54:52.729159 systemd[1]: Started sshd@1-146.190.157.121:22-139.178.89.65:34408.service - OpenSSH per-connection server daemon (139.178.89.65:34408). Jul 6 23:54:52.769045 sshd[1583]: Accepted publickey for core from 139.178.89.65 port 34408 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:54:52.771260 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:52.777326 systemd-logind[1443]: New session 2 of user core. Jul 6 23:54:52.783932 systemd[1]: Started session-2.scope - Session 2 of User core. 
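The kubelet exits immediately above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is only written when the node is initialized or joined, so a failure-and-restart loop is expected until then (the kubeadm detail is an inference from the file path, not stated in the log). A sketch of confirming the cause from a shell:

    # The config file the kubelet was pointed at is absent until node init
    ls -l /var/lib/kubelet/config.yaml
    # The same error message, straight from the unit's journal
    journalctl -u kubelet -n 20 --no-pager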
Jul 6 23:54:52.846992 sshd[1583]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:52.858889 systemd[1]: sshd@1-146.190.157.121:22-139.178.89.65:34408.service: Deactivated successfully. Jul 6 23:54:52.861100 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:54:52.862947 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:54:52.868264 systemd[1]: Started sshd@2-146.190.157.121:22-139.178.89.65:34422.service - OpenSSH per-connection server daemon (139.178.89.65:34422). Jul 6 23:54:52.869964 systemd-logind[1443]: Removed session 2. Jul 6 23:54:52.919111 sshd[1590]: Accepted publickey for core from 139.178.89.65 port 34422 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:54:52.920735 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:52.926116 systemd-logind[1443]: New session 3 of user core. Jul 6 23:54:52.936936 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:54:52.993369 sshd[1590]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:53.006748 systemd[1]: sshd@2-146.190.157.121:22-139.178.89.65:34422.service: Deactivated successfully. Jul 6 23:54:53.008947 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:54:53.010622 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:54:53.017057 systemd[1]: Started sshd@3-146.190.157.121:22-139.178.89.65:34424.service - OpenSSH per-connection server daemon (139.178.89.65:34424). Jul 6 23:54:53.018224 systemd-logind[1443]: Removed session 3. Jul 6 23:54:53.057666 sshd[1597]: Accepted publickey for core from 139.178.89.65 port 34424 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:54:53.059463 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:53.064956 systemd-logind[1443]: New session 4 of user core. Jul 6 23:54:53.071935 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:54:53.136309 sshd[1597]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:53.146218 systemd[1]: sshd@3-146.190.157.121:22-139.178.89.65:34424.service: Deactivated successfully. Jul 6 23:54:53.148229 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:54:53.149821 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:54:53.157084 systemd[1]: Started sshd@4-146.190.157.121:22-139.178.89.65:34426.service - OpenSSH per-connection server daemon (139.178.89.65:34426). Jul 6 23:54:53.159226 systemd-logind[1443]: Removed session 4. Jul 6 23:54:53.198503 sshd[1604]: Accepted publickey for core from 139.178.89.65 port 34426 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:54:53.200146 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:53.204910 systemd-logind[1443]: New session 5 of user core. Jul 6 23:54:53.218979 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:54:53.288909 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:54:53.289639 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:54:53.302288 sudo[1607]: pam_unix(sudo:session): session closed for user root Jul 6 23:54:53.306817 sshd[1604]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:53.320667 systemd[1]: sshd@4-146.190.157.121:22-139.178.89.65:34426.service: Deactivated successfully. 
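The first sudo command the core user runs over SSH is /usr/sbin/setenforce 1, switching SELinux to enforcing mode at runtime. A sketch of verifying the resulting mode, assuming the standard SELinux userland tools are present:

    # Runtime mode: Enforcing or Permissive
    getenforce
    # Fuller report, including the loaded policy name
    sestatus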
Jul 6 23:54:53.322802 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:54:53.325783 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:54:53.330404 systemd[1]: Started sshd@5-146.190.157.121:22-139.178.89.65:34430.service - OpenSSH per-connection server daemon (139.178.89.65:34430). Jul 6 23:54:53.332341 systemd-logind[1443]: Removed session 5. Jul 6 23:54:53.377262 sshd[1612]: Accepted publickey for core from 139.178.89.65 port 34430 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:54:53.379271 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:53.385750 systemd-logind[1443]: New session 6 of user core. Jul 6 23:54:53.397026 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:54:53.459948 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:54:53.460268 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:54:53.464490 sudo[1616]: pam_unix(sudo:session): session closed for user root Jul 6 23:54:53.471420 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:54:53.472320 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:54:53.487132 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:54:53.500836 auditctl[1619]: No rules Jul 6 23:54:53.501284 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:54:53.501534 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:54:53.509577 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:54:53.539226 augenrules[1637]: No rules Jul 6 23:54:53.540458 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:54:53.542121 sudo[1615]: pam_unix(sudo:session): session closed for user root Jul 6 23:54:53.547211 sshd[1612]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:53.556036 systemd[1]: sshd@5-146.190.157.121:22-139.178.89.65:34430.service: Deactivated successfully. Jul 6 23:54:53.557831 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:54:53.559787 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:54:53.563091 systemd[1]: Started sshd@6-146.190.157.121:22-139.178.89.65:34440.service - OpenSSH per-connection server daemon (139.178.89.65:34440). Jul 6 23:54:53.565296 systemd-logind[1443]: Removed session 6. Jul 6 23:54:53.607970 sshd[1645]: Accepted publickey for core from 139.178.89.65 port 34440 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:54:53.610052 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:53.615268 systemd-logind[1443]: New session 7 of user core. Jul 6 23:54:53.630293 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:54:53.689399 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:54:53.691212 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:54:54.135264 systemd[1]: Starting docker.service - Docker Application Container Engine... 
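The session above removes the default audit rule files and restarts audit-rules; both auditctl and augenrules then report "No rules", meaning the kernel's audit ruleset is now empty. A sketch of inspecting and reloading rules by hand, assuming auditd's standard tooling:

    # Show the rules currently loaded in the kernel (here: none)
    auditctl -l
    # Recompile /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and load them
    augenrules --load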
Jul 6 23:54:54.135378 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:54:54.619656 dockerd[1666]: time="2025-07-06T23:54:54.619588436Z" level=info msg="Starting up" Jul 6 23:54:54.741159 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3743957804-merged.mount: Deactivated successfully. Jul 6 23:54:54.811659 dockerd[1666]: time="2025-07-06T23:54:54.811210838Z" level=info msg="Loading containers: start." Jul 6 23:54:54.934873 kernel: Initializing XFRM netlink socket Jul 6 23:54:54.962150 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:54:54.962414 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:54:54.962482 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:54:55.021530 systemd-networkd[1374]: docker0: Link UP Jul 6 23:54:55.021891 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jul 6 23:54:55.035713 dockerd[1666]: time="2025-07-06T23:54:55.035654660Z" level=info msg="Loading containers: done." Jul 6 23:54:55.061412 dockerd[1666]: time="2025-07-06T23:54:55.061324034Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:54:55.061659 dockerd[1666]: time="2025-07-06T23:54:55.061479436Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:54:55.061659 dockerd[1666]: time="2025-07-06T23:54:55.061641779Z" level=info msg="Daemon has completed initialization" Jul 6 23:54:55.103883 dockerd[1666]: time="2025-07-06T23:54:55.103761149Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:54:55.103995 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:54:55.738337 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1033478890-merged.mount: Deactivated successfully. Jul 6 23:54:55.918391 containerd[1469]: time="2025-07-06T23:54:55.918342302Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 6 23:54:56.453547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710186120.mount: Deactivated successfully. 
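dockerd comes up on the overlay2 storage driver but warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, so image builds fall back to a slower diff path. A sketch of confirming the driver the daemon settled on, assuming the docker CLI is installed:

    # Storage driver and its status details as the daemon sees them
    docker info --format '{{.Driver}} {{json .DriverStatus}}'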
Jul 6 23:54:57.493709 containerd[1469]: time="2025-07-06T23:54:57.493501596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:57.494897 containerd[1469]: time="2025-07-06T23:54:57.494835413Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 6 23:54:57.495220 containerd[1469]: time="2025-07-06T23:54:57.495163427Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:57.498747 containerd[1469]: time="2025-07-06T23:54:57.498361953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:57.501037 containerd[1469]: time="2025-07-06T23:54:57.500947492Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.582565461s" Jul 6 23:54:57.501037 containerd[1469]: time="2025-07-06T23:54:57.500990997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 6 23:54:57.501699 containerd[1469]: time="2025-07-06T23:54:57.501650157Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 6 23:54:58.794762 containerd[1469]: time="2025-07-06T23:54:58.794699483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:58.795989 containerd[1469]: time="2025-07-06T23:54:58.795914859Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 6 23:54:58.796386 containerd[1469]: time="2025-07-06T23:54:58.796322216Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:58.799252 containerd[1469]: time="2025-07-06T23:54:58.799193119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:58.800480 containerd[1469]: time="2025-07-06T23:54:58.800203246Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.298524315s" Jul 6 23:54:58.800480 containerd[1469]: time="2025-07-06T23:54:58.800241442Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 6 23:54:58.800763 
containerd[1469]: time="2025-07-06T23:54:58.800731737Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 6 23:55:00.039584 containerd[1469]: time="2025-07-06T23:55:00.038793194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:00.039584 containerd[1469]: time="2025-07-06T23:55:00.039524822Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 6 23:55:00.040741 containerd[1469]: time="2025-07-06T23:55:00.040667908Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:00.044517 containerd[1469]: time="2025-07-06T23:55:00.044463753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:00.045968 containerd[1469]: time="2025-07-06T23:55:00.045922449Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.245164649s" Jul 6 23:55:00.046122 containerd[1469]: time="2025-07-06T23:55:00.046105262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 6 23:55:00.046902 containerd[1469]: time="2025-07-06T23:55:00.046817036Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:55:01.258123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3743808717.mount: Deactivated successfully. Jul 6 23:55:01.794555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:55:01.806103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:02.127816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:02.140435 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:02.243581 kubelet[1892]: E0706 23:55:02.243481 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:02.250350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:02.250512 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
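systemd restarts the kubelet on failure ("Scheduled restart job, restart counter is at 1"), and this second attempt fails identically because /var/lib/kubelet/config.yaml still does not exist. A sketch of reading the unit's restart behaviour, assuming systemctl access; property names are standard systemd ones:

    # Restart policy and delay that produce the 'Scheduled restart job' messages
    systemctl show kubelet -p Restart -p RestartUSec
    # How many times the unit has been restarted so far
    systemctl show kubelet -p NRestarts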
Jul 6 23:55:02.262370 containerd[1469]: time="2025-07-06T23:55:02.261266048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:02.262370 containerd[1469]: time="2025-07-06T23:55:02.262299071Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 6 23:55:02.263294 containerd[1469]: time="2025-07-06T23:55:02.263246208Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:02.266423 containerd[1469]: time="2025-07-06T23:55:02.266325759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:02.267668 containerd[1469]: time="2025-07-06T23:55:02.267610630Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.220589006s" Jul 6 23:55:02.267850 containerd[1469]: time="2025-07-06T23:55:02.267832623Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 6 23:55:02.268850 containerd[1469]: time="2025-07-06T23:55:02.268809749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:55:02.270485 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jul 6 23:55:02.766717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2900004799.mount: Deactivated successfully. 
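The control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy so far) are being pulled through containerd's CRI layer, not through docker, so they are visible to crictl rather than docker images. A sketch of listing them, assuming crictl is installed and pointed at the containerd socket named earlier in the log:

    # Tell crictl which CRI endpoint to use (matches the socket in this log)
    export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
    # List images pulled so far
    crictl images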
Jul 6 23:55:03.838860 containerd[1469]: time="2025-07-06T23:55:03.838646462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:03.845285 containerd[1469]: time="2025-07-06T23:55:03.845218850Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 6 23:55:03.846066 containerd[1469]: time="2025-07-06T23:55:03.845474137Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:03.849561 containerd[1469]: time="2025-07-06T23:55:03.849495117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:03.851448 containerd[1469]: time="2025-07-06T23:55:03.851378391Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.58251929s" Jul 6 23:55:03.851448 containerd[1469]: time="2025-07-06T23:55:03.851443943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:55:03.852194 containerd[1469]: time="2025-07-06T23:55:03.852156480Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:55:04.349100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount393487891.mount: Deactivated successfully. 
Jul 6 23:55:04.354489 containerd[1469]: time="2025-07-06T23:55:04.354389399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:04.355990 containerd[1469]: time="2025-07-06T23:55:04.355939543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:55:04.356692 containerd[1469]: time="2025-07-06T23:55:04.356652234Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:04.359107 containerd[1469]: time="2025-07-06T23:55:04.359063701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:04.361032 containerd[1469]: time="2025-07-06T23:55:04.360992113Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.793718ms" Jul 6 23:55:04.361032 containerd[1469]: time="2025-07-06T23:55:04.361032034Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:55:04.361895 containerd[1469]: time="2025-07-06T23:55:04.361559219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 6 23:55:04.872499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount636415065.mount: Deactivated successfully. Jul 6 23:55:05.337960 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
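systemd-resolved has now downgraded to "degraded feature set UDP" for both DigitalOcean resolvers (67.207.67.3 and 67.207.67.2), which typically means EDNS0 responses were lost or truncated; resolution keeps working, just without the larger UDP payloads. A sketch of checking the per-link DNS state, assuming resolvectl is available:

    # Current DNS servers and the feature level negotiated per link
    resolvectl status
    # Query through the stub resolver to confirm resolution still works
    resolvectl query registry.k8s.io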
Jul 6 23:55:06.859555 containerd[1469]: time="2025-07-06T23:55:06.859479267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:06.861424 containerd[1469]: time="2025-07-06T23:55:06.861354446Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 6 23:55:06.862394 containerd[1469]: time="2025-07-06T23:55:06.862336865Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:06.866745 containerd[1469]: time="2025-07-06T23:55:06.865288899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:06.867320 containerd[1469]: time="2025-07-06T23:55:06.867271479Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.505667474s" Jul 6 23:55:06.867465 containerd[1469]: time="2025-07-06T23:55:06.867447059Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 6 23:55:10.013435 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:10.028192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:10.077269 systemd[1]: Reloading requested from client PID 2036 ('systemctl') (unit session-7.scope)... Jul 6 23:55:10.077583 systemd[1]: Reloading... Jul 6 23:55:10.270732 zram_generator::config[2076]: No configuration found. Jul 6 23:55:10.430883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:10.551556 systemd[1]: Reloading finished in 473 ms. Jul 6 23:55:10.640042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:10.644499 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:55:10.647641 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:10.648076 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:55:10.648376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:10.660181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:10.841022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:10.841462 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:55:10.924156 kubelet[2132]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
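Once the node config exists, kubelet v1.31.8 starts but warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated flags whose values belong in the config file. A sketch of the equivalent KubeletConfiguration fields, printed as a heredoc for reference and assuming the documented kubelet.config.k8s.io/v1beta1 schema (the pod-infra image has no config-file field, which is why its warning says the value will come from CRI instead); the volume plugin path matches the flexvolume directory the kubelet recreates later in this log:

    # Equivalent KubeletConfiguration fields for the deprecated flags (excerpt)
    cat <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF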
Jul 6 23:55:10.924623 kubelet[2132]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:55:10.924746 kubelet[2132]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:55:10.925013 kubelet[2132]: I0706 23:55:10.924967 2132 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:55:11.165264 kubelet[2132]: I0706 23:55:11.165057 2132 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:55:11.165264 kubelet[2132]: I0706 23:55:11.165129 2132 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:55:11.166104 kubelet[2132]: I0706 23:55:11.166033 2132 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:55:11.199252 kubelet[2132]: I0706 23:55:11.199205 2132 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:55:11.200984 kubelet[2132]: E0706 23:55:11.200414 2132 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://146.190.157.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.157.121:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:11.209096 kubelet[2132]: E0706 23:55:11.209045 2132 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:55:11.209096 kubelet[2132]: I0706 23:55:11.209087 2132 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:55:11.214732 kubelet[2132]: I0706 23:55:11.214669 2132 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:55:11.215701 kubelet[2132]: I0706 23:55:11.215611 2132 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:55:11.215937 kubelet[2132]: I0706 23:55:11.215873 2132 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:55:11.216183 kubelet[2132]: I0706 23:55:11.215931 2132 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-9-29085cf50e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:55:11.216349 kubelet[2132]: I0706 23:55:11.216194 2132 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:55:11.216349 kubelet[2132]: I0706 23:55:11.216208 2132 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:55:11.216407 kubelet[2132]: I0706 23:55:11.216379 2132 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:11.222121 kubelet[2132]: I0706 23:55:11.221599 2132 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:55:11.222121 kubelet[2132]: I0706 23:55:11.221667 2132 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:55:11.222121 kubelet[2132]: I0706 23:55:11.221795 2132 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:55:11.222121 kubelet[2132]: I0706 23:55:11.221837 2132 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:55:11.226718 kubelet[2132]: W0706 23:55:11.226613 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.157.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-9-29085cf50e&limit=500&resourceVersion=0": dial tcp 146.190.157.121:6443: connect: connection refused Jul 6 23:55:11.227367 kubelet[2132]: E0706 23:55:11.227327 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://146.190.157.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-9-29085cf50e&limit=500&resourceVersion=0\": dial tcp 146.190.157.121:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:11.227665 kubelet[2132]: I0706 23:55:11.227645 2132 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:55:11.231751 kubelet[2132]: I0706 23:55:11.231539 2132 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:55:11.232714 kubelet[2132]: W0706 23:55:11.232382 2132 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:55:11.233138 kubelet[2132]: W0706 23:55:11.233077 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.157.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.157.121:6443: connect: connection refused Jul 6 23:55:11.233236 kubelet[2132]: E0706 23:55:11.233148 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.157.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.157.121:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:11.233752 kubelet[2132]: I0706 23:55:11.233734 2132 server.go:1274] "Started kubelet" Jul 6 23:55:11.235543 kubelet[2132]: I0706 23:55:11.235454 2132 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:55:11.254726 kubelet[2132]: I0706 23:55:11.253421 2132 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:55:11.254726 kubelet[2132]: E0706 23:55:11.250202 2132 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.157.121:6443/api/v1/namespaces/default/events\": dial tcp 146.190.157.121:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-9-29085cf50e.184fcec1a93b8ab9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-9-29085cf50e,UID:ci-4081.3.4-9-29085cf50e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-9-29085cf50e,},FirstTimestamp:2025-07-06 23:55:11.233673913 +0000 UTC m=+0.384335191,LastTimestamp:2025-07-06 23:55:11.233673913 +0000 UTC m=+0.384335191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-9-29085cf50e,}" Jul 6 23:55:11.254726 kubelet[2132]: I0706 23:55:11.254366 2132 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:55:11.255192 kubelet[2132]: I0706 23:55:11.255173 2132 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:55:11.255593 kubelet[2132]: I0706 23:55:11.255572 2132 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:55:11.260292 kubelet[2132]: E0706 23:55:11.260254 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-9-29085cf50e\" not 
found" Jul 6 23:55:11.261563 kubelet[2132]: I0706 23:55:11.260859 2132 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:55:11.265279 kubelet[2132]: I0706 23:55:11.265240 2132 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:55:11.266955 kubelet[2132]: I0706 23:55:11.265808 2132 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:55:11.266955 kubelet[2132]: I0706 23:55:11.265882 2132 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:55:11.267451 kubelet[2132]: I0706 23:55:11.267428 2132 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:55:11.267658 kubelet[2132]: I0706 23:55:11.267640 2132 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:55:11.268514 kubelet[2132]: E0706 23:55:11.268480 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.157.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-9-29085cf50e?timeout=10s\": dial tcp 146.190.157.121:6443: connect: connection refused" interval="200ms" Jul 6 23:55:11.270254 kubelet[2132]: W0706 23:55:11.270184 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.157.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.157.121:6443: connect: connection refused Jul 6 23:55:11.270471 kubelet[2132]: E0706 23:55:11.270449 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.157.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.157.121:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:11.272166 kubelet[2132]: E0706 23:55:11.272129 2132 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:55:11.272377 kubelet[2132]: I0706 23:55:11.272358 2132 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:55:11.297337 kubelet[2132]: I0706 23:55:11.297269 2132 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:55:11.299284 kubelet[2132]: I0706 23:55:11.299247 2132 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:55:11.299284 kubelet[2132]: I0706 23:55:11.299283 2132 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:55:11.299449 kubelet[2132]: I0706 23:55:11.299341 2132 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:55:11.299449 kubelet[2132]: E0706 23:55:11.299396 2132 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:55:11.316943 kubelet[2132]: W0706 23:55:11.316165 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.157.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.157.121:6443: connect: connection refused Jul 6 23:55:11.316943 kubelet[2132]: E0706 23:55:11.316262 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.157.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.157.121:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:11.318818 kubelet[2132]: I0706 23:55:11.318788 2132 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:55:11.319014 kubelet[2132]: I0706 23:55:11.319001 2132 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:55:11.319099 kubelet[2132]: I0706 23:55:11.319088 2132 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:11.321387 kubelet[2132]: I0706 23:55:11.321353 2132 policy_none.go:49] "None policy: Start" Jul 6 23:55:11.323174 kubelet[2132]: I0706 23:55:11.323144 2132 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:55:11.323619 kubelet[2132]: I0706 23:55:11.323414 2132 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:55:11.333500 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:55:11.348653 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:55:11.354622 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:55:11.361716 kubelet[2132]: E0706 23:55:11.361623 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-9-29085cf50e\" not found" Jul 6 23:55:11.364340 kubelet[2132]: I0706 23:55:11.364260 2132 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:55:11.364667 kubelet[2132]: I0706 23:55:11.364631 2132 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:55:11.364766 kubelet[2132]: I0706 23:55:11.364653 2132 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:55:11.365303 kubelet[2132]: I0706 23:55:11.365267 2132 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:55:11.370898 kubelet[2132]: E0706 23:55:11.370830 2132 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-9-29085cf50e\" not found" Jul 6 23:55:11.415505 systemd[1]: Created slice kubepods-burstable-podb471b59253ab86a62c6833901bd19f4f.slice - libcontainer container kubepods-burstable-podb471b59253ab86a62c6833901bd19f4f.slice. 
Jul 6 23:55:11.442170 systemd[1]: Created slice kubepods-burstable-pod19c9c40a17782ab5fb1965c2dae52d01.slice - libcontainer container kubepods-burstable-pod19c9c40a17782ab5fb1965c2dae52d01.slice. Jul 6 23:55:11.451370 systemd[1]: Created slice kubepods-burstable-podb8dacc1a6672f0f60541677bf0dd6abf.slice - libcontainer container kubepods-burstable-podb8dacc1a6672f0f60541677bf0dd6abf.slice. Jul 6 23:55:11.472286 kubelet[2132]: I0706 23:55:11.471580 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19c9c40a17782ab5fb1965c2dae52d01-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-9-29085cf50e\" (UID: \"19c9c40a17782ab5fb1965c2dae52d01\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.472286 kubelet[2132]: I0706 23:55:11.471656 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8dacc1a6672f0f60541677bf0dd6abf-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-9-29085cf50e\" (UID: \"b8dacc1a6672f0f60541677bf0dd6abf\") " pod="kube-system/kube-scheduler-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.472286 kubelet[2132]: I0706 23:55:11.471716 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b471b59253ab86a62c6833901bd19f4f-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-9-29085cf50e\" (UID: \"b471b59253ab86a62c6833901bd19f4f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.472286 kubelet[2132]: I0706 23:55:11.471746 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b471b59253ab86a62c6833901bd19f4f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-9-29085cf50e\" (UID: \"b471b59253ab86a62c6833901bd19f4f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.472286 kubelet[2132]: I0706 23:55:11.471783 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19c9c40a17782ab5fb1965c2dae52d01-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-9-29085cf50e\" (UID: \"19c9c40a17782ab5fb1965c2dae52d01\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.472618 kubelet[2132]: I0706 23:55:11.471806 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19c9c40a17782ab5fb1965c2dae52d01-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-9-29085cf50e\" (UID: \"19c9c40a17782ab5fb1965c2dae52d01\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.472618 kubelet[2132]: I0706 23:55:11.471830 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19c9c40a17782ab5fb1965c2dae52d01-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-9-29085cf50e\" (UID: \"19c9c40a17782ab5fb1965c2dae52d01\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.472618 kubelet[2132]: I0706 23:55:11.471852 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b471b59253ab86a62c6833901bd19f4f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-9-29085cf50e\" (UID: \"b471b59253ab86a62c6833901bd19f4f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.472618 kubelet[2132]: I0706 23:55:11.471876 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19c9c40a17782ab5fb1965c2dae52d01-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-9-29085cf50e\" (UID: \"19c9c40a17782ab5fb1965c2dae52d01\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.472618 kubelet[2132]: E0706 23:55:11.472092 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.157.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-9-29085cf50e?timeout=10s\": dial tcp 146.190.157.121:6443: connect: connection refused" interval="400ms" Jul 6 23:55:11.472932 kubelet[2132]: I0706 23:55:11.472889 2132 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.473492 kubelet[2132]: E0706 23:55:11.473455 2132 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://146.190.157.121:6443/api/v1/nodes\": dial tcp 146.190.157.121:6443: connect: connection refused" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.675393 kubelet[2132]: I0706 23:55:11.674903 2132 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.675393 kubelet[2132]: E0706 23:55:11.675332 2132 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://146.190.157.121:6443/api/v1/nodes\": dial tcp 146.190.157.121:6443: connect: connection refused" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:11.738898 kubelet[2132]: E0706 23:55:11.738797 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:11.739753 containerd[1469]: time="2025-07-06T23:55:11.739719175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-9-29085cf50e,Uid:b471b59253ab86a62c6833901bd19f4f,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:11.741804 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Jul 6 23:55:11.748600 kubelet[2132]: E0706 23:55:11.748522 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:11.755423 containerd[1469]: time="2025-07-06T23:55:11.755364736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-9-29085cf50e,Uid:19c9c40a17782ab5fb1965c2dae52d01,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:11.755904 kubelet[2132]: E0706 23:55:11.755751 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:11.758639 containerd[1469]: time="2025-07-06T23:55:11.758216358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-9-29085cf50e,Uid:b8dacc1a6672f0f60541677bf0dd6abf,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:11.872668 kubelet[2132]: E0706 23:55:11.872602 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.157.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-9-29085cf50e?timeout=10s\": dial tcp 146.190.157.121:6443: connect: connection refused" interval="800ms" Jul 6 23:55:12.077055 kubelet[2132]: I0706 23:55:12.076992 2132 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:12.077631 kubelet[2132]: E0706 23:55:12.077412 2132 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://146.190.157.121:6443/api/v1/nodes\": dial tcp 146.190.157.121:6443: connect: connection refused" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:12.191147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857706908.mount: Deactivated successfully. 
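The recurring dns.go:153 error is the kubelet's resolv.conf sanity check: the host resolver list exceeds the three-nameserver limit the kubelet will propagate to pods, so entries get dropped (the applied line it settles on even carries 67.207.67.2 twice). One way to quiet it is to point the kubelet at a trimmed resolver file; the file path here is an assumption, while resolvConf is the standard KubeletConfiguration field:

    # KubeletConfiguration fragment (sketch)
    resolvConf: /etc/kubernetes/kubelet-resolv.conf
    # where that file would contain only, e.g.:
    #   nameserver 67.207.67.2
    #   nameserver 67.207.67.3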
Jul 6 23:55:12.196652 containerd[1469]: time="2025-07-06T23:55:12.196523821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:12.197866 containerd[1469]: time="2025-07-06T23:55:12.197739767Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:55:12.199347 containerd[1469]: time="2025-07-06T23:55:12.199191011Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:12.201574 containerd[1469]: time="2025-07-06T23:55:12.201506132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:55:12.202069 containerd[1469]: time="2025-07-06T23:55:12.202031408Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:12.203704 containerd[1469]: time="2025-07-06T23:55:12.202858400Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:55:12.206963 containerd[1469]: time="2025-07-06T23:55:12.206889513Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:12.211198 containerd[1469]: time="2025-07-06T23:55:12.211120623Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 452.815496ms" Jul 6 23:55:12.213127 containerd[1469]: time="2025-07-06T23:55:12.213060806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:12.217743 containerd[1469]: time="2025-07-06T23:55:12.217642434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 461.98368ms" Jul 6 23:55:12.223572 containerd[1469]: time="2025-07-06T23:55:12.223035286Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 483.229788ms" Jul 6 23:55:12.253961 kubelet[2132]: W0706 23:55:12.253843 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.157.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.157.121:6443: connect: connection refused Jul 6 
23:55:12.255466 kubelet[2132]: E0706 23:55:12.254449 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.157.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.157.121:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:12.259244 kubelet[2132]: W0706 23:55:12.259047 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.157.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-9-29085cf50e&limit=500&resourceVersion=0": dial tcp 146.190.157.121:6443: connect: connection refused Jul 6 23:55:12.259244 kubelet[2132]: E0706 23:55:12.259184 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.157.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-9-29085cf50e&limit=500&resourceVersion=0\": dial tcp 146.190.157.121:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:12.440997 containerd[1469]: time="2025-07-06T23:55:12.439205942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:12.440997 containerd[1469]: time="2025-07-06T23:55:12.439269968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:12.442705 containerd[1469]: time="2025-07-06T23:55:12.442387472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:12.443448 containerd[1469]: time="2025-07-06T23:55:12.440423701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:12.443448 containerd[1469]: time="2025-07-06T23:55:12.440503935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:12.443448 containerd[1469]: time="2025-07-06T23:55:12.440520733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:12.444231 containerd[1469]: time="2025-07-06T23:55:12.443367976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:12.444231 containerd[1469]: time="2025-07-06T23:55:12.442622659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:12.444231 containerd[1469]: time="2025-07-06T23:55:12.442890802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:12.444231 containerd[1469]: time="2025-07-06T23:55:12.442956924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:12.444231 containerd[1469]: time="2025-07-06T23:55:12.442972063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:12.444231 containerd[1469]: time="2025-07-06T23:55:12.443064012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:12.484749 systemd[1]: Started cri-containerd-88e41663b310688c1c9af64adc31b5d04cf0c77bc14be042a14c314e26aca969.scope - libcontainer container 88e41663b310688c1c9af64adc31b5d04cf0c77bc14be042a14c314e26aca969. Jul 6 23:55:12.498431 systemd[1]: Started cri-containerd-62094d076beafa9acb8cda97fe36304b3a21db160b4b30f60933fb5129332659.scope - libcontainer container 62094d076beafa9acb8cda97fe36304b3a21db160b4b30f60933fb5129332659. Jul 6 23:55:12.508834 systemd[1]: Started cri-containerd-9fbbe91ac4404ec68c7945213c39a2c4cddaabdd8626dfc2f3603bb1dd8a5996.scope - libcontainer container 9fbbe91ac4404ec68c7945213c39a2c4cddaabdd8626dfc2f3603bb1dd8a5996. Jul 6 23:55:12.596418 containerd[1469]: time="2025-07-06T23:55:12.596067654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-9-29085cf50e,Uid:b471b59253ab86a62c6833901bd19f4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"88e41663b310688c1c9af64adc31b5d04cf0c77bc14be042a14c314e26aca969\"" Jul 6 23:55:12.603949 kubelet[2132]: E0706 23:55:12.603902 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:12.614364 containerd[1469]: time="2025-07-06T23:55:12.614157942Z" level=info msg="CreateContainer within sandbox \"88e41663b310688c1c9af64adc31b5d04cf0c77bc14be042a14c314e26aca969\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:55:12.615852 containerd[1469]: time="2025-07-06T23:55:12.615801691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-9-29085cf50e,Uid:b8dacc1a6672f0f60541677bf0dd6abf,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fbbe91ac4404ec68c7945213c39a2c4cddaabdd8626dfc2f3603bb1dd8a5996\"" Jul 6 23:55:12.617783 kubelet[2132]: E0706 23:55:12.617602 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:12.621407 containerd[1469]: time="2025-07-06T23:55:12.621265110Z" level=info msg="CreateContainer within sandbox \"9fbbe91ac4404ec68c7945213c39a2c4cddaabdd8626dfc2f3603bb1dd8a5996\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:55:12.629792 containerd[1469]: time="2025-07-06T23:55:12.629721017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-9-29085cf50e,Uid:19c9c40a17782ab5fb1965c2dae52d01,Namespace:kube-system,Attempt:0,} returns sandbox id \"62094d076beafa9acb8cda97fe36304b3a21db160b4b30f60933fb5129332659\"" Jul 6 23:55:12.631319 kubelet[2132]: E0706 23:55:12.631282 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:12.634850 containerd[1469]: time="2025-07-06T23:55:12.634803731Z" level=info msg="CreateContainer within sandbox \"62094d076beafa9acb8cda97fe36304b3a21db160b4b30f60933fb5129332659\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:55:12.637633 containerd[1469]: time="2025-07-06T23:55:12.637579618Z" 
level=info msg="CreateContainer within sandbox \"88e41663b310688c1c9af64adc31b5d04cf0c77bc14be042a14c314e26aca969\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57acfe5e7dabff8154bb8e0f5eab017f4b9e9897c17681701d22ac746c347233\"" Jul 6 23:55:12.639096 containerd[1469]: time="2025-07-06T23:55:12.639043837Z" level=info msg="StartContainer for \"57acfe5e7dabff8154bb8e0f5eab017f4b9e9897c17681701d22ac746c347233\"" Jul 6 23:55:12.654722 containerd[1469]: time="2025-07-06T23:55:12.653609458Z" level=info msg="CreateContainer within sandbox \"9fbbe91ac4404ec68c7945213c39a2c4cddaabdd8626dfc2f3603bb1dd8a5996\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1baa72d17975a7a784f8550c26e7a4938106562e2f2ca6ff216d0dcfc54a9bef\"" Jul 6 23:55:12.655294 containerd[1469]: time="2025-07-06T23:55:12.655242866Z" level=info msg="StartContainer for \"1baa72d17975a7a784f8550c26e7a4938106562e2f2ca6ff216d0dcfc54a9bef\"" Jul 6 23:55:12.664836 containerd[1469]: time="2025-07-06T23:55:12.664780496Z" level=info msg="CreateContainer within sandbox \"62094d076beafa9acb8cda97fe36304b3a21db160b4b30f60933fb5129332659\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4ad3869cf37c166b91165b907ffaa9130f634d6006abacc446281e240a1f4165\"" Jul 6 23:55:12.665551 containerd[1469]: time="2025-07-06T23:55:12.665513081Z" level=info msg="StartContainer for \"4ad3869cf37c166b91165b907ffaa9130f634d6006abacc446281e240a1f4165\"" Jul 6 23:55:12.673610 kubelet[2132]: E0706 23:55:12.673554 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.157.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-9-29085cf50e?timeout=10s\": dial tcp 146.190.157.121:6443: connect: connection refused" interval="1.6s" Jul 6 23:55:12.702945 systemd[1]: Started cri-containerd-57acfe5e7dabff8154bb8e0f5eab017f4b9e9897c17681701d22ac746c347233.scope - libcontainer container 57acfe5e7dabff8154bb8e0f5eab017f4b9e9897c17681701d22ac746c347233. Jul 6 23:55:12.715937 systemd[1]: Started cri-containerd-1baa72d17975a7a784f8550c26e7a4938106562e2f2ca6ff216d0dcfc54a9bef.scope - libcontainer container 1baa72d17975a7a784f8550c26e7a4938106562e2f2ca6ff216d0dcfc54a9bef. Jul 6 23:55:12.726397 systemd[1]: Started cri-containerd-4ad3869cf37c166b91165b907ffaa9130f634d6006abacc446281e240a1f4165.scope - libcontainer container 4ad3869cf37c166b91165b907ffaa9130f634d6006abacc446281e240a1f4165. 
Jul 6 23:55:12.807446 kubelet[2132]: W0706 23:55:12.807300 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.157.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.157.121:6443: connect: connection refused Jul 6 23:55:12.807446 kubelet[2132]: E0706 23:55:12.807437 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.157.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.157.121:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:12.812665 containerd[1469]: time="2025-07-06T23:55:12.812545040Z" level=info msg="StartContainer for \"57acfe5e7dabff8154bb8e0f5eab017f4b9e9897c17681701d22ac746c347233\" returns successfully" Jul 6 23:55:12.820986 containerd[1469]: time="2025-07-06T23:55:12.820482017Z" level=info msg="StartContainer for \"1baa72d17975a7a784f8550c26e7a4938106562e2f2ca6ff216d0dcfc54a9bef\" returns successfully" Jul 6 23:55:12.864205 containerd[1469]: time="2025-07-06T23:55:12.864116653Z" level=info msg="StartContainer for \"4ad3869cf37c166b91165b907ffaa9130f634d6006abacc446281e240a1f4165\" returns successfully" Jul 6 23:55:12.867927 kubelet[2132]: W0706 23:55:12.867804 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.157.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.157.121:6443: connect: connection refused Jul 6 23:55:12.868136 kubelet[2132]: E0706 23:55:12.867940 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.157.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.157.121:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:12.879291 kubelet[2132]: I0706 23:55:12.879250 2132 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:12.879718 kubelet[2132]: E0706 23:55:12.879665 2132 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://146.190.157.121:6443/api/v1/nodes\": dial tcp 146.190.157.121:6443: connect: connection refused" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:13.329583 kubelet[2132]: E0706 23:55:13.329222 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:13.332893 kubelet[2132]: E0706 23:55:13.332475 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:13.335448 kubelet[2132]: E0706 23:55:13.335375 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:14.339260 kubelet[2132]: E0706 23:55:14.339184 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:14.481864 kubelet[2132]: I0706 23:55:14.481823 2132 kubelet_node_status.go:72] 
"Attempting to register node" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:15.069880 kubelet[2132]: E0706 23:55:15.069830 2132 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-9-29085cf50e\" not found" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:15.151030 kubelet[2132]: E0706 23:55:15.150915 2132 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.4-9-29085cf50e.184fcec1a93b8ab9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-9-29085cf50e,UID:ci-4081.3.4-9-29085cf50e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-9-29085cf50e,},FirstTimestamp:2025-07-06 23:55:11.233673913 +0000 UTC m=+0.384335191,LastTimestamp:2025-07-06 23:55:11.233673913 +0000 UTC m=+0.384335191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-9-29085cf50e,}" Jul 6 23:55:15.202536 kubelet[2132]: I0706 23:55:15.202450 2132 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:15.208481 kubelet[2132]: E0706 23:55:15.208149 2132 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.4-9-29085cf50e.184fcec1ab85ef05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-9-29085cf50e,UID:ci-4081.3.4-9-29085cf50e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-9-29085cf50e,},FirstTimestamp:2025-07-06 23:55:11.272103685 +0000 UTC m=+0.422764961,LastTimestamp:2025-07-06 23:55:11.272103685 +0000 UTC m=+0.422764961,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-9-29085cf50e,}" Jul 6 23:55:15.234566 kubelet[2132]: I0706 23:55:15.234499 2132 apiserver.go:52] "Watching apiserver" Jul 6 23:55:15.266892 kubelet[2132]: I0706 23:55:15.266850 2132 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:55:16.021580 kubelet[2132]: W0706 23:55:16.021433 2132 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:55:16.023443 kubelet[2132]: E0706 23:55:16.022940 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:16.345781 kubelet[2132]: E0706 23:55:16.345723 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:17.319485 systemd[1]: Reloading requested from client PID 2407 ('systemctl') (unit session-7.scope)... Jul 6 23:55:17.319504 systemd[1]: Reloading... Jul 6 23:55:17.428778 zram_generator::config[2442]: No configuration found. 
Jul 6 23:55:17.626730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:17.779307 systemd[1]: Reloading finished in 459 ms. Jul 6 23:55:17.834863 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:17.851655 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:55:17.852666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:17.862051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:18.055009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:18.058386 (kubelet)[2497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:55:18.142720 kubelet[2497]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:55:18.142720 kubelet[2497]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:55:18.142720 kubelet[2497]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:55:18.142720 kubelet[2497]: I0706 23:55:18.141513 2497 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:55:18.151091 kubelet[2497]: I0706 23:55:18.151039 2497 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:55:18.151303 kubelet[2497]: I0706 23:55:18.151287 2497 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:55:18.151890 kubelet[2497]: I0706 23:55:18.151865 2497 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:55:18.158380 kubelet[2497]: I0706 23:55:18.158330 2497 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:55:18.163196 kubelet[2497]: I0706 23:55:18.163144 2497 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:55:18.168788 kubelet[2497]: E0706 23:55:18.168744 2497 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:55:18.169097 kubelet[2497]: I0706 23:55:18.169076 2497 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:55:18.173745 kubelet[2497]: I0706 23:55:18.173651 2497 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:55:18.174252 kubelet[2497]: I0706 23:55:18.174098 2497 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:55:18.174802 kubelet[2497]: I0706 23:55:18.174465 2497 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:55:18.177501 kubelet[2497]: I0706 23:55:18.174515 2497 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-9-29085cf50e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:55:18.178342 kubelet[2497]: I0706 23:55:18.177768 2497 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:55:18.178342 kubelet[2497]: I0706 23:55:18.177792 2497 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:55:18.178342 kubelet[2497]: I0706 23:55:18.177841 2497 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:18.178342 kubelet[2497]: I0706 23:55:18.178025 2497 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:55:18.178342 kubelet[2497]: I0706 23:55:18.178045 2497 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:55:18.178342 kubelet[2497]: I0706 23:55:18.178124 2497 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:55:18.178342 kubelet[2497]: I0706 23:55:18.178142 2497 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:55:18.187800 kubelet[2497]: I0706 23:55:18.186935 2497 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:55:18.189824 kubelet[2497]: I0706 23:55:18.188825 2497 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:55:18.191755 kubelet[2497]: I0706 23:55:18.191040 2497 server.go:1274] "Started kubelet" Jul 6 23:55:18.193672 kubelet[2497]: I0706 23:55:18.193627 2497 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:55:18.194252 kubelet[2497]: I0706 
23:55:18.194230 2497 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:55:18.198780 kubelet[2497]: I0706 23:55:18.198745 2497 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:55:18.205942 kubelet[2497]: I0706 23:55:18.205900 2497 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:55:18.208454 kubelet[2497]: I0706 23:55:18.208423 2497 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:55:18.209137 kubelet[2497]: I0706 23:55:18.206171 2497 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:55:18.210702 kubelet[2497]: I0706 23:55:18.209959 2497 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:55:18.210702 kubelet[2497]: I0706 23:55:18.210448 2497 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:55:18.210702 kubelet[2497]: I0706 23:55:18.210636 2497 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:55:18.219542 kubelet[2497]: I0706 23:55:18.217813 2497 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:55:18.223522 kubelet[2497]: I0706 23:55:18.223486 2497 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:55:18.223765 kubelet[2497]: I0706 23:55:18.223748 2497 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:55:18.225274 kubelet[2497]: I0706 23:55:18.225233 2497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:55:18.227442 kubelet[2497]: I0706 23:55:18.227405 2497 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:55:18.227442 kubelet[2497]: I0706 23:55:18.227434 2497 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:55:18.227602 kubelet[2497]: I0706 23:55:18.227462 2497 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:55:18.227602 kubelet[2497]: E0706 23:55:18.227518 2497 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:55:18.278502 kubelet[2497]: I0706 23:55:18.278461 2497 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:55:18.278502 kubelet[2497]: I0706 23:55:18.278489 2497 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:55:18.278502 kubelet[2497]: I0706 23:55:18.278517 2497 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:18.278784 kubelet[2497]: I0706 23:55:18.278695 2497 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:55:18.278784 kubelet[2497]: I0706 23:55:18.278705 2497 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:55:18.278784 kubelet[2497]: I0706 23:55:18.278724 2497 policy_none.go:49] "None policy: Start" Jul 6 23:55:18.279537 kubelet[2497]: I0706 23:55:18.279516 2497 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:55:18.279537 kubelet[2497]: I0706 23:55:18.279540 2497 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:55:18.279755 kubelet[2497]: I0706 23:55:18.279701 2497 state_mem.go:75] "Updated machine memory state" Jul 6 23:55:18.287711 kubelet[2497]: I0706 23:55:18.287226 2497 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:55:18.287711 kubelet[2497]: I0706 23:55:18.287527 2497 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:55:18.287711 kubelet[2497]: I0706 23:55:18.287542 2497 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:55:18.289866 kubelet[2497]: I0706 23:55:18.287972 2497 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:55:18.335635 kubelet[2497]: W0706 23:55:18.335346 2497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:55:18.338527 kubelet[2497]: W0706 23:55:18.338001 2497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:55:18.339220 kubelet[2497]: W0706 23:55:18.339198 2497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:55:18.339493 kubelet[2497]: E0706 23:55:18.339385 2497 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.4-9-29085cf50e\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.398112 kubelet[2497]: I0706 23:55:18.398069 2497 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.408391 kubelet[2497]: I0706 23:55:18.408291 2497 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.410144 kubelet[2497]: I0706 23:55:18.409647 2497 kubelet_node_status.go:75] "Successfully registered node" 
node="ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.413552 kubelet[2497]: I0706 23:55:18.413518 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19c9c40a17782ab5fb1965c2dae52d01-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-9-29085cf50e\" (UID: \"19c9c40a17782ab5fb1965c2dae52d01\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.413923 kubelet[2497]: I0706 23:55:18.413747 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19c9c40a17782ab5fb1965c2dae52d01-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-9-29085cf50e\" (UID: \"19c9c40a17782ab5fb1965c2dae52d01\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.413923 kubelet[2497]: I0706 23:55:18.413774 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19c9c40a17782ab5fb1965c2dae52d01-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-9-29085cf50e\" (UID: \"19c9c40a17782ab5fb1965c2dae52d01\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.413923 kubelet[2497]: I0706 23:55:18.413795 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19c9c40a17782ab5fb1965c2dae52d01-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-9-29085cf50e\" (UID: \"19c9c40a17782ab5fb1965c2dae52d01\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.413923 kubelet[2497]: I0706 23:55:18.413815 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8dacc1a6672f0f60541677bf0dd6abf-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-9-29085cf50e\" (UID: \"b8dacc1a6672f0f60541677bf0dd6abf\") " pod="kube-system/kube-scheduler-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.414340 kubelet[2497]: I0706 23:55:18.414121 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b471b59253ab86a62c6833901bd19f4f-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-9-29085cf50e\" (UID: \"b471b59253ab86a62c6833901bd19f4f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.414340 kubelet[2497]: I0706 23:55:18.414148 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b471b59253ab86a62c6833901bd19f4f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-9-29085cf50e\" (UID: \"b471b59253ab86a62c6833901bd19f4f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.414340 kubelet[2497]: I0706 23:55:18.414167 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b471b59253ab86a62c6833901bd19f4f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-9-29085cf50e\" (UID: \"b471b59253ab86a62c6833901bd19f4f\") " pod="kube-system/kube-apiserver-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.414340 kubelet[2497]: I0706 23:55:18.414197 2497 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19c9c40a17782ab5fb1965c2dae52d01-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-9-29085cf50e\" (UID: \"19c9c40a17782ab5fb1965c2dae52d01\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:18.636974 kubelet[2497]: E0706 23:55:18.635739 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:18.640221 kubelet[2497]: E0706 23:55:18.639912 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:18.640221 kubelet[2497]: E0706 23:55:18.639980 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:19.186870 kubelet[2497]: I0706 23:55:19.186333 2497 apiserver.go:52] "Watching apiserver" Jul 6 23:55:19.211654 kubelet[2497]: I0706 23:55:19.211601 2497 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:55:19.257808 kubelet[2497]: E0706 23:55:19.256714 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:19.259966 kubelet[2497]: E0706 23:55:19.259488 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:19.273866 kubelet[2497]: W0706 23:55:19.273746 2497 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:55:19.274134 kubelet[2497]: E0706 23:55:19.273971 2497 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.4-9-29085cf50e\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-9-29085cf50e" Jul 6 23:55:19.274868 kubelet[2497]: E0706 23:55:19.274746 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:19.305067 kubelet[2497]: I0706 23:55:19.304794 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-9-29085cf50e" podStartSLOduration=3.304772495 podStartE2EDuration="3.304772495s" podCreationTimestamp="2025-07-06 23:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:55:19.294430048 +0000 UTC m=+1.229776447" watchObservedRunningTime="2025-07-06 23:55:19.304772495 +0000 UTC m=+1.240118972" Jul 6 23:55:19.316315 kubelet[2497]: I0706 23:55:19.316244 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-9-29085cf50e" podStartSLOduration=1.316221181 podStartE2EDuration="1.316221181s" podCreationTimestamp="2025-07-06 23:55:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-07-06 23:55:19.305230737 +0000 UTC m=+1.240577138" watchObservedRunningTime="2025-07-06 23:55:19.316221181 +0000 UTC m=+1.251567572" Jul 6 23:55:19.316547 kubelet[2497]: I0706 23:55:19.316361 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-9-29085cf50e" podStartSLOduration=1.316353394 podStartE2EDuration="1.316353394s" podCreationTimestamp="2025-07-06 23:55:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:55:19.315751942 +0000 UTC m=+1.251098338" watchObservedRunningTime="2025-07-06 23:55:19.316353394 +0000 UTC m=+1.251699793" Jul 6 23:55:20.259585 kubelet[2497]: E0706 23:55:20.259547 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:21.261615 kubelet[2497]: E0706 23:55:21.261583 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:22.037835 kubelet[2497]: E0706 23:55:22.037233 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:22.263994 kubelet[2497]: E0706 23:55:22.263959 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:23.777135 kubelet[2497]: I0706 23:55:23.777082 2497 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:55:23.777860 containerd[1469]: time="2025-07-06T23:55:23.777738836Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:55:23.778320 kubelet[2497]: I0706 23:55:23.777993 2497 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:55:23.869083 kubelet[2497]: E0706 23:55:23.868589 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:24.269414 kubelet[2497]: E0706 23:55:24.269338 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:24.350248 systemd[1]: Created slice kubepods-besteffort-pod235a07e2_9c57_4132_8bfb_cf2648f6b5f7.slice - libcontainer container kubepods-besteffort-pod235a07e2_9c57_4132_8bfb_cf2648f6b5f7.slice. 
Jul 6 23:55:24.356277 kubelet[2497]: I0706 23:55:24.355758 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/235a07e2-9c57-4132-8bfb-cf2648f6b5f7-kube-proxy\") pod \"kube-proxy-4x9m4\" (UID: \"235a07e2-9c57-4132-8bfb-cf2648f6b5f7\") " pod="kube-system/kube-proxy-4x9m4" Jul 6 23:55:24.356277 kubelet[2497]: I0706 23:55:24.355808 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw5v9\" (UniqueName: \"kubernetes.io/projected/235a07e2-9c57-4132-8bfb-cf2648f6b5f7-kube-api-access-hw5v9\") pod \"kube-proxy-4x9m4\" (UID: \"235a07e2-9c57-4132-8bfb-cf2648f6b5f7\") " pod="kube-system/kube-proxy-4x9m4" Jul 6 23:55:24.356277 kubelet[2497]: I0706 23:55:24.355828 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/235a07e2-9c57-4132-8bfb-cf2648f6b5f7-xtables-lock\") pod \"kube-proxy-4x9m4\" (UID: \"235a07e2-9c57-4132-8bfb-cf2648f6b5f7\") " pod="kube-system/kube-proxy-4x9m4" Jul 6 23:55:24.356277 kubelet[2497]: I0706 23:55:24.355844 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/235a07e2-9c57-4132-8bfb-cf2648f6b5f7-lib-modules\") pod \"kube-proxy-4x9m4\" (UID: \"235a07e2-9c57-4132-8bfb-cf2648f6b5f7\") " pod="kube-system/kube-proxy-4x9m4" Jul 6 23:55:24.465758 kubelet[2497]: E0706 23:55:24.465519 2497 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 6 23:55:24.465758 kubelet[2497]: E0706 23:55:24.465557 2497 projected.go:194] Error preparing data for projected volume kube-api-access-hw5v9 for pod kube-system/kube-proxy-4x9m4: configmap "kube-root-ca.crt" not found Jul 6 23:55:24.465758 kubelet[2497]: E0706 23:55:24.465637 2497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/235a07e2-9c57-4132-8bfb-cf2648f6b5f7-kube-api-access-hw5v9 podName:235a07e2-9c57-4132-8bfb-cf2648f6b5f7 nodeName:}" failed. No retries permitted until 2025-07-06 23:55:24.965606968 +0000 UTC m=+6.900953346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hw5v9" (UniqueName: "kubernetes.io/projected/235a07e2-9c57-4132-8bfb-cf2648f6b5f7-kube-api-access-hw5v9") pod "kube-proxy-4x9m4" (UID: "235a07e2-9c57-4132-8bfb-cf2648f6b5f7") : configmap "kube-root-ca.crt" not found Jul 6 23:55:24.876099 systemd[1]: Created slice kubepods-besteffort-pod773f220a_4cc6_488a_adfd_4c0b7f9a2d35.slice - libcontainer container kubepods-besteffort-pod773f220a_4cc6_488a_adfd_4c0b7f9a2d35.slice. 
Jul 6 23:55:24.960352 kubelet[2497]: I0706 23:55:24.960273 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxpb7\" (UniqueName: \"kubernetes.io/projected/773f220a-4cc6-488a-adfd-4c0b7f9a2d35-kube-api-access-nxpb7\") pod \"tigera-operator-5bf8dfcb4-59dgn\" (UID: \"773f220a-4cc6-488a-adfd-4c0b7f9a2d35\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-59dgn" Jul 6 23:55:24.960960 kubelet[2497]: I0706 23:55:24.960396 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/773f220a-4cc6-488a-adfd-4c0b7f9a2d35-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-59dgn\" (UID: \"773f220a-4cc6-488a-adfd-4c0b7f9a2d35\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-59dgn" Jul 6 23:55:25.650879 systemd-resolved[1324]: Clock change detected. Flushing caches. Jul 6 23:55:25.651301 systemd-timesyncd[1340]: Contacted time server 24.144.88.190:123 (2.flatcar.pool.ntp.org). Jul 6 23:55:25.651371 systemd-timesyncd[1340]: Initial clock synchronization to Sun 2025-07-06 23:55:25.650783 UTC. Jul 6 23:55:25.725190 containerd[1469]: time="2025-07-06T23:55:25.725003613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-59dgn,Uid:773f220a-4cc6-488a-adfd-4c0b7f9a2d35,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:55:25.767030 containerd[1469]: time="2025-07-06T23:55:25.766395270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:25.767030 containerd[1469]: time="2025-07-06T23:55:25.766469594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:25.767030 containerd[1469]: time="2025-07-06T23:55:25.766486162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:25.768265 containerd[1469]: time="2025-07-06T23:55:25.768129356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:25.806358 kubelet[2497]: E0706 23:55:25.806316 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:25.808768 systemd[1]: Started cri-containerd-a653b4263ce8ef9f3de86a6898bf61af50ff459650fcfdd1231a77b6f302f6bb.scope - libcontainer container a653b4263ce8ef9f3de86a6898bf61af50ff459650fcfdd1231a77b6f302f6bb. Jul 6 23:55:25.811158 containerd[1469]: time="2025-07-06T23:55:25.810890512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4x9m4,Uid:235a07e2-9c57-4132-8bfb-cf2648f6b5f7,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:25.847932 containerd[1469]: time="2025-07-06T23:55:25.847394800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:25.848277 containerd[1469]: time="2025-07-06T23:55:25.847685415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:25.848277 containerd[1469]: time="2025-07-06T23:55:25.847711419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:25.848277 containerd[1469]: time="2025-07-06T23:55:25.847861414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:25.872371 systemd[1]: Started cri-containerd-a965f20d931cec8b6f25a4602812b0b3e42326dfa7af836e867e29d0cc1c7e0f.scope - libcontainer container a965f20d931cec8b6f25a4602812b0b3e42326dfa7af836e867e29d0cc1c7e0f. Jul 6 23:55:25.889580 containerd[1469]: time="2025-07-06T23:55:25.889483729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-59dgn,Uid:773f220a-4cc6-488a-adfd-4c0b7f9a2d35,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a653b4263ce8ef9f3de86a6898bf61af50ff459650fcfdd1231a77b6f302f6bb\"" Jul 6 23:55:25.893829 containerd[1469]: time="2025-07-06T23:55:25.893784116Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:55:25.912970 containerd[1469]: time="2025-07-06T23:55:25.912915927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4x9m4,Uid:235a07e2-9c57-4132-8bfb-cf2648f6b5f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a965f20d931cec8b6f25a4602812b0b3e42326dfa7af836e867e29d0cc1c7e0f\"" Jul 6 23:55:25.914039 kubelet[2497]: E0706 23:55:25.914018 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:25.917602 containerd[1469]: time="2025-07-06T23:55:25.917249696Z" level=info msg="CreateContainer within sandbox \"a965f20d931cec8b6f25a4602812b0b3e42326dfa7af836e867e29d0cc1c7e0f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:55:25.930373 containerd[1469]: time="2025-07-06T23:55:25.930217411Z" level=info msg="CreateContainer within sandbox \"a965f20d931cec8b6f25a4602812b0b3e42326dfa7af836e867e29d0cc1c7e0f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c1be671fc867d3da18d888ca009aa50a2da41a015c317db37fde69ab8e130551\"" Jul 6 23:55:25.930856 containerd[1469]: time="2025-07-06T23:55:25.930829710Z" level=info msg="StartContainer for \"c1be671fc867d3da18d888ca009aa50a2da41a015c317db37fde69ab8e130551\"" Jul 6 23:55:25.964293 systemd[1]: Started cri-containerd-c1be671fc867d3da18d888ca009aa50a2da41a015c317db37fde69ab8e130551.scope - libcontainer container c1be671fc867d3da18d888ca009aa50a2da41a015c317db37fde69ab8e130551. Jul 6 23:55:25.996651 containerd[1469]: time="2025-07-06T23:55:25.995855950Z" level=info msg="StartContainer for \"c1be671fc867d3da18d888ca009aa50a2da41a015c317db37fde69ab8e130551\" returns successfully" Jul 6 23:55:26.617772 systemd[1]: run-containerd-runc-k8s.io-a653b4263ce8ef9f3de86a6898bf61af50ff459650fcfdd1231a77b6f302f6bb-runc.iq5foW.mount: Deactivated successfully. 
Jul 6 23:55:26.822691 kubelet[2497]: E0706 23:55:26.822627 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:26.836113 kubelet[2497]: I0706 23:55:26.834480 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4x9m4" podStartSLOduration=2.834460002 podStartE2EDuration="2.834460002s" podCreationTimestamp="2025-07-06 23:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:55:26.833616932 +0000 UTC m=+8.223577257" watchObservedRunningTime="2025-07-06 23:55:26.834460002 +0000 UTC m=+8.224420320" Jul 6 23:55:27.401867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554596046.mount: Deactivated successfully. Jul 6 23:55:28.258735 containerd[1469]: time="2025-07-06T23:55:28.258660880Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:28.260410 containerd[1469]: time="2025-07-06T23:55:28.260039920Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 6 23:55:28.261197 containerd[1469]: time="2025-07-06T23:55:28.261156559Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:28.265656 containerd[1469]: time="2025-07-06T23:55:28.265591598Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:28.266351 containerd[1469]: time="2025-07-06T23:55:28.266303051Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.372467701s" Jul 6 23:55:28.266477 containerd[1469]: time="2025-07-06T23:55:28.266352129Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 6 23:55:28.271030 containerd[1469]: time="2025-07-06T23:55:28.270963534Z" level=info msg="CreateContainer within sandbox \"a653b4263ce8ef9f3de86a6898bf61af50ff459650fcfdd1231a77b6f302f6bb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:55:28.296996 containerd[1469]: time="2025-07-06T23:55:28.296915607Z" level=info msg="CreateContainer within sandbox \"a653b4263ce8ef9f3de86a6898bf61af50ff459650fcfdd1231a77b6f302f6bb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"74ebd1ec388d0558b4423ef0147c52ad2d0448d02c99a5fdc9a32865310f606b\"" Jul 6 23:55:28.300560 containerd[1469]: time="2025-07-06T23:55:28.300179270Z" level=info msg="StartContainer for \"74ebd1ec388d0558b4423ef0147c52ad2d0448d02c99a5fdc9a32865310f606b\"" Jul 6 23:55:28.348320 systemd[1]: Started cri-containerd-74ebd1ec388d0558b4423ef0147c52ad2d0448d02c99a5fdc9a32865310f606b.scope - libcontainer container 74ebd1ec388d0558b4423ef0147c52ad2d0448d02c99a5fdc9a32865310f606b. 
Jul 6 23:55:28.385761 containerd[1469]: time="2025-07-06T23:55:28.385574343Z" level=info msg="StartContainer for \"74ebd1ec388d0558b4423ef0147c52ad2d0448d02c99a5fdc9a32865310f606b\" returns successfully" Jul 6 23:55:30.749155 kubelet[2497]: E0706 23:55:30.749118 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:30.795908 kubelet[2497]: I0706 23:55:30.795841 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-59dgn" podStartSLOduration=4.419174568 podStartE2EDuration="6.795822347s" podCreationTimestamp="2025-07-06 23:55:24 +0000 UTC" firstStartedPulling="2025-07-06 23:55:25.891471821 +0000 UTC m=+7.281432125" lastFinishedPulling="2025-07-06 23:55:28.268119601 +0000 UTC m=+9.658079904" observedRunningTime="2025-07-06 23:55:28.840594727 +0000 UTC m=+10.230555051" watchObservedRunningTime="2025-07-06 23:55:30.795822347 +0000 UTC m=+12.185782671" Jul 6 23:55:34.915075 update_engine[1444]: I20250706 23:55:34.913478 1444 update_attempter.cc:509] Updating boot flags... Jul 6 23:55:35.013698 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2867) Jul 6 23:55:35.109045 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2867) Jul 6 23:55:35.285565 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2867) Jul 6 23:55:35.611199 sudo[1649]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:35.618642 sshd[1645]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:35.623721 systemd[1]: sshd@6-146.190.157.121:22-139.178.89.65:34440.service: Deactivated successfully. Jul 6 23:55:35.628282 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:55:35.628566 systemd[1]: session-7.scope: Consumed 5.635s CPU time, 144.5M memory peak, 0B memory swap peak. Jul 6 23:55:35.634539 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:55:35.638364 systemd-logind[1443]: Removed session 7. Jul 6 23:55:40.779726 systemd[1]: Created slice kubepods-besteffort-podaf5d257b_a74b_4bee_8744_6c3bd017f7fb.slice - libcontainer container kubepods-besteffort-podaf5d257b_a74b_4bee_8744_6c3bd017f7fb.slice. 
Jul 6 23:55:40.805385 kubelet[2497]: I0706 23:55:40.805271 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgxh\" (UniqueName: \"kubernetes.io/projected/af5d257b-a74b-4bee-8744-6c3bd017f7fb-kube-api-access-2jgxh\") pod \"calico-typha-664b8768c6-4nmzm\" (UID: \"af5d257b-a74b-4bee-8744-6c3bd017f7fb\") " pod="calico-system/calico-typha-664b8768c6-4nmzm" Jul 6 23:55:40.805385 kubelet[2497]: I0706 23:55:40.805375 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/af5d257b-a74b-4bee-8744-6c3bd017f7fb-typha-certs\") pod \"calico-typha-664b8768c6-4nmzm\" (UID: \"af5d257b-a74b-4bee-8744-6c3bd017f7fb\") " pod="calico-system/calico-typha-664b8768c6-4nmzm" Jul 6 23:55:40.810846 kubelet[2497]: I0706 23:55:40.805411 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af5d257b-a74b-4bee-8744-6c3bd017f7fb-tigera-ca-bundle\") pod \"calico-typha-664b8768c6-4nmzm\" (UID: \"af5d257b-a74b-4bee-8744-6c3bd017f7fb\") " pod="calico-system/calico-typha-664b8768c6-4nmzm" Jul 6 23:55:41.102121 kubelet[2497]: E0706 23:55:41.101946 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:41.107582 containerd[1469]: time="2025-07-06T23:55:41.107429740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-664b8768c6-4nmzm,Uid:af5d257b-a74b-4bee-8744-6c3bd017f7fb,Namespace:calico-system,Attempt:0,}" Jul 6 23:55:41.161925 containerd[1469]: time="2025-07-06T23:55:41.158250667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:41.161925 containerd[1469]: time="2025-07-06T23:55:41.160394670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:41.161925 containerd[1469]: time="2025-07-06T23:55:41.160419630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:41.161925 containerd[1469]: time="2025-07-06T23:55:41.160633407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:41.206352 systemd[1]: Started cri-containerd-d1f7acc54e4fb68a3cde589138feb74033ac59185a98aac8f041351a1ffcb603.scope - libcontainer container d1f7acc54e4fb68a3cde589138feb74033ac59185a98aac8f041351a1ffcb603. Jul 6 23:55:41.236729 systemd[1]: Created slice kubepods-besteffort-pod3bbff3d8_f142_4e66_b0cd_ec59f714f010.slice - libcontainer container kubepods-besteffort-pod3bbff3d8_f142_4e66_b0cd_ec59f714f010.slice. 
Jul 6 23:55:41.310120 kubelet[2497]: I0706 23:55:41.309257 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3bbff3d8-f142-4e66-b0cd-ec59f714f010-cni-bin-dir\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.310120 kubelet[2497]: I0706 23:55:41.309324 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3bbff3d8-f142-4e66-b0cd-ec59f714f010-cni-log-dir\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.310120 kubelet[2497]: I0706 23:55:41.309346 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3bbff3d8-f142-4e66-b0cd-ec59f714f010-policysync\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.310120 kubelet[2497]: I0706 23:55:41.309370 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7bmc\" (UniqueName: \"kubernetes.io/projected/3bbff3d8-f142-4e66-b0cd-ec59f714f010-kube-api-access-p7bmc\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.310120 kubelet[2497]: I0706 23:55:41.309390 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bbff3d8-f142-4e66-b0cd-ec59f714f010-xtables-lock\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.310538 kubelet[2497]: I0706 23:55:41.309411 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3bbff3d8-f142-4e66-b0cd-ec59f714f010-node-certs\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.310538 kubelet[2497]: I0706 23:55:41.309433 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bbff3d8-f142-4e66-b0cd-ec59f714f010-tigera-ca-bundle\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.310538 kubelet[2497]: I0706 23:55:41.309455 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3bbff3d8-f142-4e66-b0cd-ec59f714f010-flexvol-driver-host\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.310538 kubelet[2497]: I0706 23:55:41.309476 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bbff3d8-f142-4e66-b0cd-ec59f714f010-lib-modules\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.310538 kubelet[2497]: I0706 23:55:41.309501 2497 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3bbff3d8-f142-4e66-b0cd-ec59f714f010-var-lib-calico\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.310692 kubelet[2497]: I0706 23:55:41.309531 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3bbff3d8-f142-4e66-b0cd-ec59f714f010-cni-net-dir\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.310692 kubelet[2497]: I0706 23:55:41.309561 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3bbff3d8-f142-4e66-b0cd-ec59f714f010-var-run-calico\") pod \"calico-node-r5ql2\" (UID: \"3bbff3d8-f142-4e66-b0cd-ec59f714f010\") " pod="calico-system/calico-node-r5ql2" Jul 6 23:55:41.362113 containerd[1469]: time="2025-07-06T23:55:41.360556448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-664b8768c6-4nmzm,Uid:af5d257b-a74b-4bee-8744-6c3bd017f7fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1f7acc54e4fb68a3cde589138feb74033ac59185a98aac8f041351a1ffcb603\"" Jul 6 23:55:41.366225 kubelet[2497]: E0706 23:55:41.366185 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:41.376695 containerd[1469]: time="2025-07-06T23:55:41.376479014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:55:41.419555 kubelet[2497]: E0706 23:55:41.419212 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.419555 kubelet[2497]: W0706 23:55:41.419255 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.419555 kubelet[2497]: E0706 23:55:41.419310 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.420297 kubelet[2497]: E0706 23:55:41.419981 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.420297 kubelet[2497]: W0706 23:55:41.420003 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.420297 kubelet[2497]: E0706 23:55:41.420024 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:55:41.421030 kubelet[2497]: E0706 23:55:41.420551 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.421030 kubelet[2497]: W0706 23:55:41.420562 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.421030 kubelet[2497]: E0706 23:55:41.420576 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.424463 kubelet[2497]: E0706 23:55:41.424399 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.424463 kubelet[2497]: W0706 23:55:41.424426 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.425350 kubelet[2497]: E0706 23:55:41.425088 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.425350 kubelet[2497]: W0706 23:55:41.425104 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.425350 kubelet[2497]: E0706 23:55:41.425124 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.425571 kubelet[2497]: E0706 23:55:41.425530 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.425774 kubelet[2497]: W0706 23:55:41.425636 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.425774 kubelet[2497]: E0706 23:55:41.425656 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.425953 kubelet[2497]: E0706 23:55:41.425934 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.426450 kubelet[2497]: E0706 23:55:41.426420 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.427077 kubelet[2497]: W0706 23:55:41.427037 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.427344 kubelet[2497]: E0706 23:55:41.427148 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:55:41.430733 kubelet[2497]: E0706 23:55:41.430571 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.431472 kubelet[2497]: W0706 23:55:41.431440 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.431631 kubelet[2497]: E0706 23:55:41.431595 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.434178 kubelet[2497]: E0706 23:55:41.433923 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.434178 kubelet[2497]: W0706 23:55:41.433945 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.434178 kubelet[2497]: E0706 23:55:41.433969 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.438731 kubelet[2497]: E0706 23:55:41.438320 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.438731 kubelet[2497]: W0706 23:55:41.438346 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.438731 kubelet[2497]: E0706 23:55:41.438397 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.439461 kubelet[2497]: E0706 23:55:41.438923 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.439461 kubelet[2497]: W0706 23:55:41.439094 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.439957 kubelet[2497]: E0706 23:55:41.439624 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.439957 kubelet[2497]: W0706 23:55:41.439636 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.439957 kubelet[2497]: E0706 23:55:41.439735 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.439957 kubelet[2497]: E0706 23:55:41.439781 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:55:41.441313 kubelet[2497]: E0706 23:55:41.440801 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.441313 kubelet[2497]: W0706 23:55:41.440824 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.441313 kubelet[2497]: E0706 23:55:41.440965 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.442508 kubelet[2497]: E0706 23:55:41.442486 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.442802 kubelet[2497]: W0706 23:55:41.442629 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.442802 kubelet[2497]: E0706 23:55:41.442732 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.443351 kubelet[2497]: E0706 23:55:41.443139 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.443351 kubelet[2497]: W0706 23:55:41.443152 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.443498 kubelet[2497]: E0706 23:55:41.443455 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.443791 kubelet[2497]: E0706 23:55:41.443733 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.443791 kubelet[2497]: W0706 23:55:41.443746 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.444045 kubelet[2497]: E0706 23:55:41.444033 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.444226 kubelet[2497]: E0706 23:55:41.444163 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.444226 kubelet[2497]: W0706 23:55:41.444172 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.444404 kubelet[2497]: E0706 23:55:41.444309 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:55:41.444596 kubelet[2497]: E0706 23:55:41.444585 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.444764 kubelet[2497]: W0706 23:55:41.444668 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.444764 kubelet[2497]: E0706 23:55:41.444695 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.445483 kubelet[2497]: E0706 23:55:41.445171 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.445483 kubelet[2497]: W0706 23:55:41.445183 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.445483 kubelet[2497]: E0706 23:55:41.445197 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.445673 kubelet[2497]: E0706 23:55:41.445650 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.445824 kubelet[2497]: W0706 23:55:41.445812 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.445971 kubelet[2497]: E0706 23:55:41.445940 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.446554 kubelet[2497]: E0706 23:55:41.446288 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.446554 kubelet[2497]: W0706 23:55:41.446302 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.446554 kubelet[2497]: E0706 23:55:41.446317 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.447691 kubelet[2497]: E0706 23:55:41.447675 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.447809 kubelet[2497]: W0706 23:55:41.447763 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.447809 kubelet[2497]: E0706 23:55:41.447782 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:55:41.474533 kubelet[2497]: E0706 23:55:41.474429 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9x6m" podUID="7abd3305-5de3-4e82-84ee-e697b6b22043" Jul 6 23:55:41.506155 kubelet[2497]: E0706 23:55:41.505903 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.506155 kubelet[2497]: W0706 23:55:41.505939 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.506155 kubelet[2497]: E0706 23:55:41.505971 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.506871 kubelet[2497]: E0706 23:55:41.506710 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.506871 kubelet[2497]: W0706 23:55:41.506732 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.506871 kubelet[2497]: E0706 23:55:41.506755 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.507153 kubelet[2497]: E0706 23:55:41.507135 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.507367 kubelet[2497]: W0706 23:55:41.507224 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.507367 kubelet[2497]: E0706 23:55:41.507246 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.507637 kubelet[2497]: E0706 23:55:41.507618 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.507721 kubelet[2497]: W0706 23:55:41.507707 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.507926 kubelet[2497]: E0706 23:55:41.507790 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:55:41.508103 kubelet[2497]: E0706 23:55:41.508089 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.508282 kubelet[2497]: W0706 23:55:41.508164 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.508282 kubelet[2497]: E0706 23:55:41.508185 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.508447 kubelet[2497]: E0706 23:55:41.508413 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.508535 kubelet[2497]: W0706 23:55:41.508522 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.508601 kubelet[2497]: E0706 23:55:41.508583 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.509216 kubelet[2497]: E0706 23:55:41.508948 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.509216 kubelet[2497]: W0706 23:55:41.508960 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.509216 kubelet[2497]: E0706 23:55:41.508971 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.509542 kubelet[2497]: E0706 23:55:41.509477 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.509765 kubelet[2497]: W0706 23:55:41.509630 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.509765 kubelet[2497]: E0706 23:55:41.509649 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.510581 kubelet[2497]: E0706 23:55:41.510565 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.510687 kubelet[2497]: W0706 23:55:41.510649 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.510808 kubelet[2497]: E0706 23:55:41.510770 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:55:41.511635 kubelet[2497]: E0706 23:55:41.511488 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.511635 kubelet[2497]: W0706 23:55:41.511502 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.511635 kubelet[2497]: E0706 23:55:41.511515 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.511913 kubelet[2497]: E0706 23:55:41.511895 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.512006 kubelet[2497]: W0706 23:55:41.511993 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.512156 kubelet[2497]: E0706 23:55:41.512136 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.512633 kubelet[2497]: E0706 23:55:41.512618 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.512716 kubelet[2497]: W0706 23:55:41.512706 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.512966 kubelet[2497]: E0706 23:55:41.512769 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.513226 kubelet[2497]: E0706 23:55:41.513193 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.513427 kubelet[2497]: W0706 23:55:41.513344 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.513427 kubelet[2497]: E0706 23:55:41.513363 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.513899 kubelet[2497]: E0706 23:55:41.513792 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.513899 kubelet[2497]: W0706 23:55:41.513805 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.513899 kubelet[2497]: E0706 23:55:41.513816 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:55:41.514554 kubelet[2497]: E0706 23:55:41.514315 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:41.514554 kubelet[2497]: W0706 23:55:41.514328 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:41.514554 kubelet[2497]: E0706 23:55:41.514340 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:55:41.520145 kubelet[2497]: I0706 23:55:41.520083 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7abd3305-5de3-4e82-84ee-e697b6b22043-kubelet-dir\") pod \"csi-node-driver-c9x6m\" (UID: \"7abd3305-5de3-4e82-84ee-e697b6b22043\") " pod="calico-system/csi-node-driver-c9x6m" Jul 6 23:55:41.520738 kubelet[2497]: I0706 23:55:41.520667 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7abd3305-5de3-4e82-84ee-e697b6b22043-socket-dir\") pod \"csi-node-driver-c9x6m\" (UID: \"7abd3305-5de3-4e82-84ee-e697b6b22043\") " pod="calico-system/csi-node-driver-c9x6m" Jul 6 23:55:41.521576 kubelet[2497]: I0706 23:55:41.521470 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7abd3305-5de3-4e82-84ee-e697b6b22043-varrun\") pod \"csi-node-driver-c9x6m\" (UID: \"7abd3305-5de3-4e82-84ee-e697b6b22043\") " pod="calico-system/csi-node-driver-c9x6m" Jul 6 23:55:41.522335 kubelet[2497]: I0706 23:55:41.522293 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7abd3305-5de3-4e82-84ee-e697b6b22043-registration-dir\") pod \"csi-node-driver-c9x6m\" (UID: \"7abd3305-5de3-4e82-84ee-e697b6b22043\") " pod="calico-system/csi-node-driver-c9x6m" Jul 6 23:55:41.524478 kubelet[2497]: I0706 23:55:41.524453 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bcfk\" (UniqueName: \"kubernetes.io/projected/7abd3305-5de3-4e82-84ee-e697b6b22043-kube-api-access-5bcfk\") pod \"csi-node-driver-c9x6m\" (UID: \"7abd3305-5de3-4e82-84ee-e697b6b22043\") " pod="calico-system/csi-node-driver-c9x6m" Jul 6 23:55:41.545169 containerd[1469]: time="2025-07-06T23:55:41.544963269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r5ql2,Uid:3bbff3d8-f142-4e66-b0cd-ec59f714f010,Namespace:calico-system,Attempt:0,}" Jul 6 23:55:41.592650 containerd[1469]: time="2025-07-06T23:55:41.592053812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:41.594492 containerd[1469]: time="2025-07-06T23:55:41.593691306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:41.594492 containerd[1469]: time="2025-07-06T23:55:41.593749735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:41.594492 containerd[1469]: time="2025-07-06T23:55:41.593901723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:41.629717 systemd[1]: Started cri-containerd-91e46c8a8928fc10f8c60d730c495356737e5f80c5bdefad9e078d913d46055d.scope - libcontainer container 91e46c8a8928fc10f8c60d730c495356737e5f80c5bdefad9e078d913d46055d. Jul 6 23:55:41.681795 containerd[1469]: time="2025-07-06T23:55:41.681732257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r5ql2,Uid:3bbff3d8-f142-4e66-b0cd-ec59f714f010,Namespace:calico-system,Attempt:0,} returns sandbox id \"91e46c8a8928fc10f8c60d730c495356737e5f80c5bdefad9e078d913d46055d\"" Jul 6 23:55:41.935975 systemd[1]: run-containerd-runc-k8s.io-d1f7acc54e4fb68a3cde589138feb74033ac59185a98aac8f041351a1ffcb603-runc.br2PYh.mount: Deactivated successfully. Jul 6 23:55:42.777587 kubelet[2497]: E0706 23:55:42.776437 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9x6m" podUID="7abd3305-5de3-4e82-84ee-e697b6b22043" Jul 6 23:55:42.838347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998794109.mount: Deactivated successfully.
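The recurring three-line burst above comes from kubelet's FlexVolume prober: plugins.go rescans /opt/libexec/kubernetes/kubelet-plugins/volume/exec on every volume event, driver-call.go executes nodeagent~uds/uds with the argument init, and because that binary is not installed yet (the flexvol-driver init container started at 23:55:45 below is what puts it in place), the call produces no stdout and the JSON decode fails with "unexpected end of JSON input". A FlexVolume driver is just an executable that answers each subcommand with one JSON object on stdout; below is a minimal sketch in Go of that call convention, illustrative only and not Calico's actual uds driver (type and function names are ours).

package main

// Minimal FlexVolume driver sketch. kubelet invokes the driver binary as
// "<driver> init" (and later "<driver> mount ..." etc.) and parses a single
// JSON object from stdout; an empty reply is exactly what produces the
// "unexpected end of JSON input" errors in driver-call.go above.

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// attach:false tells kubelet this driver has no attach/detach phase.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// "Not supported" lets kubelet fall back to its generic handling.
		reply(driverStatus{Status: "Not supported"})
	}
}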
Jul 6 23:55:44.149932 containerd[1469]: time="2025-07-06T23:55:44.148965271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:44.149932 containerd[1469]: time="2025-07-06T23:55:44.149844344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 6 23:55:44.150569 containerd[1469]: time="2025-07-06T23:55:44.150369939Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:44.156047 containerd[1469]: time="2025-07-06T23:55:44.155995305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:44.160094 containerd[1469]: time="2025-07-06T23:55:44.160005336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.783476489s" Jul 6 23:55:44.160503 containerd[1469]: time="2025-07-06T23:55:44.160179302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 6 23:55:44.164290 containerd[1469]: time="2025-07-06T23:55:44.163914644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:55:44.196476 containerd[1469]: time="2025-07-06T23:55:44.196415941Z" level=info msg="CreateContainer within sandbox \"d1f7acc54e4fb68a3cde589138feb74033ac59185a98aac8f041351a1ffcb603\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:55:44.214434 containerd[1469]: time="2025-07-06T23:55:44.214121038Z" level=info msg="CreateContainer within sandbox \"d1f7acc54e4fb68a3cde589138feb74033ac59185a98aac8f041351a1ffcb603\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9c65b9e647a9288e1d3bc72a06de70b5520daa41e809ee9062496905144edff7\"" Jul 6 23:55:44.223945 containerd[1469]: time="2025-07-06T23:55:44.223832447Z" level=info msg="StartContainer for \"9c65b9e647a9288e1d3bc72a06de70b5520daa41e809ee9062496905144edff7\"" Jul 6 23:55:44.290371 systemd[1]: Started cri-containerd-9c65b9e647a9288e1d3bc72a06de70b5520daa41e809ee9062496905144edff7.scope - libcontainer container 9c65b9e647a9288e1d3bc72a06de70b5520daa41e809ee9062496905144edff7. 
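For a sense of scale, the typha pull above reports 35233364 bytes read in 2.783476489s, roughly 12 MiB/s. A quick check of the arithmetic; the constants are copied from the entries above, the snippet itself is ours:

package main

import "fmt"

func main() {
	// Figures from the "stop pulling image ...typha" and "Pulled image" entries.
	const bytesRead = 35233364  // bytes transferred during the pull
	const seconds = 2.783476489 // wall-clock pull time reported by containerd
	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.1f MiB in %.2f s = %.1f MiB/s\n", mib, seconds, mib/seconds)
}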
Jul 6 23:55:44.356868 containerd[1469]: time="2025-07-06T23:55:44.356814009Z" level=info msg="StartContainer for \"9c65b9e647a9288e1d3bc72a06de70b5520daa41e809ee9062496905144edff7\" returns successfully" Jul 6 23:55:44.779541 kubelet[2497]: E0706 23:55:44.779473 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9x6m" podUID="7abd3305-5de3-4e82-84ee-e697b6b22043" Jul 6 23:55:44.884779 kubelet[2497]: E0706 23:55:44.884738 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:44.920644 kubelet[2497]: I0706 23:55:44.920581 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-664b8768c6-4nmzm" podStartSLOduration=2.134272393 podStartE2EDuration="4.920563052s" podCreationTimestamp="2025-07-06 23:55:40 +0000 UTC" firstStartedPulling="2025-07-06 23:55:41.376012994 +0000 UTC m=+22.765973297" lastFinishedPulling="2025-07-06 23:55:44.162303634 +0000 UTC m=+25.552263956" observedRunningTime="2025-07-06 23:55:44.917950901 +0000 UTC m=+26.307911230" watchObservedRunningTime="2025-07-06 23:55:44.920563052 +0000 UTC m=+26.310523376" Jul 6 23:55:44.952581 kubelet[2497]: E0706 23:55:44.952379 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:55:44.952581 kubelet[2497]: W0706 23:55:44.952410 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:55:44.952581 kubelet[2497]: E0706 23:55:44.952437 2497 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:55:45.529389 containerd[1469]: time="2025-07-06T23:55:45.529328954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:45.530437 containerd[1469]: time="2025-07-06T23:55:45.530186945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 6 23:55:45.531050 containerd[1469]: time="2025-07-06T23:55:45.531008791Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:45.535007 containerd[1469]: time="2025-07-06T23:55:45.533681518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:45.535007 containerd[1469]: time="2025-07-06T23:55:45.534757127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.370768895s" Jul 6 23:55:45.535007 containerd[1469]: time="2025-07-06T23:55:45.534866856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 6 23:55:45.564834 containerd[1469]: time="2025-07-06T23:55:45.564732680Z" level=info msg="CreateContainer within sandbox \"91e46c8a8928fc10f8c60d730c495356737e5f80c5bdefad9e078d913d46055d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:55:45.579013 containerd[1469]: time="2025-07-06T23:55:45.578958503Z" level=info msg="CreateContainer within sandbox \"91e46c8a8928fc10f8c60d730c495356737e5f80c5bdefad9e078d913d46055d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d564c60908e50ca763977524b21c7e43e36f17dbdd4dea087dc1c4b818db23e8\"" Jul 6 23:55:45.579650 containerd[1469]: time="2025-07-06T23:55:45.579618843Z" level=info msg="StartContainer for \"d564c60908e50ca763977524b21c7e43e36f17dbdd4dea087dc1c4b818db23e8\"" Jul 6 23:55:45.635323 systemd[1]: Started cri-containerd-d564c60908e50ca763977524b21c7e43e36f17dbdd4dea087dc1c4b818db23e8.scope - libcontainer container d564c60908e50ca763977524b21c7e43e36f17dbdd4dea087dc1c4b818db23e8. Jul 6 23:55:45.688397 containerd[1469]: time="2025-07-06T23:55:45.687921186Z" level=info msg="StartContainer for \"d564c60908e50ca763977524b21c7e43e36f17dbdd4dea087dc1c4b818db23e8\" returns successfully" Jul 6 23:55:45.708483 systemd[1]: cri-containerd-d564c60908e50ca763977524b21c7e43e36f17dbdd4dea087dc1c4b818db23e8.scope: Deactivated successfully. 
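Most kubelet entries in this capture use the klog header layout: a severity letter, an mmdd date, a wall-clock time, the PID, and the file:line of the emitting call site, as in E0706 23:55:41.514315 2497 driver-call.go:262]. When correlating the bursts above it can help to split captured lines into those fields; here is a small Go parser where the regular expression is our own approximation, not anything exported by klog:

package main

import (
	"fmt"
	"regexp"
)

// klog header: <severity><mmdd> <hh:mm:ss.micros> <pid> <file>:<line>] <msg>
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([^ :\]]+):(\d+)\] (.*)$`)

func main() {
	line := `E0706 23:55:41.514315 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input`
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog-style line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}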
Jul 6 23:55:45.792340 containerd[1469]: time="2025-07-06T23:55:45.744925633Z" level=info msg="shim disconnected" id=d564c60908e50ca763977524b21c7e43e36f17dbdd4dea087dc1c4b818db23e8 namespace=k8s.io Jul 6 23:55:45.792340 containerd[1469]: time="2025-07-06T23:55:45.791959830Z" level=warning msg="cleaning up after shim disconnected" id=d564c60908e50ca763977524b21c7e43e36f17dbdd4dea087dc1c4b818db23e8 namespace=k8s.io Jul 6 23:55:45.792340 containerd[1469]: time="2025-07-06T23:55:45.791977708Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:55:45.895403 kubelet[2497]: I0706 23:55:45.895369 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:55:45.896016 kubelet[2497]: E0706 23:55:45.895795 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:45.899817 containerd[1469]: time="2025-07-06T23:55:45.898482539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:55:46.172182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d564c60908e50ca763977524b21c7e43e36f17dbdd4dea087dc1c4b818db23e8-rootfs.mount: Deactivated successfully. Jul 6 23:55:46.774575 kubelet[2497]: E0706 23:55:46.773449 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9x6m" podUID="7abd3305-5de3-4e82-84ee-e697b6b22043" Jul 6 23:55:48.776506 kubelet[2497]: E0706 23:55:48.776391 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9x6m" podUID="7abd3305-5de3-4e82-84ee-e697b6b22043" Jul 6 23:55:50.774399 kubelet[2497]: E0706 23:55:50.773498 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9x6m" podUID="7abd3305-5de3-4e82-84ee-e697b6b22043" Jul 6 23:55:51.396049 containerd[1469]: time="2025-07-06T23:55:51.394847133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:51.396049 containerd[1469]: time="2025-07-06T23:55:51.395775596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 6 23:55:51.396049 containerd[1469]: time="2025-07-06T23:55:51.395977177Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:51.399868 containerd[1469]: time="2025-07-06T23:55:51.399801761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:51.401357 containerd[1469]: time="2025-07-06T23:55:51.401303850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id 
\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.502757324s" Jul 6 23:55:51.401357 containerd[1469]: time="2025-07-06T23:55:51.401357558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 6 23:55:51.405666 containerd[1469]: time="2025-07-06T23:55:51.405607343Z" level=info msg="CreateContainer within sandbox \"91e46c8a8928fc10f8c60d730c495356737e5f80c5bdefad9e078d913d46055d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:55:51.458425 containerd[1469]: time="2025-07-06T23:55:51.458367416Z" level=info msg="CreateContainer within sandbox \"91e46c8a8928fc10f8c60d730c495356737e5f80c5bdefad9e078d913d46055d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8ac06d464bdc821df93d63d4ff4c9cadedd4c6ff7d943c8c45f850581e4fe926\"" Jul 6 23:55:51.459716 containerd[1469]: time="2025-07-06T23:55:51.459399738Z" level=info msg="StartContainer for \"8ac06d464bdc821df93d63d4ff4c9cadedd4c6ff7d943c8c45f850581e4fe926\"" Jul 6 23:55:51.551801 systemd[1]: Started cri-containerd-8ac06d464bdc821df93d63d4ff4c9cadedd4c6ff7d943c8c45f850581e4fe926.scope - libcontainer container 8ac06d464bdc821df93d63d4ff4c9cadedd4c6ff7d943c8c45f850581e4fe926. Jul 6 23:55:51.590819 containerd[1469]: time="2025-07-06T23:55:51.590598313Z" level=info msg="StartContainer for \"8ac06d464bdc821df93d63d4ff4c9cadedd4c6ff7d943c8c45f850581e4fe926\" returns successfully" Jul 6 23:55:52.303096 systemd[1]: cri-containerd-8ac06d464bdc821df93d63d4ff4c9cadedd4c6ff7d943c8c45f850581e4fe926.scope: Deactivated successfully. Jul 6 23:55:52.341906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ac06d464bdc821df93d63d4ff4c9cadedd4c6ff7d943c8c45f850581e4fe926-rootfs.mount: Deactivated successfully. Jul 6 23:55:52.345326 containerd[1469]: time="2025-07-06T23:55:52.345253013Z" level=info msg="shim disconnected" id=8ac06d464bdc821df93d63d4ff4c9cadedd4c6ff7d943c8c45f850581e4fe926 namespace=k8s.io Jul 6 23:55:52.345326 containerd[1469]: time="2025-07-06T23:55:52.345321034Z" level=warning msg="cleaning up after shim disconnected" id=8ac06d464bdc821df93d63d4ff4c9cadedd4c6ff7d943c8c45f850581e4fe926 namespace=k8s.io Jul 6 23:55:52.345326 containerd[1469]: time="2025-07-06T23:55:52.345330001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:55:52.355311 kubelet[2497]: I0706 23:55:52.355277 2497 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:55:52.411360 systemd[1]: Created slice kubepods-burstable-pod18aa2971_3783_48e6_bae4_2b9283bfdea3.slice - libcontainer container kubepods-burstable-pod18aa2971_3783_48e6_bae4_2b9283bfdea3.slice. Jul 6 23:55:52.419052 systemd[1]: Created slice kubepods-burstable-poda6b54d7d_c374_4342_81a5_36baa376812a.slice - libcontainer container kubepods-burstable-poda6b54d7d_c374_4342_81a5_36baa376812a.slice. 
Jul 6 23:55:52.426088 kubelet[2497]: I0706 23:55:52.425608 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b4d620c4-ff0d-4798-9fbc-b59167726f3d-calico-apiserver-certs\") pod \"calico-apiserver-6d4c5f94cc-brq2m\" (UID: \"b4d620c4-ff0d-4798-9fbc-b59167726f3d\") " pod="calico-apiserver/calico-apiserver-6d4c5f94cc-brq2m" Jul 6 23:55:52.426088 kubelet[2497]: I0706 23:55:52.425644 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6b54d7d-c374-4342-81a5-36baa376812a-config-volume\") pod \"coredns-7c65d6cfc9-kxtht\" (UID: \"a6b54d7d-c374-4342-81a5-36baa376812a\") " pod="kube-system/coredns-7c65d6cfc9-kxtht" Jul 6 23:55:52.426088 kubelet[2497]: I0706 23:55:52.425664 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9hlb\" (UniqueName: \"kubernetes.io/projected/a6b54d7d-c374-4342-81a5-36baa376812a-kube-api-access-z9hlb\") pod \"coredns-7c65d6cfc9-kxtht\" (UID: \"a6b54d7d-c374-4342-81a5-36baa376812a\") " pod="kube-system/coredns-7c65d6cfc9-kxtht" Jul 6 23:55:52.426088 kubelet[2497]: I0706 23:55:52.425694 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhxnm\" (UniqueName: \"kubernetes.io/projected/b4d620c4-ff0d-4798-9fbc-b59167726f3d-kube-api-access-lhxnm\") pod \"calico-apiserver-6d4c5f94cc-brq2m\" (UID: \"b4d620c4-ff0d-4798-9fbc-b59167726f3d\") " pod="calico-apiserver/calico-apiserver-6d4c5f94cc-brq2m" Jul 6 23:55:52.426088 kubelet[2497]: I0706 23:55:52.425719 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18aa2971-3783-48e6-bae4-2b9283bfdea3-config-volume\") pod \"coredns-7c65d6cfc9-hlr8x\" (UID: \"18aa2971-3783-48e6-bae4-2b9283bfdea3\") " pod="kube-system/coredns-7c65d6cfc9-hlr8x" Jul 6 23:55:52.426433 kubelet[2497]: I0706 23:55:52.425743 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p4bm\" (UniqueName: \"kubernetes.io/projected/18aa2971-3783-48e6-bae4-2b9283bfdea3-kube-api-access-4p4bm\") pod \"coredns-7c65d6cfc9-hlr8x\" (UID: \"18aa2971-3783-48e6-bae4-2b9283bfdea3\") " pod="kube-system/coredns-7c65d6cfc9-hlr8x" Jul 6 23:55:52.430851 systemd[1]: Created slice kubepods-besteffort-podb4d620c4_ff0d_4798_9fbc_b59167726f3d.slice - libcontainer container kubepods-besteffort-podb4d620c4_ff0d_4798_9fbc_b59167726f3d.slice. Jul 6 23:55:52.448712 systemd[1]: Created slice kubepods-besteffort-pod81f527b1_eb10_4bdf_b6ab_7aba8546e99f.slice - libcontainer container kubepods-besteffort-pod81f527b1_eb10_4bdf_b6ab_7aba8546e99f.slice. 
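
[Annotation] The recurring "Nameserver limits exceeded" errors from dns.go (at 23:55:45.895 above and again at 23:55:52.715 below) come from kubelet's resolv.conf handling: a pod's resolver configuration is capped at three nameservers, and anything beyond the limit is dropped with exactly this warning; the applied line even shows a duplicated entry (67.207.67.2 appearing twice) surviving the truncation. A hedged sketch of that cap, with the function and constant names invented for illustration:

    package main

    import "fmt"

    // capNameservers keeps only the first three resolvers, mirroring the
    // three-nameserver limit kubelet enforces when building a pod's
    // resolv.conf. Names here are illustrative, not kubelet's own.
    func capNameservers(nameservers []string) []string {
        const maxNameservers = 3
        if len(nameservers) > maxNameservers {
            return nameservers[:maxNameservers]
        }
        return nameservers
    }

    func main() {
        // Hypothetical input with one resolver too many; the fourth entry
        // is dropped, which is what the kubelet warning reports.
        fmt.Println(capNameservers([]string{
            "67.207.67.2", "67.207.67.3", "67.207.67.2", "1.1.1.1",
        }))
    }
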
Jul 6 23:55:52.454819 kubelet[2497]: W0706 23:55:52.454761 2497 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081.3.4-9-29085cf50e" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.4-9-29085cf50e' and this object Jul 6 23:55:52.455910 kubelet[2497]: E0706 23:55:52.455104 2497 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081.3.4-9-29085cf50e\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.4-9-29085cf50e' and this object" logger="UnhandledError" Jul 6 23:55:52.459095 kubelet[2497]: W0706 23:55:52.455758 2497 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4081.3.4-9-29085cf50e" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.4-9-29085cf50e' and this object Jul 6 23:55:52.459095 kubelet[2497]: E0706 23:55:52.456878 2497 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.4-9-29085cf50e\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.4-9-29085cf50e' and this object" logger="UnhandledError" Jul 6 23:55:52.464428 systemd[1]: Created slice kubepods-besteffort-pod15633098_99cc_4da2_aa2e_7ce63afd2881.slice - libcontainer container kubepods-besteffort-pod15633098_99cc_4da2_aa2e_7ce63afd2881.slice. Jul 6 23:55:52.488321 systemd[1]: Created slice kubepods-besteffort-pod440a2155_cfea_4aaa_b248_ccfd5a0a677a.slice - libcontainer container kubepods-besteffort-pod440a2155_cfea_4aaa_b248_ccfd5a0a677a.slice. Jul 6 23:55:52.502417 systemd[1]: Created slice kubepods-besteffort-pod4e9dee5d_e24d_4799_b79a_36586ddb42a9.slice - libcontainer container kubepods-besteffort-pod4e9dee5d_e24d_4799_b79a_36586ddb42a9.slice. 
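
[Annotation] The two reflector failures above are not network errors but the node authorizer at work: a kubelet may only read Secrets and ConfigMaps referenced by pods already bound to its node, and until the whisker pod is fully registered there is "no relationship found" between node ci-4081.3.4-9-29085cf50e and the whisker-backend-key-pair / whisker-ca-bundle objects, so the list/watch is denied and retried. A simplified reduction of that relationship check, with all types and data invented for the example:

    package main

    import "fmt"

    // Illustrative reduction of the node authorizer's graph check: a
    // kubelet may read a secret only if some pod bound to its node
    // references it. Not the real authorizer code.
    type podRef struct {
        node    string
        secrets []string
    }

    func nodeMayReadSecret(node, secret string, pods []podRef) bool {
        for _, p := range pods {
            if p.node != node {
                continue
            }
            for _, s := range p.secrets {
                if s == secret {
                    return true // relationship found: a pod here uses it
                }
            }
        }
        return false // "no relationship found between node and this object"
    }

    func main() {
        pods := []podRef{} // whisker pod not yet visible to the authorizer
        fmt.Println(nodeMayReadSecret("ci-4081.3.4-9-29085cf50e",
            "whisker-backend-key-pair", pods)) // false -> list/watch denied
    }
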
Jul 6 23:55:52.627115 kubelet[2497]: I0706 23:55:52.627025 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96fnq\" (UniqueName: \"kubernetes.io/projected/15633098-99cc-4da2-aa2e-7ce63afd2881-kube-api-access-96fnq\") pod \"calico-kube-controllers-7684c4899d-9vhnf\" (UID: \"15633098-99cc-4da2-aa2e-7ce63afd2881\") " pod="calico-system/calico-kube-controllers-7684c4899d-9vhnf" Jul 6 23:55:52.627914 kubelet[2497]: I0706 23:55:52.627444 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfb5p\" (UniqueName: \"kubernetes.io/projected/4e9dee5d-e24d-4799-b79a-36586ddb42a9-kube-api-access-mfb5p\") pod \"goldmane-58fd7646b9-wb46p\" (UID: \"4e9dee5d-e24d-4799-b79a-36586ddb42a9\") " pod="calico-system/goldmane-58fd7646b9-wb46p" Jul 6 23:55:52.627914 kubelet[2497]: I0706 23:55:52.627493 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-whisker-backend-key-pair\") pod \"whisker-577555dc9b-7t5dc\" (UID: \"81f527b1-eb10-4bdf-b6ab-7aba8546e99f\") " pod="calico-system/whisker-577555dc9b-7t5dc" Jul 6 23:55:52.627914 kubelet[2497]: I0706 23:55:52.627537 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-whisker-ca-bundle\") pod \"whisker-577555dc9b-7t5dc\" (UID: \"81f527b1-eb10-4bdf-b6ab-7aba8546e99f\") " pod="calico-system/whisker-577555dc9b-7t5dc" Jul 6 23:55:52.627914 kubelet[2497]: I0706 23:55:52.627565 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hspvj\" (UniqueName: \"kubernetes.io/projected/440a2155-cfea-4aaa-b248-ccfd5a0a677a-kube-api-access-hspvj\") pod \"calico-apiserver-6d4c5f94cc-28d9v\" (UID: \"440a2155-cfea-4aaa-b248-ccfd5a0a677a\") " pod="calico-apiserver/calico-apiserver-6d4c5f94cc-28d9v" Jul 6 23:55:52.627914 kubelet[2497]: I0706 23:55:52.627617 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e9dee5d-e24d-4799-b79a-36586ddb42a9-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-wb46p\" (UID: \"4e9dee5d-e24d-4799-b79a-36586ddb42a9\") " pod="calico-system/goldmane-58fd7646b9-wb46p" Jul 6 23:55:52.628239 kubelet[2497]: I0706 23:55:52.627681 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15633098-99cc-4da2-aa2e-7ce63afd2881-tigera-ca-bundle\") pod \"calico-kube-controllers-7684c4899d-9vhnf\" (UID: \"15633098-99cc-4da2-aa2e-7ce63afd2881\") " pod="calico-system/calico-kube-controllers-7684c4899d-9vhnf" Jul 6 23:55:52.628239 kubelet[2497]: I0706 23:55:52.627731 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d626g\" (UniqueName: \"kubernetes.io/projected/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-kube-api-access-d626g\") pod \"whisker-577555dc9b-7t5dc\" (UID: \"81f527b1-eb10-4bdf-b6ab-7aba8546e99f\") " pod="calico-system/whisker-577555dc9b-7t5dc" Jul 6 23:55:52.628239 kubelet[2497]: I0706 23:55:52.627756 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4e9dee5d-e24d-4799-b79a-36586ddb42a9-config\") pod \"goldmane-58fd7646b9-wb46p\" (UID: \"4e9dee5d-e24d-4799-b79a-36586ddb42a9\") " pod="calico-system/goldmane-58fd7646b9-wb46p" Jul 6 23:55:52.628239 kubelet[2497]: I0706 23:55:52.627779 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4e9dee5d-e24d-4799-b79a-36586ddb42a9-goldmane-key-pair\") pod \"goldmane-58fd7646b9-wb46p\" (UID: \"4e9dee5d-e24d-4799-b79a-36586ddb42a9\") " pod="calico-system/goldmane-58fd7646b9-wb46p" Jul 6 23:55:52.628239 kubelet[2497]: I0706 23:55:52.627804 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/440a2155-cfea-4aaa-b248-ccfd5a0a677a-calico-apiserver-certs\") pod \"calico-apiserver-6d4c5f94cc-28d9v\" (UID: \"440a2155-cfea-4aaa-b248-ccfd5a0a677a\") " pod="calico-apiserver/calico-apiserver-6d4c5f94cc-28d9v" Jul 6 23:55:52.715224 kubelet[2497]: E0706 23:55:52.715180 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:52.716227 containerd[1469]: time="2025-07-06T23:55:52.716173740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlr8x,Uid:18aa2971-3783-48e6-bae4-2b9283bfdea3,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:52.732193 kubelet[2497]: E0706 23:55:52.726536 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:52.732355 containerd[1469]: time="2025-07-06T23:55:52.729374936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kxtht,Uid:a6b54d7d-c374-4342-81a5-36baa376812a,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:52.755875 containerd[1469]: time="2025-07-06T23:55:52.755829971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c5f94cc-brq2m,Uid:b4d620c4-ff0d-4798-9fbc-b59167726f3d,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:55:52.796298 systemd[1]: Created slice kubepods-besteffort-pod7abd3305_5de3_4e82_84ee_e697b6b22043.slice - libcontainer container kubepods-besteffort-pod7abd3305_5de3_4e82_84ee_e697b6b22043.slice. 
Jul 6 23:55:52.802622 containerd[1469]: time="2025-07-06T23:55:52.802514507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9x6m,Uid:7abd3305-5de3-4e82-84ee-e697b6b22043,Namespace:calico-system,Attempt:0,}" Jul 6 23:55:52.810994 containerd[1469]: time="2025-07-06T23:55:52.810842794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c5f94cc-28d9v,Uid:440a2155-cfea-4aaa-b248-ccfd5a0a677a,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:55:52.844722 containerd[1469]: time="2025-07-06T23:55:52.844664874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wb46p,Uid:4e9dee5d-e24d-4799-b79a-36586ddb42a9,Namespace:calico-system,Attempt:0,}" Jul 6 23:55:52.951494 containerd[1469]: time="2025-07-06T23:55:52.951254077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:55:53.088933 containerd[1469]: time="2025-07-06T23:55:53.088331568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7684c4899d-9vhnf,Uid:15633098-99cc-4da2-aa2e-7ce63afd2881,Namespace:calico-system,Attempt:0,}" Jul 6 23:55:53.170837 containerd[1469]: time="2025-07-06T23:55:53.170452106Z" level=error msg="Failed to destroy network for sandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.171501 containerd[1469]: time="2025-07-06T23:55:53.171449093Z" level=error msg="encountered an error cleaning up failed sandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.171604 containerd[1469]: time="2025-07-06T23:55:53.171548264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlr8x,Uid:18aa2971-3783-48e6-bae4-2b9283bfdea3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.191096 containerd[1469]: time="2025-07-06T23:55:53.190425298Z" level=error msg="Failed to destroy network for sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.192470 kubelet[2497]: E0706 23:55:53.192055 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.192470 kubelet[2497]: E0706 23:55:53.192146 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hlr8x" Jul 6 23:55:53.192470 kubelet[2497]: E0706 23:55:53.192169 2497 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hlr8x" Jul 6 23:55:53.192681 kubelet[2497]: E0706 23:55:53.192234 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hlr8x_kube-system(18aa2971-3783-48e6-bae4-2b9283bfdea3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hlr8x_kube-system(18aa2971-3783-48e6-bae4-2b9283bfdea3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hlr8x" podUID="18aa2971-3783-48e6-bae4-2b9283bfdea3" Jul 6 23:55:53.196624 containerd[1469]: time="2025-07-06T23:55:53.193898377Z" level=error msg="Failed to destroy network for sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.197790 containerd[1469]: time="2025-07-06T23:55:53.196900093Z" level=error msg="encountered an error cleaning up failed sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.197790 containerd[1469]: time="2025-07-06T23:55:53.196967763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9x6m,Uid:7abd3305-5de3-4e82-84ee-e697b6b22043,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.199170 kubelet[2497]: E0706 23:55:53.199128 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.199295 kubelet[2497]: E0706 23:55:53.199205 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c9x6m" Jul 6 23:55:53.199295 kubelet[2497]: E0706 23:55:53.199228 2497 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c9x6m" Jul 6 23:55:53.199381 kubelet[2497]: E0706 23:55:53.199338 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c9x6m_calico-system(7abd3305-5de3-4e82-84ee-e697b6b22043)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c9x6m_calico-system(7abd3305-5de3-4e82-84ee-e697b6b22043)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c9x6m" podUID="7abd3305-5de3-4e82-84ee-e697b6b22043" Jul 6 23:55:53.199699 containerd[1469]: time="2025-07-06T23:55:53.199662793Z" level=error msg="encountered an error cleaning up failed sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.199875 containerd[1469]: time="2025-07-06T23:55:53.199807462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wb46p,Uid:4e9dee5d-e24d-4799-b79a-36586ddb42a9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.202186 containerd[1469]: time="2025-07-06T23:55:53.201896632Z" level=error msg="Failed to destroy network for sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.203097 containerd[1469]: time="2025-07-06T23:55:53.203004340Z" level=error msg="encountered an error cleaning up failed sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.203258 containerd[1469]: time="2025-07-06T23:55:53.203073267Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d4c5f94cc-brq2m,Uid:b4d620c4-ff0d-4798-9fbc-b59167726f3d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.203635 kubelet[2497]: E0706 23:55:53.203593 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.203704 kubelet[2497]: E0706 23:55:53.203658 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4c5f94cc-brq2m" Jul 6 23:55:53.203738 kubelet[2497]: E0706 23:55:53.203683 2497 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4c5f94cc-brq2m" Jul 6 23:55:53.203787 kubelet[2497]: E0706 23:55:53.203747 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d4c5f94cc-brq2m_calico-apiserver(b4d620c4-ff0d-4798-9fbc-b59167726f3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d4c5f94cc-brq2m_calico-apiserver(b4d620c4-ff0d-4798-9fbc-b59167726f3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4c5f94cc-brq2m" podUID="b4d620c4-ff0d-4798-9fbc-b59167726f3d" Jul 6 23:55:53.203846 kubelet[2497]: E0706 23:55:53.203810 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.203846 kubelet[2497]: E0706 23:55:53.203827 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-wb46p" Jul 6 23:55:53.203846 kubelet[2497]: E0706 23:55:53.203840 2497 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-wb46p" Jul 6 23:55:53.203928 kubelet[2497]: E0706 23:55:53.203861 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-wb46p_calico-system(4e9dee5d-e24d-4799-b79a-36586ddb42a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-wb46p_calico-system(4e9dee5d-e24d-4799-b79a-36586ddb42a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-wb46p" podUID="4e9dee5d-e24d-4799-b79a-36586ddb42a9" Jul 6 23:55:53.221648 containerd[1469]: time="2025-07-06T23:55:53.221588557Z" level=error msg="Failed to destroy network for sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.222675 containerd[1469]: time="2025-07-06T23:55:53.222630118Z" level=error msg="encountered an error cleaning up failed sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.222881 containerd[1469]: time="2025-07-06T23:55:53.222828605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kxtht,Uid:a6b54d7d-c374-4342-81a5-36baa376812a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.223314 kubelet[2497]: E0706 23:55:53.223154 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.223314 kubelet[2497]: E0706 23:55:53.223229 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kxtht" Jul 6 23:55:53.223314 kubelet[2497]: E0706 23:55:53.223255 2497 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kxtht" Jul 6 23:55:53.223495 kubelet[2497]: E0706 23:55:53.223301 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kxtht_kube-system(a6b54d7d-c374-4342-81a5-36baa376812a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kxtht_kube-system(a6b54d7d-c374-4342-81a5-36baa376812a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kxtht" podUID="a6b54d7d-c374-4342-81a5-36baa376812a" Jul 6 23:55:53.246051 containerd[1469]: time="2025-07-06T23:55:53.246006643Z" level=error msg="Failed to destroy network for sandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.247563 containerd[1469]: time="2025-07-06T23:55:53.247388660Z" level=error msg="encountered an error cleaning up failed sandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.247563 containerd[1469]: time="2025-07-06T23:55:53.247463132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c5f94cc-28d9v,Uid:440a2155-cfea-4aaa-b248-ccfd5a0a677a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.248305 kubelet[2497]: E0706 23:55:53.247768 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.248305 kubelet[2497]: E0706 23:55:53.247863 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4c5f94cc-28d9v" Jul 6 23:55:53.248305 kubelet[2497]: E0706 23:55:53.247923 2497 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4c5f94cc-28d9v" Jul 6 23:55:53.248479 kubelet[2497]: E0706 23:55:53.248014 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d4c5f94cc-28d9v_calico-apiserver(440a2155-cfea-4aaa-b248-ccfd5a0a677a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d4c5f94cc-28d9v_calico-apiserver(440a2155-cfea-4aaa-b248-ccfd5a0a677a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4c5f94cc-28d9v" podUID="440a2155-cfea-4aaa-b248-ccfd5a0a677a" Jul 6 23:55:53.279627 containerd[1469]: time="2025-07-06T23:55:53.278998717Z" level=error msg="Failed to destroy network for sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.279627 containerd[1469]: time="2025-07-06T23:55:53.279410300Z" level=error msg="encountered an error cleaning up failed sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.279627 containerd[1469]: time="2025-07-06T23:55:53.279482783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7684c4899d-9vhnf,Uid:15633098-99cc-4da2-aa2e-7ce63afd2881,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.280023 kubelet[2497]: E0706 23:55:53.279795 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:53.280023 kubelet[2497]: E0706 23:55:53.279881 2497 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7684c4899d-9vhnf" Jul 6 23:55:53.280023 kubelet[2497]: E0706 23:55:53.279908 2497 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7684c4899d-9vhnf" Jul 6 23:55:53.280161 kubelet[2497]: E0706 23:55:53.279976 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7684c4899d-9vhnf_calico-system(15633098-99cc-4da2-aa2e-7ce63afd2881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7684c4899d-9vhnf_calico-system(15633098-99cc-4da2-aa2e-7ce63afd2881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7684c4899d-9vhnf" podUID="15633098-99cc-4da2-aa2e-7ce63afd2881" Jul 6 23:55:53.553368 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5-shm.mount: Deactivated successfully. Jul 6 23:55:53.741402 kubelet[2497]: E0706 23:55:53.741050 2497 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:55:53.741402 kubelet[2497]: E0706 23:55:53.741184 2497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-whisker-ca-bundle podName:81f527b1-eb10-4bdf-b6ab-7aba8546e99f nodeName:}" failed. No retries permitted until 2025-07-06 23:55:54.241161816 +0000 UTC m=+35.631122131 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-whisker-ca-bundle") pod "whisker-577555dc9b-7t5dc" (UID: "81f527b1-eb10-4bdf-b6ab-7aba8546e99f") : failed to sync configmap cache: timed out waiting for the condition Jul 6 23:55:53.949903 kubelet[2497]: I0706 23:55:53.949832 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:55:53.954869 kubelet[2497]: I0706 23:55:53.951886 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:55:53.954869 kubelet[2497]: I0706 23:55:53.954581 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:55:53.959940 containerd[1469]: time="2025-07-06T23:55:53.959882734Z" level=info msg="StopPodSandbox for \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\"" Jul 6 23:55:53.960628 containerd[1469]: time="2025-07-06T23:55:53.960590116Z" level=info msg="StopPodSandbox for \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\"" Jul 6 23:55:53.964578 containerd[1469]: time="2025-07-06T23:55:53.964139611Z" level=info msg="Ensure that sandbox 70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3 in task-service has been cleanup successfully" Jul 6 23:55:53.964578 containerd[1469]: time="2025-07-06T23:55:53.964393115Z" level=info msg="StopPodSandbox for \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\"" Jul 6 23:55:53.964827 containerd[1469]: time="2025-07-06T23:55:53.964617216Z" level=info msg="Ensure that sandbox 91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b in task-service has been cleanup successfully" Jul 6 23:55:53.966459 containerd[1469]: time="2025-07-06T23:55:53.964147337Z" level=info msg="Ensure that sandbox 5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2 in task-service has been cleanup successfully" Jul 6 23:55:53.971623 kubelet[2497]: I0706 23:55:53.970875 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:55:53.973660 containerd[1469]: time="2025-07-06T23:55:53.973609467Z" level=info msg="StopPodSandbox for \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\"" Jul 6 23:55:53.973819 containerd[1469]: time="2025-07-06T23:55:53.973802868Z" level=info msg="Ensure that sandbox 609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e in task-service has been cleanup successfully" Jul 6 23:55:53.979978 kubelet[2497]: I0706 23:55:53.979393 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:55:53.981389 containerd[1469]: time="2025-07-06T23:55:53.981332227Z" level=info msg="StopPodSandbox for \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\"" Jul 6 23:55:53.982303 containerd[1469]: time="2025-07-06T23:55:53.982275559Z" level=info msg="Ensure that sandbox 8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5 in task-service has been cleanup successfully" Jul 6 23:55:53.993375 kubelet[2497]: I0706 23:55:53.993290 2497 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:55:53.997174 containerd[1469]: time="2025-07-06T23:55:53.997043762Z" level=info msg="StopPodSandbox for \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\"" Jul 6 23:55:53.998448 containerd[1469]: time="2025-07-06T23:55:53.998404320Z" level=info msg="Ensure that sandbox 4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c in task-service has been cleanup successfully" Jul 6 23:55:54.003144 kubelet[2497]: I0706 23:55:54.002256 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:55:54.004831 containerd[1469]: time="2025-07-06T23:55:54.004504523Z" level=info msg="StopPodSandbox for \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\"" Jul 6 23:55:54.008759 containerd[1469]: time="2025-07-06T23:55:54.008705885Z" level=info msg="Ensure that sandbox d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f in task-service has been cleanup successfully" Jul 6 23:55:54.110389 containerd[1469]: time="2025-07-06T23:55:54.110318087Z" level=error msg="StopPodSandbox for \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\" failed" error="failed to destroy network for sandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:54.111501 kubelet[2497]: E0706 23:55:54.111294 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:55:54.111501 kubelet[2497]: E0706 23:55:54.111366 2497 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3"} Jul 6 23:55:54.111501 kubelet[2497]: E0706 23:55:54.111431 2497 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"440a2155-cfea-4aaa-b248-ccfd5a0a677a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:55:54.111501 kubelet[2497]: E0706 23:55:54.111454 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"440a2155-cfea-4aaa-b248-ccfd5a0a677a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4c5f94cc-28d9v" 
podUID="440a2155-cfea-4aaa-b248-ccfd5a0a677a" Jul 6 23:55:54.114038 containerd[1469]: time="2025-07-06T23:55:54.113291689Z" level=error msg="StopPodSandbox for \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\" failed" error="failed to destroy network for sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:54.114175 kubelet[2497]: E0706 23:55:54.113546 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:55:54.114175 kubelet[2497]: E0706 23:55:54.113597 2497 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b"} Jul 6 23:55:54.114175 kubelet[2497]: E0706 23:55:54.113637 2497 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4d620c4-ff0d-4798-9fbc-b59167726f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:55:54.114175 kubelet[2497]: E0706 23:55:54.113662 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4d620c4-ff0d-4798-9fbc-b59167726f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4c5f94cc-brq2m" podUID="b4d620c4-ff0d-4798-9fbc-b59167726f3d" Jul 6 23:55:54.126774 containerd[1469]: time="2025-07-06T23:55:54.126708034Z" level=error msg="StopPodSandbox for \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\" failed" error="failed to destroy network for sandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:54.127286 kubelet[2497]: E0706 23:55:54.127242 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:55:54.127852 kubelet[2497]: 
E0706 23:55:54.127416 2497 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5"} Jul 6 23:55:54.127852 kubelet[2497]: E0706 23:55:54.127457 2497 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18aa2971-3783-48e6-bae4-2b9283bfdea3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:55:54.127852 kubelet[2497]: E0706 23:55:54.127484 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18aa2971-3783-48e6-bae4-2b9283bfdea3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hlr8x" podUID="18aa2971-3783-48e6-bae4-2b9283bfdea3" Jul 6 23:55:54.127852 kubelet[2497]: E0706 23:55:54.127664 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:55:54.127852 kubelet[2497]: E0706 23:55:54.127727 2497 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e"} Jul 6 23:55:54.128373 containerd[1469]: time="2025-07-06T23:55:54.127441771Z" level=error msg="StopPodSandbox for \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\" failed" error="failed to destroy network for sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:54.128431 kubelet[2497]: E0706 23:55:54.127769 2497 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6b54d7d-c374-4342-81a5-36baa376812a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:55:54.128431 kubelet[2497]: E0706 23:55:54.127805 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6b54d7d-c374-4342-81a5-36baa376812a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kxtht" podUID="a6b54d7d-c374-4342-81a5-36baa376812a" Jul 6 23:55:54.134658 containerd[1469]: time="2025-07-06T23:55:54.134600342Z" level=error msg="StopPodSandbox for \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\" failed" error="failed to destroy network for sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:54.135164 kubelet[2497]: E0706 23:55:54.134867 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:55:54.135164 kubelet[2497]: E0706 23:55:54.134937 2497 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2"} Jul 6 23:55:54.135164 kubelet[2497]: E0706 23:55:54.134987 2497 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7abd3305-5de3-4e82-84ee-e697b6b22043\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:55:54.135164 kubelet[2497]: E0706 23:55:54.135021 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7abd3305-5de3-4e82-84ee-e697b6b22043\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c9x6m" podUID="7abd3305-5de3-4e82-84ee-e697b6b22043" Jul 6 23:55:54.147109 containerd[1469]: time="2025-07-06T23:55:54.147021832Z" level=error msg="StopPodSandbox for \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\" failed" error="failed to destroy network for sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:54.147409 kubelet[2497]: E0706 23:55:54.147367 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:55:54.147486 kubelet[2497]: E0706 23:55:54.147433 2497 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f"} Jul 6 23:55:54.147486 kubelet[2497]: E0706 23:55:54.147468 2497 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15633098-99cc-4da2-aa2e-7ce63afd2881\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:55:54.147591 kubelet[2497]: E0706 23:55:54.147496 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15633098-99cc-4da2-aa2e-7ce63afd2881\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7684c4899d-9vhnf" podUID="15633098-99cc-4da2-aa2e-7ce63afd2881" Jul 6 23:55:54.148653 containerd[1469]: time="2025-07-06T23:55:54.148611704Z" level=error msg="StopPodSandbox for \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\" failed" error="failed to destroy network for sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:54.148860 kubelet[2497]: E0706 23:55:54.148813 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:55:54.148922 kubelet[2497]: E0706 23:55:54.148863 2497 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c"} Jul 6 23:55:54.148922 kubelet[2497]: E0706 23:55:54.148892 2497 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e9dee5d-e24d-4799-b79a-36586ddb42a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:55:54.149014 kubelet[2497]: E0706 
23:55:54.148942 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e9dee5d-e24d-4799-b79a-36586ddb42a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-wb46p" podUID="4e9dee5d-e24d-4799-b79a-36586ddb42a9" Jul 6 23:55:54.263119 containerd[1469]: time="2025-07-06T23:55:54.262044988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-577555dc9b-7t5dc,Uid:81f527b1-eb10-4bdf-b6ab-7aba8546e99f,Namespace:calico-system,Attempt:0,}" Jul 6 23:55:54.346042 containerd[1469]: time="2025-07-06T23:55:54.345976051Z" level=error msg="Failed to destroy network for sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:54.346511 containerd[1469]: time="2025-07-06T23:55:54.346466672Z" level=error msg="encountered an error cleaning up failed sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:54.346639 containerd[1469]: time="2025-07-06T23:55:54.346598554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-577555dc9b-7t5dc,Uid:81f527b1-eb10-4bdf-b6ab-7aba8546e99f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:54.348458 kubelet[2497]: E0706 23:55:54.348280 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:54.348458 kubelet[2497]: E0706 23:55:54.348378 2497 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-577555dc9b-7t5dc" Jul 6 23:55:54.348458 kubelet[2497]: E0706 23:55:54.348401 2497 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/whisker-577555dc9b-7t5dc" Jul 6 23:55:54.348919 kubelet[2497]: E0706 23:55:54.348661 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-577555dc9b-7t5dc_calico-system(81f527b1-eb10-4bdf-b6ab-7aba8546e99f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-577555dc9b-7t5dc_calico-system(81f527b1-eb10-4bdf-b6ab-7aba8546e99f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-577555dc9b-7t5dc" podUID="81f527b1-eb10-4bdf-b6ab-7aba8546e99f" Jul 6 23:55:54.349794 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3-shm.mount: Deactivated successfully. Jul 6 23:55:55.009085 kubelet[2497]: I0706 23:55:55.007718 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:55:55.009726 containerd[1469]: time="2025-07-06T23:55:55.008394614Z" level=info msg="StopPodSandbox for \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\"" Jul 6 23:55:55.009726 containerd[1469]: time="2025-07-06T23:55:55.008676528Z" level=info msg="Ensure that sandbox f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3 in task-service has been cleanup successfully" Jul 6 23:55:55.055667 containerd[1469]: time="2025-07-06T23:55:55.055616969Z" level=error msg="StopPodSandbox for \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\" failed" error="failed to destroy network for sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:55:55.056218 kubelet[2497]: E0706 23:55:55.056051 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:55:55.056218 kubelet[2497]: E0706 23:55:55.056125 2497 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3"} Jul 6 23:55:55.056218 kubelet[2497]: E0706 23:55:55.056162 2497 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"81f527b1-eb10-4bdf-b6ab-7aba8546e99f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:55:55.056218 kubelet[2497]: E0706 
23:55:55.056185 2497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"81f527b1-eb10-4bdf-b6ab-7aba8546e99f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-577555dc9b-7t5dc" podUID="81f527b1-eb10-4bdf-b6ab-7aba8546e99f" Jul 6 23:55:56.359335 kubelet[2497]: I0706 23:55:56.359289 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:55:56.360496 kubelet[2497]: E0706 23:55:56.360464 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:55:57.017170 kubelet[2497]: E0706 23:55:57.016547 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:01.115686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3779337962.mount: Deactivated successfully. Jul 6 23:56:01.233155 containerd[1469]: time="2025-07-06T23:56:01.192274425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 6 23:56:01.233155 containerd[1469]: time="2025-07-06T23:56:01.218618149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.255034768s" Jul 6 23:56:01.233155 containerd[1469]: time="2025-07-06T23:56:01.231647119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 6 23:56:01.244164 containerd[1469]: time="2025-07-06T23:56:01.243206783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:01.273678 containerd[1469]: time="2025-07-06T23:56:01.272802550Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:01.273678 containerd[1469]: time="2025-07-06T23:56:01.273501846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:01.281409 containerd[1469]: time="2025-07-06T23:56:01.281333565Z" level=info msg="CreateContainer within sandbox \"91e46c8a8928fc10f8c60d730c495356737e5f80c5bdefad9e078d913d46055d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:56:01.356700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286982041.mount: Deactivated successfully. 
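Every failure in the burst above has the same root cause: before any network add or delete, the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes once it is running, and calico/node is only now being pulled. A minimal sketch of that gate in Go, written as a standalone check rather than Calico's actual plugin source:

```go
package main

import (
	"fmt"
	"os"
)

// nodenameFile is written by the calico/node container and bind-mounted
// at /var/lib/calico/ on the host; until it exists, no CNI ADD/DEL can work.
const nodenameFile = "/var/lib/calico/nodename"

func ensureNodeReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		// The wrapped error reads "stat /var/lib/calico/nodename: no such
		// file or directory", the exact string in the kubelet errors above.
		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return nil
}

func main() {
	if err := ensureNodeReady(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico/node ready; CNI add/delete can proceed")
}
```

Once the calico/node container created below starts and writes that file, the same StopPodSandbox calls begin to succeed.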
Jul 6 23:56:01.388888 containerd[1469]: time="2025-07-06T23:56:01.388647953Z" level=info msg="CreateContainer within sandbox \"91e46c8a8928fc10f8c60d730c495356737e5f80c5bdefad9e078d913d46055d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c1e71390ebd104a2c1d4c478f7da8bb0be725778793cba571b8d9c50e5df6680\"" Jul 6 23:56:01.392129 containerd[1469]: time="2025-07-06T23:56:01.391932714Z" level=info msg="StartContainer for \"c1e71390ebd104a2c1d4c478f7da8bb0be725778793cba571b8d9c50e5df6680\"" Jul 6 23:56:01.669618 systemd[1]: Started cri-containerd-c1e71390ebd104a2c1d4c478f7da8bb0be725778793cba571b8d9c50e5df6680.scope - libcontainer container c1e71390ebd104a2c1d4c478f7da8bb0be725778793cba571b8d9c50e5df6680. Jul 6 23:56:01.751248 containerd[1469]: time="2025-07-06T23:56:01.749766545Z" level=info msg="StartContainer for \"c1e71390ebd104a2c1d4c478f7da8bb0be725778793cba571b8d9c50e5df6680\" returns successfully" Jul 6 23:56:01.915331 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:56:01.916844 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 6 23:56:02.102454 kubelet[2497]: I0706 23:56:02.101572 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r5ql2" podStartSLOduration=1.541238667 podStartE2EDuration="21.089058934s" podCreationTimestamp="2025-07-06 23:55:41 +0000 UTC" firstStartedPulling="2025-07-06 23:55:41.68639471 +0000 UTC m=+23.076355019" lastFinishedPulling="2025-07-06 23:56:01.234214971 +0000 UTC m=+42.624175286" observedRunningTime="2025-07-06 23:56:02.087351759 +0000 UTC m=+43.477312096" watchObservedRunningTime="2025-07-06 23:56:02.089058934 +0000 UTC m=+43.479019259" Jul 6 23:56:02.353625 containerd[1469]: time="2025-07-06T23:56:02.353499183Z" level=info msg="StopPodSandbox for \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\"" Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.483 [INFO][3745] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.484 [INFO][3745] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" iface="eth0" netns="/var/run/netns/cni-e3a221c5-fed2-0000-6e2d-6f3a8cde2a6a" Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.485 [INFO][3745] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" iface="eth0" netns="/var/run/netns/cni-e3a221c5-fed2-0000-6e2d-6f3a8cde2a6a" Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.486 [INFO][3745] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" iface="eth0" netns="/var/run/netns/cni-e3a221c5-fed2-0000-6e2d-6f3a8cde2a6a" Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.486 [INFO][3745] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.486 [INFO][3745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.727 [INFO][3753] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" HandleID="k8s-pod-network.f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--577555dc9b--7t5dc-eth0" Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.731 [INFO][3753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.731 [INFO][3753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.755 [WARNING][3753] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" HandleID="k8s-pod-network.f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--577555dc9b--7t5dc-eth0" Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.755 [INFO][3753] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" HandleID="k8s-pod-network.f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--577555dc9b--7t5dc-eth0" Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.758 [INFO][3753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:02.768560 containerd[1469]: 2025-07-06 23:56:02.764 [INFO][3745] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:02.770008 containerd[1469]: time="2025-07-06T23:56:02.769580144Z" level=info msg="TearDown network for sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\" successfully" Jul 6 23:56:02.770008 containerd[1469]: time="2025-07-06T23:56:02.769625528Z" level=info msg="StopPodSandbox for \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\" returns successfully" Jul 6 23:56:02.781769 systemd[1]: run-netns-cni\x2de3a221c5\x2dfed2\x2d0000\x2d6e2d\x2d6f3a8cde2a6a.mount: Deactivated successfully. 
Jul 6 23:56:02.939913 kubelet[2497]: I0706 23:56:02.938707 2497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-whisker-ca-bundle\") pod \"81f527b1-eb10-4bdf-b6ab-7aba8546e99f\" (UID: \"81f527b1-eb10-4bdf-b6ab-7aba8546e99f\") " Jul 6 23:56:02.939913 kubelet[2497]: I0706 23:56:02.938848 2497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-whisker-backend-key-pair\") pod \"81f527b1-eb10-4bdf-b6ab-7aba8546e99f\" (UID: \"81f527b1-eb10-4bdf-b6ab-7aba8546e99f\") " Jul 6 23:56:02.939913 kubelet[2497]: I0706 23:56:02.938881 2497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d626g\" (UniqueName: \"kubernetes.io/projected/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-kube-api-access-d626g\") pod \"81f527b1-eb10-4bdf-b6ab-7aba8546e99f\" (UID: \"81f527b1-eb10-4bdf-b6ab-7aba8546e99f\") " Jul 6 23:56:02.950677 kubelet[2497]: I0706 23:56:02.949076 2497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "81f527b1-eb10-4bdf-b6ab-7aba8546e99f" (UID: "81f527b1-eb10-4bdf-b6ab-7aba8546e99f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:56:02.953385 kubelet[2497]: I0706 23:56:02.949163 2497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "81f527b1-eb10-4bdf-b6ab-7aba8546e99f" (UID: "81f527b1-eb10-4bdf-b6ab-7aba8546e99f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:56:02.952859 systemd[1]: var-lib-kubelet-pods-81f527b1\x2deb10\x2d4bdf\x2db6ab\x2d7aba8546e99f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:56:02.956144 kubelet[2497]: I0706 23:56:02.955781 2497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-kube-api-access-d626g" (OuterVolumeSpecName: "kube-api-access-d626g") pod "81f527b1-eb10-4bdf-b6ab-7aba8546e99f" (UID: "81f527b1-eb10-4bdf-b6ab-7aba8546e99f"). InnerVolumeSpecName "kube-api-access-d626g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:56:02.960261 systemd[1]: var-lib-kubelet-pods-81f527b1\x2deb10\x2d4bdf\x2db6ab\x2d7aba8546e99f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd626g.mount: Deactivated successfully. 
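The mount-unit names here (var-lib-kubelet-pods-81f527b1\x2deb10\x2d... and so on) are not corruption: systemd derives unit names from paths by turning '/' into '-' and hex-escaping other bytes, including '-' and '~', as \xXX. A simplified sketch of that escaping (see systemd-escape; leading-dot handling and other corner cases are omitted):

```go
package main

import (
	"fmt"
	"strings"
)

// isAllowed reports whether a byte may appear unescaped in a unit name.
func isAllowed(b byte) bool {
	return b == '_' || b == '.' ||
		(b >= '0' && b <= '9') ||
		(b >= 'a' && b <= 'z') ||
		(b >= 'A' && b <= 'Z')
}

// escapePath mimics `systemd-escape --path`: trim slashes, map the
// remaining '/' to '-', and hex-escape every other disallowed byte.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var sb strings.Builder
	for i := 0; i < len(p); i++ {
		switch b := p[i]; {
		case b == '/':
			sb.WriteByte('-')
		case isAllowed(b):
			sb.WriteByte(b)
		default:
			fmt.Fprintf(&sb, `\x%02x`, b)
		}
	}
	return sb.String()
}

func main() {
	p := "/var/lib/kubelet/pods/81f527b1-eb10-4bdf-b6ab-7aba8546e99f/volumes/kubernetes.io~projected/kube-api-access-d626g"
	fmt.Println(escapePath(p) + ".mount")
	// var-lib-kubelet-pods-81f527b1\x2deb10\x2d4bdf\x2db6ab\x2d7aba8546e99f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd626g.mount
}
```

The output is exactly the unit name systemd reports as deactivated above.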
Jul 6 23:56:03.040388 kubelet[2497]: I0706 23:56:03.040218 2497 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-whisker-backend-key-pair\") on node \"ci-4081.3.4-9-29085cf50e\" DevicePath \"\"" Jul 6 23:56:03.040388 kubelet[2497]: I0706 23:56:03.040264 2497 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d626g\" (UniqueName: \"kubernetes.io/projected/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-kube-api-access-d626g\") on node \"ci-4081.3.4-9-29085cf50e\" DevicePath \"\"" Jul 6 23:56:03.040388 kubelet[2497]: I0706 23:56:03.040276 2497 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f527b1-eb10-4bdf-b6ab-7aba8546e99f-whisker-ca-bundle\") on node \"ci-4081.3.4-9-29085cf50e\" DevicePath \"\"" Jul 6 23:56:03.050334 systemd[1]: Removed slice kubepods-besteffort-pod81f527b1_eb10_4bdf_b6ab_7aba8546e99f.slice - libcontainer container kubepods-besteffort-pod81f527b1_eb10_4bdf_b6ab_7aba8546e99f.slice. Jul 6 23:56:03.238162 systemd[1]: Created slice kubepods-besteffort-podd2e96589_da97_4c05_922a_18a43f800080.slice - libcontainer container kubepods-besteffort-podd2e96589_da97_4c05_922a_18a43f800080.slice. Jul 6 23:56:03.344349 kubelet[2497]: I0706 23:56:03.344296 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2e96589-da97-4c05-922a-18a43f800080-whisker-ca-bundle\") pod \"whisker-7559c46b4d-d7wjd\" (UID: \"d2e96589-da97-4c05-922a-18a43f800080\") " pod="calico-system/whisker-7559c46b4d-d7wjd" Jul 6 23:56:03.346099 kubelet[2497]: I0706 23:56:03.344362 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc77q\" (UniqueName: \"kubernetes.io/projected/d2e96589-da97-4c05-922a-18a43f800080-kube-api-access-bc77q\") pod \"whisker-7559c46b4d-d7wjd\" (UID: \"d2e96589-da97-4c05-922a-18a43f800080\") " pod="calico-system/whisker-7559c46b4d-d7wjd" Jul 6 23:56:03.346099 kubelet[2497]: I0706 23:56:03.344392 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d2e96589-da97-4c05-922a-18a43f800080-whisker-backend-key-pair\") pod \"whisker-7559c46b4d-d7wjd\" (UID: \"d2e96589-da97-4c05-922a-18a43f800080\") " pod="calico-system/whisker-7559c46b4d-d7wjd" Jul 6 23:56:03.550597 containerd[1469]: time="2025-07-06T23:56:03.550542981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7559c46b4d-d7wjd,Uid:d2e96589-da97-4c05-922a-18a43f800080,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:03.771240 systemd-networkd[1374]: cali03b97a1d858: Link UP Jul 6 23:56:03.773024 systemd-networkd[1374]: cali03b97a1d858: Gained carrier Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.623 [INFO][3799] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.645 [INFO][3799] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0 whisker-7559c46b4d- calico-system d2e96589-da97-4c05-922a-18a43f800080 958 0 2025-07-06 23:56:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7559c46b4d projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-9-29085cf50e whisker-7559c46b4d-d7wjd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali03b97a1d858 [] [] }} ContainerID="d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" Namespace="calico-system" Pod="whisker-7559c46b4d-d7wjd" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.645 [INFO][3799] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" Namespace="calico-system" Pod="whisker-7559c46b4d-d7wjd" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.688 [INFO][3810] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" HandleID="k8s-pod-network.d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.688 [INFO][3810] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" HandleID="k8s-pod-network.d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-9-29085cf50e", "pod":"whisker-7559c46b4d-d7wjd", "timestamp":"2025-07-06 23:56:03.688667201 +0000 UTC"}, Hostname:"ci-4081.3.4-9-29085cf50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.689 [INFO][3810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.689 [INFO][3810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
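The doubled dashes in names like ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0 follow a similar escaping idea: dashes inside each component are doubled so that single dashes can unambiguously separate node, orchestrator, pod, and interface. A sketch assuming exactly that encoding, inferred from the names visible in this log rather than taken from Calico's source:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeComponent doubles every dash so a single dash can act as a separator.
func escapeComponent(s string) string {
	return strings.ReplaceAll(s, "-", "--")
}

// workloadEndpointName joins the escaped components with single dashes.
func workloadEndpointName(node, orchestrator, pod, iface string) string {
	parts := []string{node, orchestrator, pod, iface}
	for i, p := range parts {
		parts[i] = escapeComponent(p)
	}
	return strings.Join(parts, "-")
}

func main() {
	fmt.Println(workloadEndpointName("ci-4081.3.4-9-29085cf50e", "k8s", "whisker-7559c46b4d-d7wjd", "eth0"))
	// ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0
}
```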
Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.689 [INFO][3810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-9-29085cf50e' Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.701 [INFO][3810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.717 [INFO][3810] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.724 [INFO][3810] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.727 [INFO][3810] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.733 [INFO][3810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.733 [INFO][3810] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.736 [INFO][3810] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3 Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.741 [INFO][3810] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.748 [INFO][3810] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.1/26] block=192.168.70.0/26 handle="k8s-pod-network.d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.748 [INFO][3810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.1/26] handle="k8s-pod-network.d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.748 [INFO][3810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
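The IPAM sequence above (look up affinities, confirm the 192.168.70.0/26 block, claim one address) boils down to a first-free scan inside a host-affine block. A simplified sketch; real Calico IPAM persists block bitmaps and handles in the datastore:

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree returns the lowest unallocated address in the block,
// skipping the block's network address.
func firstFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.70.0/26")
	allocated := map[netip.Addr]bool{}

	for i := 0; i < 2; i++ {
		ip, ok := firstFree(block, allocated)
		if !ok {
			panic("block exhausted")
		}
		allocated[ip] = true
		fmt.Println(ip) // 192.168.70.1, then 192.168.70.2, as assigned later in this log
	}
}
```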
Jul 6 23:56:03.792294 containerd[1469]: 2025-07-06 23:56:03.748 [INFO][3810] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.1/26] IPv6=[] ContainerID="d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" HandleID="k8s-pod-network.d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0" Jul 6 23:56:03.796188 containerd[1469]: 2025-07-06 23:56:03.753 [INFO][3799] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" Namespace="calico-system" Pod="whisker-7559c46b4d-d7wjd" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0", GenerateName:"whisker-7559c46b4d-", Namespace:"calico-system", SelfLink:"", UID:"d2e96589-da97-4c05-922a-18a43f800080", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7559c46b4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"", Pod:"whisker-7559c46b4d-d7wjd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali03b97a1d858", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:03.796188 containerd[1469]: 2025-07-06 23:56:03.753 [INFO][3799] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.1/32] ContainerID="d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" Namespace="calico-system" Pod="whisker-7559c46b4d-d7wjd" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0" Jul 6 23:56:03.796188 containerd[1469]: 2025-07-06 23:56:03.754 [INFO][3799] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03b97a1d858 ContainerID="d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" Namespace="calico-system" Pod="whisker-7559c46b4d-d7wjd" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0" Jul 6 23:56:03.796188 containerd[1469]: 2025-07-06 23:56:03.769 [INFO][3799] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" Namespace="calico-system" Pod="whisker-7559c46b4d-d7wjd" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0" Jul 6 23:56:03.796188 containerd[1469]: 2025-07-06 23:56:03.770 [INFO][3799] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" Namespace="calico-system" 
Pod="whisker-7559c46b4d-d7wjd" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0", GenerateName:"whisker-7559c46b4d-", Namespace:"calico-system", SelfLink:"", UID:"d2e96589-da97-4c05-922a-18a43f800080", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7559c46b4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3", Pod:"whisker-7559c46b4d-d7wjd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali03b97a1d858", MAC:"ee:12:0b:6c:6f:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:03.796188 containerd[1469]: 2025-07-06 23:56:03.781 [INFO][3799] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3" Namespace="calico-system" Pod="whisker-7559c46b4d-d7wjd" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-whisker--7559c46b4d--d7wjd-eth0" Jul 6 23:56:03.838906 containerd[1469]: time="2025-07-06T23:56:03.832885178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:03.839252 containerd[1469]: time="2025-07-06T23:56:03.838884024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:03.839585 containerd[1469]: time="2025-07-06T23:56:03.838989188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:03.839585 containerd[1469]: time="2025-07-06T23:56:03.839539519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:03.861868 systemd[1]: Started cri-containerd-d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3.scope - libcontainer container d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3. 
Jul 6 23:56:04.000183 containerd[1469]: time="2025-07-06T23:56:03.999526426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7559c46b4d-d7wjd,Uid:d2e96589-da97-4c05-922a-18a43f800080,Namespace:calico-system,Attempt:0,} returns sandbox id \"d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3\"" Jul 6 23:56:04.048667 containerd[1469]: time="2025-07-06T23:56:04.047695392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:56:04.452098 kernel: bpftool[3980]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 6 23:56:04.776143 containerd[1469]: time="2025-07-06T23:56:04.774281690Z" level=info msg="StopPodSandbox for \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\"" Jul 6 23:56:04.832031 kubelet[2497]: I0706 23:56:04.831625 2497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f527b1-eb10-4bdf-b6ab-7aba8546e99f" path="/var/lib/kubelet/pods/81f527b1-eb10-4bdf-b6ab-7aba8546e99f/volumes" Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.897 [INFO][4012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.897 [INFO][4012] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" iface="eth0" netns="/var/run/netns/cni-945d417d-1c8d-fc7e-6ace-35aa8045fbeb" Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.897 [INFO][4012] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" iface="eth0" netns="/var/run/netns/cni-945d417d-1c8d-fc7e-6ace-35aa8045fbeb" Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.897 [INFO][4012] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" iface="eth0" netns="/var/run/netns/cni-945d417d-1c8d-fc7e-6ace-35aa8045fbeb" Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.897 [INFO][4012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.897 [INFO][4012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.932 [INFO][4032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" HandleID="k8s-pod-network.609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.933 [INFO][4032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.933 [INFO][4032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.949 [WARNING][4032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" HandleID="k8s-pod-network.609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.949 [INFO][4032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" HandleID="k8s-pod-network.609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.958 [INFO][4032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:04.971565 containerd[1469]: 2025-07-06 23:56:04.964 [INFO][4012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:04.972372 containerd[1469]: time="2025-07-06T23:56:04.972287475Z" level=info msg="TearDown network for sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\" successfully" Jul 6 23:56:04.972372 containerd[1469]: time="2025-07-06T23:56:04.972320634Z" level=info msg="StopPodSandbox for \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\" returns successfully" Jul 6 23:56:04.977887 systemd[1]: run-netns-cni\x2d945d417d\x2d1c8d\x2dfc7e\x2d6ace\x2d35aa8045fbeb.mount: Deactivated successfully. Jul 6 23:56:04.983154 kubelet[2497]: E0706 23:56:04.979056 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:04.986090 containerd[1469]: time="2025-07-06T23:56:04.985723553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kxtht,Uid:a6b54d7d-c374-4342-81a5-36baa376812a,Namespace:kube-system,Attempt:1,}" Jul 6 23:56:05.088175 systemd-networkd[1374]: cali03b97a1d858: Gained IPv6LL Jul 6 23:56:05.139812 systemd-networkd[1374]: vxlan.calico: Link UP Jul 6 23:56:05.139823 systemd-networkd[1374]: vxlan.calico: Gained carrier Jul 6 23:56:05.288470 systemd-networkd[1374]: caliec36ac81e75: Link UP Jul 6 23:56:05.290400 systemd-networkd[1374]: caliec36ac81e75: Gained carrier Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.157 [INFO][4039] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0 coredns-7c65d6cfc9- kube-system a6b54d7d-c374-4342-81a5-36baa376812a 969 0 2025-07-06 23:55:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-9-29085cf50e coredns-7c65d6cfc9-kxtht eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliec36ac81e75 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxtht" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-" Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.157 [INFO][4039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxtht" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.223 [INFO][4057] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" HandleID="k8s-pod-network.36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.223 [INFO][4057] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" HandleID="k8s-pod-network.36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f3c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-9-29085cf50e", "pod":"coredns-7c65d6cfc9-kxtht", "timestamp":"2025-07-06 23:56:05.221331772 +0000 UTC"}, Hostname:"ci-4081.3.4-9-29085cf50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.224 [INFO][4057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.224 [INFO][4057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.224 [INFO][4057] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-9-29085cf50e' Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.246 [INFO][4057] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.253 [INFO][4057] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.259 [INFO][4057] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.261 [INFO][4057] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.264 [INFO][4057] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.264 [INFO][4057] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.266 [INFO][4057] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391 Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.274 [INFO][4057] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" host="ci-4081.3.4-9-29085cf50e" Jul 6 
23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.281 [INFO][4057] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.2/26] block=192.168.70.0/26 handle="k8s-pod-network.36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.281 [INFO][4057] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.2/26] handle="k8s-pod-network.36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.281 [INFO][4057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:05.319615 containerd[1469]: 2025-07-06 23:56:05.281 [INFO][4057] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.2/26] IPv6=[] ContainerID="36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" HandleID="k8s-pod-network.36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:05.323853 containerd[1469]: 2025-07-06 23:56:05.284 [INFO][4039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxtht" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a6b54d7d-c374-4342-81a5-36baa376812a", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"", Pod:"coredns-7c65d6cfc9-kxtht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec36ac81e75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:05.323853 containerd[1469]: 2025-07-06 23:56:05.284 [INFO][4039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.2/32] ContainerID="36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxtht" 
WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:05.323853 containerd[1469]: 2025-07-06 23:56:05.284 [INFO][4039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec36ac81e75 ContainerID="36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxtht" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:05.323853 containerd[1469]: 2025-07-06 23:56:05.292 [INFO][4039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxtht" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:05.323853 containerd[1469]: 2025-07-06 23:56:05.293 [INFO][4039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxtht" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a6b54d7d-c374-4342-81a5-36baa376812a", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391", Pod:"coredns-7c65d6cfc9-kxtht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec36ac81e75", MAC:"06:e5:b0:46:5d:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:05.323853 containerd[1469]: 2025-07-06 23:56:05.311 [INFO][4039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxtht" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:05.357271 containerd[1469]: time="2025-07-06T23:56:05.355360982Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:05.357271 containerd[1469]: time="2025-07-06T23:56:05.355545691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:05.357271 containerd[1469]: time="2025-07-06T23:56:05.355608712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:05.357271 containerd[1469]: time="2025-07-06T23:56:05.355992546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:05.406429 systemd[1]: Started cri-containerd-36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391.scope - libcontainer container 36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391. Jul 6 23:56:05.495987 containerd[1469]: time="2025-07-06T23:56:05.495888219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kxtht,Uid:a6b54d7d-c374-4342-81a5-36baa376812a,Namespace:kube-system,Attempt:1,} returns sandbox id \"36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391\"" Jul 6 23:56:05.498094 kubelet[2497]: E0706 23:56:05.496854 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:05.501568 containerd[1469]: time="2025-07-06T23:56:05.501531661Z" level=info msg="CreateContainer within sandbox \"36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:56:05.528349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181924824.mount: Deactivated successfully. Jul 6 23:56:05.532603 containerd[1469]: time="2025-07-06T23:56:05.532420328Z" level=info msg="CreateContainer within sandbox \"36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"714f1e38d33d12a0b0775258fedea1f453a7fca4e651693faf5dedb09a7aa0ba\"" Jul 6 23:56:05.533954 containerd[1469]: time="2025-07-06T23:56:05.533441014Z" level=info msg="StartContainer for \"714f1e38d33d12a0b0775258fedea1f453a7fca4e651693faf5dedb09a7aa0ba\"" Jul 6 23:56:05.603321 systemd[1]: Started cri-containerd-714f1e38d33d12a0b0775258fedea1f453a7fca4e651693faf5dedb09a7aa0ba.scope - libcontainer container 714f1e38d33d12a0b0775258fedea1f453a7fca4e651693faf5dedb09a7aa0ba. 
Jul 6 23:56:05.657509 containerd[1469]: time="2025-07-06T23:56:05.657320747Z" level=info msg="StartContainer for \"714f1e38d33d12a0b0775258fedea1f453a7fca4e651693faf5dedb09a7aa0ba\" returns successfully" Jul 6 23:56:05.695771 containerd[1469]: time="2025-07-06T23:56:05.694407361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:05.697263 containerd[1469]: time="2025-07-06T23:56:05.696488851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 6 23:56:05.697263 containerd[1469]: time="2025-07-06T23:56:05.697225729Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:05.701613 containerd[1469]: time="2025-07-06T23:56:05.700620869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:05.702500 containerd[1469]: time="2025-07-06T23:56:05.702451987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.654718067s" Jul 6 23:56:05.702500 containerd[1469]: time="2025-07-06T23:56:05.702493157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 6 23:56:05.707031 containerd[1469]: time="2025-07-06T23:56:05.706990336Z" level=info msg="CreateContainer within sandbox \"d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:56:05.734699 containerd[1469]: time="2025-07-06T23:56:05.734637594Z" level=info msg="CreateContainer within sandbox \"d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"97647664f97e5d44dfd3fdd84cb18773b6d0f4de9f40f2409242dd86d3bbcf8f\"" Jul 6 23:56:05.735695 containerd[1469]: time="2025-07-06T23:56:05.735639525Z" level=info msg="StartContainer for \"97647664f97e5d44dfd3fdd84cb18773b6d0f4de9f40f2409242dd86d3bbcf8f\"" Jul 6 23:56:05.777020 containerd[1469]: time="2025-07-06T23:56:05.776670987Z" level=info msg="StopPodSandbox for \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\"" Jul 6 23:56:05.803295 systemd[1]: Started cri-containerd-97647664f97e5d44dfd3fdd84cb18773b6d0f4de9f40f2409242dd86d3bbcf8f.scope - libcontainer container 97647664f97e5d44dfd3fdd84cb18773b6d0f4de9f40f2409242dd86d3bbcf8f. 
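[Annotation] The whisker pull above reports a resolved size of 6153902 bytes fetched in 1.654718067s. A quick sketch of the implied average throughput, using only the figures from the PullImage entry (the computation is illustrative, not a containerd API):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the "Pulled image ... size \"6153902\" in
	// 1.654718067s" entry above.
	const bytesRead = 6153902
	dur, _ := time.ParseDuration("1.654718067s")

	mibps := float64(bytesRead) / dur.Seconds() / (1 << 20)
	fmt.Printf("average pull throughput: %.2f MiB/s\n", mibps) // ~3.55 MiB/s
}
```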
Jul 6 23:56:05.969433 containerd[1469]: time="2025-07-06T23:56:05.968801619Z" level=info msg="StartContainer for \"97647664f97e5d44dfd3fdd84cb18773b6d0f4de9f40f2409242dd86d3bbcf8f\" returns successfully" Jul 6 23:56:05.973939 containerd[1469]: time="2025-07-06T23:56:05.973751595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.893 [INFO][4231] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.893 [INFO][4231] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" iface="eth0" netns="/var/run/netns/cni-0131b57a-e4eb-2f4f-0bc5-08c12663e3b6" Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.893 [INFO][4231] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" iface="eth0" netns="/var/run/netns/cni-0131b57a-e4eb-2f4f-0bc5-08c12663e3b6" Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.894 [INFO][4231] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" iface="eth0" netns="/var/run/netns/cni-0131b57a-e4eb-2f4f-0bc5-08c12663e3b6" Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.894 [INFO][4231] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.894 [INFO][4231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.953 [INFO][4245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" HandleID="k8s-pod-network.5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.956 [INFO][4245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.956 [INFO][4245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.968 [WARNING][4245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" HandleID="k8s-pod-network.5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.968 [INFO][4245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" HandleID="k8s-pod-network.5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.973 [INFO][4245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
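[Annotation] The release flow that just completed (acquire host-wide IPAM lock → release by HandleID → WARNING "Asked to release address but it doesn't exist. Ignoring" → release lock) is deliberately idempotent: a retried StopPodSandbox must not fail when the address is already gone. A minimal sketch of that pattern over an assumed in-memory store; the real datastore and lock are cluster-wide:

```go
package main

import (
	"fmt"
	"sync"
)

// handleStore is a stand-in for the IPAM datastore keyed by HandleID.
type handleStore struct {
	mu    sync.Mutex // plays the role of the host-wide IPAM lock
	addrs map[string]string
}

// Release frees the address recorded under handleID. Releasing a handle
// that is already gone is logged and ignored, mirroring the WARNING above,
// so teardown can be retried safely.
func (s *handleStore) Release(handleID string) {
	s.mu.Lock() // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock()
	if _, ok := s.addrs[handleID]; !ok {
		fmt.Println("Asked to release address but it doesn't exist. Ignoring")
		return
	}
	delete(s.addrs, handleID)
	fmt.Println("released", handleID) // "Released host-wide IPAM lock." follows via defer
}

func main() {
	s := &handleStore{addrs: map[string]string{"k8s-pod-network.5befe296": "192.168.70.1"}}
	s.Release("k8s-pod-network.5befe296") // first teardown releases the IP
	s.Release("k8s-pod-network.5befe296") // retry is a no-op, as in the log
}
```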
Jul 6 23:56:05.982189 containerd[1469]: 2025-07-06 23:56:05.978 [INFO][4231] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:05.982189 containerd[1469]: time="2025-07-06T23:56:05.982363777Z" level=info msg="TearDown network for sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\" successfully" Jul 6 23:56:05.982189 containerd[1469]: time="2025-07-06T23:56:05.982401164Z" level=info msg="StopPodSandbox for \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\" returns successfully" Jul 6 23:56:05.983690 containerd[1469]: time="2025-07-06T23:56:05.983245053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9x6m,Uid:7abd3305-5de3-4e82-84ee-e697b6b22043,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:06.068618 kubelet[2497]: E0706 23:56:06.068584 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:06.098754 kubelet[2497]: I0706 23:56:06.098561 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kxtht" podStartSLOduration=42.098539565 podStartE2EDuration="42.098539565s" podCreationTimestamp="2025-07-06 23:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:06.095302652 +0000 UTC m=+47.485262979" watchObservedRunningTime="2025-07-06 23:56:06.098539565 +0000 UTC m=+47.488499890" Jul 6 23:56:06.169851 systemd-networkd[1374]: cali92d42368c2a: Link UP Jul 6 23:56:06.171325 systemd-networkd[1374]: cali92d42368c2a: Gained carrier Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.045 [INFO][4269] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0 csi-node-driver- calico-system 7abd3305-5de3-4e82-84ee-e697b6b22043 982 0 2025-07-06 23:55:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-9-29085cf50e csi-node-driver-c9x6m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali92d42368c2a [] [] }} ContainerID="505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" Namespace="calico-system" Pod="csi-node-driver-c9x6m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.045 [INFO][4269] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" Namespace="calico-system" Pod="csi-node-driver-c9x6m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.093 [INFO][4283] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" HandleID="k8s-pod-network.505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" 
Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.093 [INFO][4283] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" HandleID="k8s-pod-network.505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-9-29085cf50e", "pod":"csi-node-driver-c9x6m", "timestamp":"2025-07-06 23:56:06.09311448 +0000 UTC"}, Hostname:"ci-4081.3.4-9-29085cf50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.093 [INFO][4283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.094 [INFO][4283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.094 [INFO][4283] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-9-29085cf50e' Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.114 [INFO][4283] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.122 [INFO][4283] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.133 [INFO][4283] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.137 [INFO][4283] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.141 [INFO][4283] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.141 [INFO][4283] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.143 [INFO][4283] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5 Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.148 [INFO][4283] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.157 [INFO][4283] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.3/26] block=192.168.70.0/26 handle="k8s-pod-network.505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.157 [INFO][4283] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.3/26] 
handle="k8s-pod-network.505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.157 [INFO][4283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:06.196061 containerd[1469]: 2025-07-06 23:56:06.157 [INFO][4283] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.3/26] IPv6=[] ContainerID="505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" HandleID="k8s-pod-network.505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:06.198983 containerd[1469]: 2025-07-06 23:56:06.160 [INFO][4269] cni-plugin/k8s.go 418: Populated endpoint ContainerID="505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" Namespace="calico-system" Pod="csi-node-driver-c9x6m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7abd3305-5de3-4e82-84ee-e697b6b22043", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"", Pod:"csi-node-driver-c9x6m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92d42368c2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:06.198983 containerd[1469]: 2025-07-06 23:56:06.160 [INFO][4269] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.3/32] ContainerID="505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" Namespace="calico-system" Pod="csi-node-driver-c9x6m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:06.198983 containerd[1469]: 2025-07-06 23:56:06.161 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92d42368c2a ContainerID="505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" Namespace="calico-system" Pod="csi-node-driver-c9x6m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:06.198983 containerd[1469]: 2025-07-06 23:56:06.170 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" Namespace="calico-system" Pod="csi-node-driver-c9x6m" 
WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:06.198983 containerd[1469]: 2025-07-06 23:56:06.171 [INFO][4269] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" Namespace="calico-system" Pod="csi-node-driver-c9x6m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7abd3305-5de3-4e82-84ee-e697b6b22043", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5", Pod:"csi-node-driver-c9x6m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92d42368c2a", MAC:"22:96:04:53:f1:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:06.198983 containerd[1469]: 2025-07-06 23:56:06.190 [INFO][4269] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5" Namespace="calico-system" Pod="csi-node-driver-c9x6m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:06.228514 containerd[1469]: time="2025-07-06T23:56:06.227595010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:06.228514 containerd[1469]: time="2025-07-06T23:56:06.227662170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:06.228514 containerd[1469]: time="2025-07-06T23:56:06.227678349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:06.228514 containerd[1469]: time="2025-07-06T23:56:06.227766747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:06.252386 systemd[1]: Started cri-containerd-505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5.scope - libcontainer container 505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5. 
Jul 6 23:56:06.293232 containerd[1469]: time="2025-07-06T23:56:06.293168303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9x6m,Uid:7abd3305-5de3-4e82-84ee-e697b6b22043,Namespace:calico-system,Attempt:1,} returns sandbox id \"505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5\"" Jul 6 23:56:06.367334 systemd[1]: run-netns-cni\x2d0131b57a\x2de4eb\x2d2f4f\x2d0bc5\x2d08c12663e3b6.mount: Deactivated successfully. Jul 6 23:56:06.775937 containerd[1469]: time="2025-07-06T23:56:06.775269191Z" level=info msg="StopPodSandbox for \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\"" Jul 6 23:56:06.875186 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.829 [INFO][4354] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.832 [INFO][4354] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" iface="eth0" netns="/var/run/netns/cni-215c8ecd-d9bb-335b-5599-c979731ab7df" Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.833 [INFO][4354] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" iface="eth0" netns="/var/run/netns/cni-215c8ecd-d9bb-335b-5599-c979731ab7df" Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.833 [INFO][4354] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" iface="eth0" netns="/var/run/netns/cni-215c8ecd-d9bb-335b-5599-c979731ab7df" Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.833 [INFO][4354] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.833 [INFO][4354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.858 [INFO][4361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" HandleID="k8s-pod-network.d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.858 [INFO][4361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.858 [INFO][4361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.870 [WARNING][4361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" HandleID="k8s-pod-network.d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.870 [INFO][4361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" HandleID="k8s-pod-network.d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.873 [INFO][4361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:06.879555 containerd[1469]: 2025-07-06 23:56:06.876 [INFO][4354] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:06.881287 containerd[1469]: time="2025-07-06T23:56:06.881194201Z" level=info msg="TearDown network for sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\" successfully" Jul 6 23:56:06.881287 containerd[1469]: time="2025-07-06T23:56:06.881231046Z" level=info msg="StopPodSandbox for \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\" returns successfully" Jul 6 23:56:06.884634 containerd[1469]: time="2025-07-06T23:56:06.882961426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7684c4899d-9vhnf,Uid:15633098-99cc-4da2-aa2e-7ce63afd2881,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:06.884056 systemd[1]: run-netns-cni\x2d215c8ecd\x2dd9bb\x2d335b\x2d5599\x2dc979731ab7df.mount: Deactivated successfully. 
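[Annotation] Earlier in this window, kubelet's pod_startup_latency_tracker reported podStartSLOduration=42.098539565 for coredns-7c65d6cfc9-kxtht with firstStartedPulling/lastFinishedPulling both zero; in that case the figure is simply watchObservedRunningTime minus podCreationTimestamp. A sketch of that arithmetic using the timestamps from the entry (this reproduces the number, it is not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry:
	// podCreationTimestamp and watchObservedRunningTime.
	created, _ := time.Parse(time.RFC3339Nano, "2025-07-06T23:55:24Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-07-06T23:56:06.098539565Z")

	// With no image-pull interval recorded, the SLO duration collapses to
	// watchObservedRunningTime - podCreationTimestamp.
	fmt.Println(running.Sub(created).Seconds()) // 42.098539565
}
```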
Jul 6 23:56:06.941407 systemd-networkd[1374]: caliec36ac81e75: Gained IPv6LL Jul 6 23:56:07.057370 systemd-networkd[1374]: cali8d2940d1b7d: Link UP Jul 6 23:56:07.058623 systemd-networkd[1374]: cali8d2940d1b7d: Gained carrier Jul 6 23:56:07.090101 kubelet[2497]: E0706 23:56:07.088754 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:06.961 [INFO][4368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0 calico-kube-controllers-7684c4899d- calico-system 15633098-99cc-4da2-aa2e-7ce63afd2881 997 0 2025-07-06 23:55:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7684c4899d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-9-29085cf50e calico-kube-controllers-7684c4899d-9vhnf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8d2940d1b7d [] [] }} ContainerID="293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" Namespace="calico-system" Pod="calico-kube-controllers-7684c4899d-9vhnf" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:06.961 [INFO][4368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" Namespace="calico-system" Pod="calico-kube-controllers-7684c4899d-9vhnf" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:06.993 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" HandleID="k8s-pod-network.293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:06.993 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" HandleID="k8s-pod-network.293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-9-29085cf50e", "pod":"calico-kube-controllers-7684c4899d-9vhnf", "timestamp":"2025-07-06 23:56:06.993230802 +0000 UTC"}, Hostname:"ci-4081.3.4-9-29085cf50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:06.993 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:06.993 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
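[Annotation] The assignArgs dump above shows the shape of an IPAM auto-assign request: one IPv4 and no IPv6 addresses, a per-workload HandleID, and attributes naming the namespace, node, and pod. A reduced struct mirroring just the fields visible in the log; Calico's real ipam.AutoAssignArgs carries more (pools, block limits, reserved attributes):

```go
package main

import "fmt"

// autoAssignArgs mirrors only the fields visible in the assignArgs dump
// above; it is a sketch, not Calico's ipam.AutoAssignArgs.
type autoAssignArgs struct {
	Num4, Num6 int
	HandleID   string
	Attrs      map[string]string
	Hostname   string
}

func main() {
	args := autoAssignArgs{
		Num4:     1, // one IPv4, no IPv6 -- matching the request in the log
		HandleID: "k8s-pod-network.293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51",
		Attrs: map[string]string{
			"namespace": "calico-system",
			"node":      "ci-4081.3.4-9-29085cf50e",
			"pod":       "calico-kube-controllers-7684c4899d-9vhnf",
		},
		Hostname: "ci-4081.3.4-9-29085cf50e",
	}
	fmt.Printf("%+v\n", args)
}
```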
Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:06.993 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-9-29085cf50e' Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.002 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.013 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.022 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.025 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.030 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.030 [INFO][4379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.032 [INFO][4379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51 Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.037 [INFO][4379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.047 [INFO][4379] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.4/26] block=192.168.70.0/26 handle="k8s-pod-network.293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.047 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.4/26] handle="k8s-pod-network.293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.047 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
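[Annotation] The claim sequence above (try affinity for 192.168.70.0/26 → load block → assign from block → write block back → claimed 192.168.70.4) behaves like a first-free-slot scan over a 64-address block, which is why the sandboxes in this section receive .2, .3, .4, and .5 in turn. A toy in-memory allocator showing that ordering; treating assignment as a low-to-high scan is an assumption drawn from the observed sequence, and the real block is persisted ("Writing block in order to claim IPs") so hosts race on the datastore write rather than on memory:

```go
package main

import (
	"fmt"
	"net/netip"
)

// block models a /26 IPAM block as a 64-slot bitmap over 192.168.70.0/26.
type block struct {
	base netip.Addr
	used [64]bool
}

// assign returns the first free address in the block, scanning from the
// low end -- which would yield the sequential .2, .3, .4, .5 seen above.
func (b *block) assign() (netip.Addr, bool) {
	for i, inUse := range b.used {
		if !inUse {
			b.used[i] = true
			addr := b.base
			for j := 0; j < i; j++ {
				addr = addr.Next()
			}
			return addr, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{base: netip.MustParseAddr("192.168.70.0")}
	b.used[0], b.used[1] = true, true // .0 and .1 assumed already taken
	for i := 0; i < 4; i++ {
		ip, _ := b.assign()
		fmt.Println(ip) // 192.168.70.2, .3, .4, .5
	}
}
```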
Jul 6 23:56:07.109353 containerd[1469]: 2025-07-06 23:56:07.047 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.4/26] IPv6=[] ContainerID="293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" HandleID="k8s-pod-network.293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:07.115560 containerd[1469]: 2025-07-06 23:56:07.051 [INFO][4368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" Namespace="calico-system" Pod="calico-kube-controllers-7684c4899d-9vhnf" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0", GenerateName:"calico-kube-controllers-7684c4899d-", Namespace:"calico-system", SelfLink:"", UID:"15633098-99cc-4da2-aa2e-7ce63afd2881", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7684c4899d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"", Pod:"calico-kube-controllers-7684c4899d-9vhnf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d2940d1b7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:07.115560 containerd[1469]: 2025-07-06 23:56:07.052 [INFO][4368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.4/32] ContainerID="293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" Namespace="calico-system" Pod="calico-kube-controllers-7684c4899d-9vhnf" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:07.115560 containerd[1469]: 2025-07-06 23:56:07.052 [INFO][4368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d2940d1b7d ContainerID="293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" Namespace="calico-system" Pod="calico-kube-controllers-7684c4899d-9vhnf" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:07.115560 containerd[1469]: 2025-07-06 23:56:07.072 [INFO][4368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" Namespace="calico-system" Pod="calico-kube-controllers-7684c4899d-9vhnf" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 
23:56:07.115560 containerd[1469]: 2025-07-06 23:56:07.075 [INFO][4368] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" Namespace="calico-system" Pod="calico-kube-controllers-7684c4899d-9vhnf" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0", GenerateName:"calico-kube-controllers-7684c4899d-", Namespace:"calico-system", SelfLink:"", UID:"15633098-99cc-4da2-aa2e-7ce63afd2881", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7684c4899d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51", Pod:"calico-kube-controllers-7684c4899d-9vhnf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d2940d1b7d", MAC:"06:fd:90:4c:c1:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:07.115560 containerd[1469]: 2025-07-06 23:56:07.098 [INFO][4368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51" Namespace="calico-system" Pod="calico-kube-controllers-7684c4899d-9vhnf" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:07.168497 containerd[1469]: time="2025-07-06T23:56:07.167886837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:07.168801 containerd[1469]: time="2025-07-06T23:56:07.168540176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:07.169511 containerd[1469]: time="2025-07-06T23:56:07.169125814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:07.170277 containerd[1469]: time="2025-07-06T23:56:07.170205898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:07.216934 systemd[1]: Started cri-containerd-293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51.scope - libcontainer container 293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51. 
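[Annotation] Each shim start in this window replays the same four "loading plugin io.containerd.*" lines; containerd discovers these through an init-time registry mapping plugin IDs to constructors. A stripped-down sketch of that pattern, assuming the trivial registry shape below rather than containerd's plugin package:

```go
package main

import "fmt"

// registry maps a plugin ID to its constructor, in the spirit of the
// "loading plugin \"io.containerd.ttrpc.v1.task\"..." lines above.
var registry = map[string]func() any{}

func register(id string, ctor func() any) { registry[id] = ctor }

func init() {
	// The four plugin IDs the runc v2 shim logs on every start.
	register("io.containerd.event.v1.publisher", func() any { return "publisher" })
	register("io.containerd.internal.v1.shutdown", func() any { return "shutdown" })
	register("io.containerd.ttrpc.v1.task", func() any { return "task service" })
	register("io.containerd.ttrpc.v1.pause", func() any { return "pause service" })
}

func main() {
	for id, ctor := range registry { // iteration order is unspecified
		fmt.Printf("loading plugin %q... -> %v\n", id, ctor())
	}
}
```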
Jul 6 23:56:07.314009 containerd[1469]: time="2025-07-06T23:56:07.313106639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7684c4899d-9vhnf,Uid:15633098-99cc-4da2-aa2e-7ce63afd2881,Namespace:calico-system,Attempt:1,} returns sandbox id \"293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51\"" Jul 6 23:56:07.782229 containerd[1469]: time="2025-07-06T23:56:07.781786418Z" level=info msg="StopPodSandbox for \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\"" Jul 6 23:56:07.782575 containerd[1469]: time="2025-07-06T23:56:07.782119978Z" level=info msg="StopPodSandbox for \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\"" Jul 6 23:56:07.800164 containerd[1469]: time="2025-07-06T23:56:07.782177962Z" level=info msg="StopPodSandbox for \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\"" Jul 6 23:56:07.804089 containerd[1469]: time="2025-07-06T23:56:07.803575470Z" level=info msg="StopPodSandbox for \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\"" Jul 6 23:56:07.835572 systemd-networkd[1374]: cali92d42368c2a: Gained IPv6LL Jul 6 23:56:08.109417 kubelet[2497]: E0706 23:56:08.109366 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:08.154259 systemd-networkd[1374]: cali8d2940d1b7d: Gained IPv6LL Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.010 [INFO][4486] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.016 [INFO][4486] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" iface="eth0" netns="/var/run/netns/cni-7577bdc5-5b38-a86e-04b5-6799e2600042" Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.017 [INFO][4486] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" iface="eth0" netns="/var/run/netns/cni-7577bdc5-5b38-a86e-04b5-6799e2600042" Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.019 [INFO][4486] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" iface="eth0" netns="/var/run/netns/cni-7577bdc5-5b38-a86e-04b5-6799e2600042" Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.019 [INFO][4486] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.019 [INFO][4486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.172 [INFO][4502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" HandleID="k8s-pod-network.91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.172 [INFO][4502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
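[Annotation] The recurring dns.go:153 error ("Nameserver limits were exceeded, some nameservers have been omitted") is kubelet clamping a pod's resolv.conf to the classic libc limit of three nameservers; the applied line it prints, "67.207.67.2 67.207.67.3 67.207.67.2", legitimately contains a repeated address because truncation happens before any deduplication. A sketch of the clamp under that assumption; kubelet's actual logic also handles search domains and options, which are omitted here:

```go
package main

import "fmt"

// maxNameservers mirrors the glibc resolv.conf limit that kubelet
// enforces when composing a pod's DNS config.
const maxNameservers = 3

// clampNameservers keeps at most maxNameservers entries and reports
// whether anything was dropped -- the condition behind dns.go:153 above.
func clampNameservers(ns []string) (applied []string, truncated bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Hypothetical host resolv.conf with too many entries; the repeat in
	// position three matches the applied line in the log.
	host := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "8.8.8.8"}
	applied, truncated := clampNameservers(host)
	if truncated {
		fmt.Println("Nameserver limits were exceeded, some nameservers have been omitted")
	}
	fmt.Println("applied nameserver line:", applied)
}
```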
Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.173 [INFO][4502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.191 [WARNING][4502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" HandleID="k8s-pod-network.91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.191 [INFO][4502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" HandleID="k8s-pod-network.91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.194 [INFO][4502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:08.206534 containerd[1469]: 2025-07-06 23:56:08.198 [INFO][4486] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:08.209742 containerd[1469]: time="2025-07-06T23:56:08.206653354Z" level=info msg="TearDown network for sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\" successfully" Jul 6 23:56:08.209742 containerd[1469]: time="2025-07-06T23:56:08.206686979Z" level=info msg="StopPodSandbox for \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\" returns successfully" Jul 6 23:56:08.210340 systemd[1]: run-netns-cni\x2d7577bdc5\x2d5b38\x2da86e\x2d04b5\x2d6799e2600042.mount: Deactivated successfully. Jul 6 23:56:08.212421 containerd[1469]: time="2025-07-06T23:56:08.211325725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c5f94cc-brq2m,Uid:b4d620c4-ff0d-4798-9fbc-b59167726f3d,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.046 [INFO][4476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.046 [INFO][4476] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" iface="eth0" netns="/var/run/netns/cni-48b93db4-e610-27c3-4b50-59349f285c30" Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.046 [INFO][4476] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" iface="eth0" netns="/var/run/netns/cni-48b93db4-e610-27c3-4b50-59349f285c30" Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.047 [INFO][4476] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" iface="eth0" netns="/var/run/netns/cni-48b93db4-e610-27c3-4b50-59349f285c30" Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.047 [INFO][4476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.047 [INFO][4476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.187 [INFO][4508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" HandleID="k8s-pod-network.70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.188 [INFO][4508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.194 [INFO][4508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.219 [WARNING][4508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" HandleID="k8s-pod-network.70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.219 [INFO][4508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" HandleID="k8s-pod-network.70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.222 [INFO][4508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:08.230365 containerd[1469]: 2025-07-06 23:56:08.226 [INFO][4476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:08.235828 containerd[1469]: time="2025-07-06T23:56:08.235665048Z" level=info msg="TearDown network for sandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\" successfully" Jul 6 23:56:08.235828 containerd[1469]: time="2025-07-06T23:56:08.235725410Z" level=info msg="StopPodSandbox for \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\" returns successfully" Jul 6 23:56:08.237080 systemd[1]: run-netns-cni\x2d48b93db4\x2de610\x2d27c3\x2d4b50\x2d59349f285c30.mount: Deactivated successfully. Jul 6 23:56:08.238762 containerd[1469]: time="2025-07-06T23:56:08.238413431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c5f94cc-28d9v,Uid:440a2155-cfea-4aaa-b248-ccfd5a0a677a,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.073 [INFO][4477] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.074 [INFO][4477] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" iface="eth0" netns="/var/run/netns/cni-f23cceb3-bd9f-568a-eb3f-bfce4a03e5df" Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.075 [INFO][4477] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" iface="eth0" netns="/var/run/netns/cni-f23cceb3-bd9f-568a-eb3f-bfce4a03e5df" Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.077 [INFO][4477] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" iface="eth0" netns="/var/run/netns/cni-f23cceb3-bd9f-568a-eb3f-bfce4a03e5df" Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.077 [INFO][4477] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.077 [INFO][4477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.208 [INFO][4514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" HandleID="k8s-pod-network.8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.212 [INFO][4514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.222 [INFO][4514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.240 [WARNING][4514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" HandleID="k8s-pod-network.8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.240 [INFO][4514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" HandleID="k8s-pod-network.8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.242 [INFO][4514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:08.256889 containerd[1469]: 2025-07-06 23:56:08.246 [INFO][4477] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:08.260872 containerd[1469]: time="2025-07-06T23:56:08.257052733Z" level=info msg="TearDown network for sandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\" successfully" Jul 6 23:56:08.260872 containerd[1469]: time="2025-07-06T23:56:08.260852342Z" level=info msg="StopPodSandbox for \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\" returns successfully" Jul 6 23:56:08.262755 kubelet[2497]: E0706 23:56:08.262699 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:08.264011 systemd[1]: run-netns-cni\x2df23cceb3\x2dbd9f\x2d568a\x2deb3f\x2dbfce4a03e5df.mount: Deactivated successfully. Jul 6 23:56:08.266090 containerd[1469]: time="2025-07-06T23:56:08.265099710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlr8x,Uid:18aa2971-3783-48e6-bae4-2b9283bfdea3,Namespace:kube-system,Attempt:1,}" Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.088 [INFO][4478] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.090 [INFO][4478] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" iface="eth0" netns="/var/run/netns/cni-14849091-6154-bb0d-68fd-f9765ce09137" Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.092 [INFO][4478] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" iface="eth0" netns="/var/run/netns/cni-14849091-6154-bb0d-68fd-f9765ce09137" Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.098 [INFO][4478] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" iface="eth0" netns="/var/run/netns/cni-14849091-6154-bb0d-68fd-f9765ce09137" Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.098 [INFO][4478] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.098 [INFO][4478] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.226 [INFO][4519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" HandleID="k8s-pod-network.4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.226 [INFO][4519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.242 [INFO][4519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.272 [WARNING][4519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" HandleID="k8s-pod-network.4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.272 [INFO][4519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" HandleID="k8s-pod-network.4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.278 [INFO][4519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:08.290039 containerd[1469]: 2025-07-06 23:56:08.283 [INFO][4478] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:08.292411 containerd[1469]: time="2025-07-06T23:56:08.292370900Z" level=info msg="TearDown network for sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\" successfully" Jul 6 23:56:08.295597 containerd[1469]: time="2025-07-06T23:56:08.295169563Z" level=info msg="StopPodSandbox for \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\" returns successfully" Jul 6 23:56:08.306240 containerd[1469]: time="2025-07-06T23:56:08.306199982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wb46p,Uid:4e9dee5d-e24d-4799-b79a-36586ddb42a9,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:08.371921 systemd[1]: run-netns-cni\x2d14849091\x2d6154\x2dbb0d\x2d68fd\x2df9765ce09137.mount: Deactivated successfully. Jul 6 23:56:08.632802 systemd-networkd[1374]: calieefb6de53c9: Link UP Jul 6 23:56:08.637110 systemd-networkd[1374]: calieefb6de53c9: Gained carrier Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.411 [INFO][4537] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0 calico-apiserver-6d4c5f94cc- calico-apiserver b4d620c4-ff0d-4798-9fbc-b59167726f3d 1016 0 2025-07-06 23:55:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d4c5f94cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-9-29085cf50e calico-apiserver-6d4c5f94cc-brq2m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieefb6de53c9 [] [] }} ContainerID="dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-brq2m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.413 [INFO][4537] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-brq2m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.524 [INFO][4589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" HandleID="k8s-pod-network.dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.524 [INFO][4589] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" HandleID="k8s-pod-network.dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000353f30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-9-29085cf50e", "pod":"calico-apiserver-6d4c5f94cc-brq2m", "timestamp":"2025-07-06 23:56:08.524734323 +0000 UTC"}, Hostname:"ci-4081.3.4-9-29085cf50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.524 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.525 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.527 [INFO][4589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-9-29085cf50e' Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.551 [INFO][4589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.560 [INFO][4589] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.571 [INFO][4589] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.576 [INFO][4589] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.581 [INFO][4589] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.581 [INFO][4589] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.585 [INFO][4589] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.599 [INFO][4589] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.613 [INFO][4589] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.5/26] block=192.168.70.0/26 handle="k8s-pod-network.dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" 
host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.614 [INFO][4589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.5/26] handle="k8s-pod-network.dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.614 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:08.683344 containerd[1469]: 2025-07-06 23:56:08.614 [INFO][4589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.5/26] IPv6=[] ContainerID="dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" HandleID="k8s-pod-network.dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:08.684426 containerd[1469]: 2025-07-06 23:56:08.627 [INFO][4537] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-brq2m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0", GenerateName:"calico-apiserver-6d4c5f94cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4d620c4-ff0d-4798-9fbc-b59167726f3d", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c5f94cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"", Pod:"calico-apiserver-6d4c5f94cc-brq2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieefb6de53c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:08.684426 containerd[1469]: 2025-07-06 23:56:08.627 [INFO][4537] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.5/32] ContainerID="dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-brq2m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:08.684426 containerd[1469]: 2025-07-06 23:56:08.627 [INFO][4537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieefb6de53c9 ContainerID="dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-brq2m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:08.684426 
containerd[1469]: 2025-07-06 23:56:08.636 [INFO][4537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-brq2m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:08.684426 containerd[1469]: 2025-07-06 23:56:08.639 [INFO][4537] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-brq2m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0", GenerateName:"calico-apiserver-6d4c5f94cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4d620c4-ff0d-4798-9fbc-b59167726f3d", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c5f94cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab", Pod:"calico-apiserver-6d4c5f94cc-brq2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieefb6de53c9", MAC:"46:0e:43:9c:a9:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:08.684426 containerd[1469]: 2025-07-06 23:56:08.676 [INFO][4537] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-brq2m" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:08.805445 containerd[1469]: time="2025-07-06T23:56:08.804480240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:08.805445 containerd[1469]: time="2025-07-06T23:56:08.804554781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:08.805445 containerd[1469]: time="2025-07-06T23:56:08.804569729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:08.805445 containerd[1469]: time="2025-07-06T23:56:08.804665343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:08.821606 systemd-networkd[1374]: calic9975c01ad0: Link UP Jul 6 23:56:08.823700 systemd-networkd[1374]: calic9975c01ad0: Gained carrier Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.460 [INFO][4572] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0 goldmane-58fd7646b9- calico-system 4e9dee5d-e24d-4799-b79a-36586ddb42a9 1020 0 2025-07-06 23:55:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-9-29085cf50e goldmane-58fd7646b9-wb46p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic9975c01ad0 [] [] }} ContainerID="c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-wb46p" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.460 [INFO][4572] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-wb46p" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.536 [INFO][4601] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" HandleID="k8s-pod-network.c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.536 [INFO][4601] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" HandleID="k8s-pod-network.c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-9-29085cf50e", "pod":"goldmane-58fd7646b9-wb46p", "timestamp":"2025-07-06 23:56:08.536512984 +0000 UTC"}, Hostname:"ci-4081.3.4-9-29085cf50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.537 [INFO][4601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.614 [INFO][4601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
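
The interleaved IPAM requests above ([4589], [4601], [4594] and [4599]) all funnel through the same "About to acquire / Acquired / Released host-wide IPAM lock" sequence, so only one CNI ADD or DEL can touch the address blocks at a time. Below is a minimal sketch of such a host-wide lock built on flock(2); the lock path and the withHostIPAMLock helper are hypothetical, not Calico's actual implementation.

    // hostlock.go: sketch of a host-wide IPAM lock using flock(2).
    // The lock path is hypothetical; Calico's real code differs.
    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    const lockPath = "/run/example-ipam.lock" // hypothetical path

    // withHostIPAMLock runs fn under an exclusive advisory lock, mirroring
    // the "About to acquire / Acquired / Released" entries in the log.
    func withHostIPAMLock(fn func() error) error {
        f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()

        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        fmt.Println("Acquired host-wide IPAM lock.")
        defer func() {
            syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
            fmt.Println("Released host-wide IPAM lock.")
        }()
        return fn() // assign or release addresses while holding the lock
    }

    func main() {
        _ = withHostIPAMLock(func() error { return nil })
    }
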
Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.615 [INFO][4601] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-9-29085cf50e' Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.654 [INFO][4601] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.686 [INFO][4601] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.706 [INFO][4601] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.734 [INFO][4601] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.743 [INFO][4601] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.743 [INFO][4601] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.747 [INFO][4601] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4 Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.762 [INFO][4601] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.789 [INFO][4601] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.6/26] block=192.168.70.0/26 handle="k8s-pod-network.c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.792 [INFO][4601] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.6/26] handle="k8s-pod-network.c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.792 [INFO][4601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
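
Each assignment above follows the same shape: confirm the host's affinity for block 192.168.70.0/26, load the block, then claim the first free address from it (192.168.70.5 at the top of this window, 192.168.70.6 here). The sketch below shows a first-free scan over such a /26 with net/netip; the allocator is illustrative, not Calico's code, and the assumption that .0 through .4 were handed out earlier in the boot (consistent with .5 being claimed next) is mine.

    // blockalloc.go: sketch of claiming the first free address from an
    // affine /26 block, as the ipam.go entries above trace.
    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree scans block in address order and returns the first address
    // not present in used.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false // block exhausted
    }

    func main() {
        block := netip.MustParsePrefix("192.168.70.0/26") // 64 addresses
        used := map[netip.Addr]bool{}
        // Assumption: .0 through .4 were taken before this log window.
        for a, i := block.Addr(), 0; i < 5; a, i = a.Next(), i+1 {
            used[a] = true
        }
        if a, ok := nextFree(block, used); ok {
            fmt.Println("claimed", a) // claimed 192.168.70.5; .6 follows next
        }
    }
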
Jul 6 23:56:08.883265 containerd[1469]: 2025-07-06 23:56:08.792 [INFO][4601] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.6/26] IPv6=[] ContainerID="c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" HandleID="k8s-pod-network.c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:08.886559 containerd[1469]: 2025-07-06 23:56:08.811 [INFO][4572] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-wb46p" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4e9dee5d-e24d-4799-b79a-36586ddb42a9", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"", Pod:"goldmane-58fd7646b9-wb46p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9975c01ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:08.886559 containerd[1469]: 2025-07-06 23:56:08.813 [INFO][4572] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.6/32] ContainerID="c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-wb46p" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:08.886559 containerd[1469]: 2025-07-06 23:56:08.813 [INFO][4572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9975c01ad0 ContainerID="c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-wb46p" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:08.886559 containerd[1469]: 2025-07-06 23:56:08.823 [INFO][4572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-wb46p" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:08.886559 containerd[1469]: 2025-07-06 23:56:08.828 [INFO][4572] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" Namespace="calico-system" 
Pod="goldmane-58fd7646b9-wb46p" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4e9dee5d-e24d-4799-b79a-36586ddb42a9", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4", Pod:"goldmane-58fd7646b9-wb46p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9975c01ad0", MAC:"e2:8a:32:af:06:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:08.886559 containerd[1469]: 2025-07-06 23:56:08.857 [INFO][4572] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4" Namespace="calico-system" Pod="goldmane-58fd7646b9-wb46p" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:08.918719 systemd-networkd[1374]: cali8eca972de9d: Link UP Jul 6 23:56:08.921476 systemd-networkd[1374]: cali8eca972de9d: Gained carrier Jul 6 23:56:08.972435 systemd[1]: Started cri-containerd-dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab.scope - libcontainer container dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab. 
Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.438 [INFO][4541] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0 coredns-7c65d6cfc9- kube-system 18aa2971-3783-48e6-bae4-2b9283bfdea3 1019 0 2025-07-06 23:55:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-9-29085cf50e coredns-7c65d6cfc9-hlr8x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8eca972de9d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlr8x" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.438 [INFO][4541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlr8x" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.589 [INFO][4594] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" HandleID="k8s-pod-network.5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.591 [INFO][4594] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" HandleID="k8s-pod-network.5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028a6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-9-29085cf50e", "pod":"coredns-7c65d6cfc9-hlr8x", "timestamp":"2025-07-06 23:56:08.589794574 +0000 UTC"}, Hostname:"ci-4081.3.4-9-29085cf50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.591 [INFO][4594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.793 [INFO][4594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.793 [INFO][4594] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-9-29085cf50e' Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.811 [INFO][4594] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.829 [INFO][4594] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.850 [INFO][4594] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.859 [INFO][4594] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.867 [INFO][4594] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.867 [INFO][4594] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.870 [INFO][4594] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.881 [INFO][4594] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.894 [INFO][4594] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.7/26] block=192.168.70.0/26 handle="k8s-pod-network.5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.894 [INFO][4594] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.7/26] handle="k8s-pod-network.5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.894 [INFO][4594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:56:08.997799 containerd[1469]: 2025-07-06 23:56:08.894 [INFO][4594] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.7/26] IPv6=[] ContainerID="5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" HandleID="k8s-pod-network.5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:09.001644 containerd[1469]: 2025-07-06 23:56:08.906 [INFO][4541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlr8x" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18aa2971-3783-48e6-bae4-2b9283bfdea3", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"", Pod:"coredns-7c65d6cfc9-hlr8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eca972de9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:09.001644 containerd[1469]: 2025-07-06 23:56:08.909 [INFO][4541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.7/32] ContainerID="5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlr8x" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:09.001644 containerd[1469]: 2025-07-06 23:56:08.909 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8eca972de9d ContainerID="5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlr8x" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:09.001644 containerd[1469]: 2025-07-06 23:56:08.944 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlr8x" 
WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:09.001644 containerd[1469]: 2025-07-06 23:56:08.948 [INFO][4541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlr8x" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18aa2971-3783-48e6-bae4-2b9283bfdea3", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d", Pod:"coredns-7c65d6cfc9-hlr8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eca972de9d", MAC:"4a:f3:6d:77:ed:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:09.001644 containerd[1469]: 2025-07-06 23:56:08.986 [INFO][4541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hlr8x" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:09.027750 containerd[1469]: time="2025-07-06T23:56:09.024728913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:09.027750 containerd[1469]: time="2025-07-06T23:56:09.025271793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:09.027750 containerd[1469]: time="2025-07-06T23:56:09.025301132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:09.027750 containerd[1469]: time="2025-07-06T23:56:09.026518567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:09.046256 systemd-networkd[1374]: cali6c3e8d88057: Link UP Jul 6 23:56:09.054429 systemd-networkd[1374]: cali6c3e8d88057: Gained carrier Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.446 [INFO][4554] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0 calico-apiserver-6d4c5f94cc- calico-apiserver 440a2155-cfea-4aaa-b248-ccfd5a0a677a 1017 0 2025-07-06 23:55:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d4c5f94cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-9-29085cf50e calico-apiserver-6d4c5f94cc-28d9v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6c3e8d88057 [] [] }} ContainerID="02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-28d9v" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.447 [INFO][4554] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-28d9v" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.602 [INFO][4599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" HandleID="k8s-pod-network.02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.603 [INFO][4599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" HandleID="k8s-pod-network.02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-9-29085cf50e", "pod":"calico-apiserver-6d4c5f94cc-28d9v", "timestamp":"2025-07-06 23:56:08.602453413 +0000 UTC"}, Hostname:"ci-4081.3.4-9-29085cf50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.603 [INFO][4599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.896 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.899 [INFO][4599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-9-29085cf50e' Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.913 [INFO][4599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.943 [INFO][4599] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.961 [INFO][4599] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.969 [INFO][4599] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.978 [INFO][4599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.979 [INFO][4599] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:08.988 [INFO][4599] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4 Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:09.004 [INFO][4599] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:09.021 [INFO][4599] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.8/26] block=192.168.70.0/26 handle="k8s-pod-network.02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:09.021 [INFO][4599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.8/26] handle="k8s-pod-network.02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" host="ci-4081.3.4-9-29085cf50e" Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:09.021 [INFO][4599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
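
With this fourth claim the window closes out at 192.168.70.5 through 192.168.70.8 for calico-apiserver-6d4c5f94cc-brq2m, goldmane-58fd7646b9-wb46p, coredns-7c65d6cfc9-hlr8x and calico-apiserver-6d4c5f94cc-28d9v respectively: four consecutive addresses from the single block 192.168.70.0/26 affine to ci-4081.3.4-9-29085cf50e, exactly what a first-free scan under the host-wide lock produces, and why no two concurrent requests can race for the same address. A /26 spans 64 addresses, so the node has headroom for several dozen more workloads before IPAM (MaxBlocksPerHost:0, i.e. unlimited, in the requests above) would need to claim a second block.
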
Jul 6 23:56:09.131767 containerd[1469]: 2025-07-06 23:56:09.022 [INFO][4599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.8/26] IPv6=[] ContainerID="02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" HandleID="k8s-pod-network.02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:09.132704 containerd[1469]: 2025-07-06 23:56:09.034 [INFO][4554] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-28d9v" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0", GenerateName:"calico-apiserver-6d4c5f94cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"440a2155-cfea-4aaa-b248-ccfd5a0a677a", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c5f94cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"", Pod:"calico-apiserver-6d4c5f94cc-28d9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c3e8d88057", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:09.132704 containerd[1469]: 2025-07-06 23:56:09.034 [INFO][4554] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.8/32] ContainerID="02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-28d9v" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:09.132704 containerd[1469]: 2025-07-06 23:56:09.034 [INFO][4554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c3e8d88057 ContainerID="02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-28d9v" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:09.132704 containerd[1469]: 2025-07-06 23:56:09.056 [INFO][4554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-28d9v" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:09.132704 containerd[1469]: 2025-07-06 23:56:09.070 [INFO][4554] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-28d9v" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0", GenerateName:"calico-apiserver-6d4c5f94cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"440a2155-cfea-4aaa-b248-ccfd5a0a677a", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c5f94cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4", Pod:"calico-apiserver-6d4c5f94cc-28d9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c3e8d88057", MAC:"52:5f:6c:13:06:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:09.132704 containerd[1469]: 2025-07-06 23:56:09.106 [INFO][4554] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4" Namespace="calico-apiserver" Pod="calico-apiserver-6d4c5f94cc-28d9v" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:09.137558 systemd[1]: Started cri-containerd-c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4.scope - libcontainer container c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4. Jul 6 23:56:09.170091 containerd[1469]: time="2025-07-06T23:56:09.169715061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:09.170842 containerd[1469]: time="2025-07-06T23:56:09.169967332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:09.170842 containerd[1469]: time="2025-07-06T23:56:09.170117310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:09.170842 containerd[1469]: time="2025-07-06T23:56:09.170724285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:09.241328 systemd[1]: Started cri-containerd-5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d.scope - libcontainer container 5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d. Jul 6 23:56:09.260922 containerd[1469]: time="2025-07-06T23:56:09.260712194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:09.260922 containerd[1469]: time="2025-07-06T23:56:09.260795434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:09.260922 containerd[1469]: time="2025-07-06T23:56:09.260822761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:09.262451 containerd[1469]: time="2025-07-06T23:56:09.260943342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:09.338380 systemd[1]: Started cri-containerd-02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4.scope - libcontainer container 02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4. Jul 6 23:56:09.449689 containerd[1469]: time="2025-07-06T23:56:09.449257342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hlr8x,Uid:18aa2971-3783-48e6-bae4-2b9283bfdea3,Namespace:kube-system,Attempt:1,} returns sandbox id \"5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d\"" Jul 6 23:56:09.455964 kubelet[2497]: E0706 23:56:09.455690 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:09.470638 containerd[1469]: time="2025-07-06T23:56:09.470377321Z" level=info msg="CreateContainer within sandbox \"5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:56:09.487875 containerd[1469]: time="2025-07-06T23:56:09.487759872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wb46p,Uid:4e9dee5d-e24d-4799-b79a-36586ddb42a9,Namespace:calico-system,Attempt:1,} returns sandbox id \"c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4\"" Jul 6 23:56:09.522979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447284154.mount: Deactivated successfully. 
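
The recurring four-entry run of "loading plugin io.containerd.event.v1.publisher / io.containerd.internal.v1.shutdown / io.containerd.ttrpc.v1.task / io.containerd.ttrpc.v1.pause" appears to mark a fresh runc v2 shim coming up: one quadruple shows up alongside each "Started cri-containerd-<id>.scope" unit as the four new sandboxes start.
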
Jul 6 23:56:09.524337 containerd[1469]: time="2025-07-06T23:56:09.524292602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c5f94cc-brq2m,Uid:b4d620c4-ff0d-4798-9fbc-b59167726f3d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab\"" Jul 6 23:56:09.536624 containerd[1469]: time="2025-07-06T23:56:09.536573396Z" level=info msg="CreateContainer within sandbox \"5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8bf2a1ec387ae86557c00a6e68e1223576e540536694ee8b56199b3213624851\"" Jul 6 23:56:09.539028 containerd[1469]: time="2025-07-06T23:56:09.537851518Z" level=info msg="StartContainer for \"8bf2a1ec387ae86557c00a6e68e1223576e540536694ee8b56199b3213624851\"" Jul 6 23:56:09.660591 systemd[1]: Started cri-containerd-8bf2a1ec387ae86557c00a6e68e1223576e540536694ee8b56199b3213624851.scope - libcontainer container 8bf2a1ec387ae86557c00a6e68e1223576e540536694ee8b56199b3213624851. Jul 6 23:56:09.768384 containerd[1469]: time="2025-07-06T23:56:09.768032042Z" level=info msg="StartContainer for \"8bf2a1ec387ae86557c00a6e68e1223576e540536694ee8b56199b3213624851\" returns successfully" Jul 6 23:56:09.811203 containerd[1469]: time="2025-07-06T23:56:09.811162748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4c5f94cc-28d9v,Uid:440a2155-cfea-4aaa-b248-ccfd5a0a677a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4\"" Jul 6 23:56:09.946358 systemd-networkd[1374]: calic9975c01ad0: Gained IPv6LL Jul 6 23:56:10.138754 kubelet[2497]: E0706 23:56:10.138309 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:10.203857 systemd-networkd[1374]: cali8eca972de9d: Gained IPv6LL Jul 6 23:56:10.233761 kubelet[2497]: I0706 23:56:10.233202 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hlr8x" podStartSLOduration=46.233171764 podStartE2EDuration="46.233171764s" podCreationTimestamp="2025-07-06 23:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:10.176924226 +0000 UTC m=+51.566884550" watchObservedRunningTime="2025-07-06 23:56:10.233171764 +0000 UTC m=+51.623132090" Jul 6 23:56:10.387203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630574713.mount: Deactivated successfully. 
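
The kubelet's dns.go:153 error repeats through this whole window because the resolv.conf it assembles carries more nameserver entries than the classic glibc resolver limit of three, so it applies the first three and reports the rest as omitted. The applied line "67.207.67.2 67.207.67.3 67.207.67.2" still contains a duplicate, which suggests the truncation happens without deduplication, though the log is the only evidence here. A sketch of that capping behavior (hypothetical code, not the kubelet's):

    // dnscap.go: sketch of capping nameservers at the glibc resolver
    // limit of three, the behavior behind the "Nameserver limits
    // exceeded" entries above. Not the kubelet's implementation.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS

    // capNameservers collects nameserver lines from a resolv.conf body and
    // splits them into the applied prefix and the omitted remainder.
    func capNameservers(resolvConf string) (applied, omitted []string) {
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        var all []string
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                all = append(all, fields[1])
            }
        }
        if len(all) <= maxNameservers {
            return all, nil
        }
        return all[:maxNameservers], all[maxNameservers:]
    }

    func main() {
        // Hypothetical input reproducing the duplicate seen in the log.
        conf := "nameserver 67.207.67.2\nnameserver 67.207.67.3\n" +
            "nameserver 67.207.67.2\nnameserver 1.1.1.1\n"
        applied, omitted := capNameservers(conf)
        fmt.Println("applied:", strings.Join(applied, " ")) // 67.207.67.2 67.207.67.3 67.207.67.2
        fmt.Println("omitted:", omitted)
    }
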
Jul 6 23:56:10.395318 systemd-networkd[1374]: calieefb6de53c9: Gained IPv6LL Jul 6 23:56:10.404213 containerd[1469]: time="2025-07-06T23:56:10.404149259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:10.405109 containerd[1469]: time="2025-07-06T23:56:10.405031524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 6 23:56:10.406515 containerd[1469]: time="2025-07-06T23:56:10.405816768Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:10.408574 containerd[1469]: time="2025-07-06T23:56:10.408534516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:10.410135 containerd[1469]: time="2025-07-06T23:56:10.410019254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.436220482s" Jul 6 23:56:10.410135 containerd[1469]: time="2025-07-06T23:56:10.410056268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 6 23:56:10.414829 containerd[1469]: time="2025-07-06T23:56:10.414784057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:56:10.415645 containerd[1469]: time="2025-07-06T23:56:10.415417122Z" level=info msg="CreateContainer within sandbox \"d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:56:10.437992 containerd[1469]: time="2025-07-06T23:56:10.437931810Z" level=info msg="CreateContainer within sandbox \"d65af24a1f434ccc146eacbde3d2400b04086ff4b7306ae91bdda0186ff1b7f3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"070773bf5531111271dd524cef08cab608f82923e19d3e6f9a692c382e05bfbd\"" Jul 6 23:56:10.441124 containerd[1469]: time="2025-07-06T23:56:10.440977137Z" level=info msg="StartContainer for \"070773bf5531111271dd524cef08cab608f82923e19d3e6f9a692c382e05bfbd\"" Jul 6 23:56:10.511406 systemd[1]: Started cri-containerd-070773bf5531111271dd524cef08cab608f82923e19d3e6f9a692c382e05bfbd.scope - libcontainer container 070773bf5531111271dd524cef08cab608f82923e19d3e6f9a692c382e05bfbd. 
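
The "Gained IPv6LL" entries from systemd-networkd record each cali interface acquiring its kernel-generated fe80::/64 link-local address. Under the default EUI-64 generation mode that address is a pure function of the MAC, and the sketch below derives it from the MAC logged for calieefb6de53c9's endpoint earlier (46:0e:43:9c:a9:df); that this particular interface used EUI-64 rather than a stable-privacy mode is an assumption.

    // lladdr.go: sketch of the EUI-64 derivation behind the "Gained
    // IPv6LL" entries, assuming the kernel's default eui64 address
    // generation mode.
    package main

    import (
        "fmt"
        "net"
    )

    // linkLocalFromMAC builds fe80::/64 + EUI-64: flip the U/L bit of the
    // first octet and splice ff:fe into the middle of the 48-bit MAC.
    func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, net.IPv6len)
        ip[0], ip[1] = 0xfe, 0x80
        ip[8] = mac[0] ^ 0x02
        ip[9], ip[10] = mac[1], mac[2]
        ip[11], ip[12] = 0xff, 0xfe
        ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
        return ip
    }

    func main() {
        mac, _ := net.ParseMAC("46:0e:43:9c:a9:df") // from the log above
        fmt.Println(linkLocalFromMAC(mac))          // fe80::440e:43ff:fe9c:a9df
    }
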
Jul 6 23:56:10.563289 containerd[1469]: time="2025-07-06T23:56:10.563240092Z" level=info msg="StartContainer for \"070773bf5531111271dd524cef08cab608f82923e19d3e6f9a692c382e05bfbd\" returns successfully" Jul 6 23:56:10.906440 systemd-networkd[1374]: cali6c3e8d88057: Gained IPv6LL Jul 6 23:56:11.178111 kubelet[2497]: E0706 23:56:11.177667 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:11.367403 systemd[1]: Started sshd@7-146.190.157.121:22-139.178.89.65:57680.service - OpenSSH per-connection server daemon (139.178.89.65:57680). Jul 6 23:56:11.451611 sshd[4921]: Accepted publickey for core from 139.178.89.65 port 57680 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:56:11.454330 sshd[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:11.467471 systemd-logind[1443]: New session 8 of user core. Jul 6 23:56:11.474498 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:56:12.099578 containerd[1469]: time="2025-07-06T23:56:12.098397542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:12.102118 containerd[1469]: time="2025-07-06T23:56:12.101830768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 6 23:56:12.103940 containerd[1469]: time="2025-07-06T23:56:12.103849398Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:12.108152 containerd[1469]: time="2025-07-06T23:56:12.108038373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:12.110093 containerd[1469]: time="2025-07-06T23:56:12.108848543Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.694020313s" Jul 6 23:56:12.110093 containerd[1469]: time="2025-07-06T23:56:12.109895366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 6 23:56:12.112008 containerd[1469]: time="2025-07-06T23:56:12.111980848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:56:12.127578 containerd[1469]: time="2025-07-06T23:56:12.127443960Z" level=info msg="CreateContainer within sandbox \"505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:56:12.156089 containerd[1469]: time="2025-07-06T23:56:12.153202603Z" level=info msg="CreateContainer within sandbox \"505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"925a2b66c46818347387d30286afbd7429af5b067eff7b143b0b066307337c16\"" Jul 6 23:56:12.158092 containerd[1469]: time="2025-07-06T23:56:12.156669712Z" 
level=info msg="StartContainer for \"925a2b66c46818347387d30286afbd7429af5b067eff7b143b0b066307337c16\"" Jul 6 23:56:12.156925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857235920.mount: Deactivated successfully. Jul 6 23:56:12.183031 kubelet[2497]: E0706 23:56:12.182990 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:12.236338 systemd[1]: Started cri-containerd-925a2b66c46818347387d30286afbd7429af5b067eff7b143b0b066307337c16.scope - libcontainer container 925a2b66c46818347387d30286afbd7429af5b067eff7b143b0b066307337c16. Jul 6 23:56:12.287120 sshd[4921]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:12.302478 systemd[1]: sshd@7-146.190.157.121:22-139.178.89.65:57680.service: Deactivated successfully. Jul 6 23:56:12.305343 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:56:12.308909 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:56:12.314049 systemd-logind[1443]: Removed session 8. Jul 6 23:56:12.316683 containerd[1469]: time="2025-07-06T23:56:12.315111072Z" level=info msg="StartContainer for \"925a2b66c46818347387d30286afbd7429af5b067eff7b143b0b066307337c16\" returns successfully" Jul 6 23:56:15.386035 containerd[1469]: time="2025-07-06T23:56:15.385865092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:15.388152 containerd[1469]: time="2025-07-06T23:56:15.387677034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 6 23:56:15.388152 containerd[1469]: time="2025-07-06T23:56:15.387818179Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:15.407432 containerd[1469]: time="2025-07-06T23:56:15.407374471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:15.408855 containerd[1469]: time="2025-07-06T23:56:15.408675527Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.296493204s" Jul 6 23:56:15.408855 containerd[1469]: time="2025-07-06T23:56:15.408726261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 6 23:56:15.412634 containerd[1469]: time="2025-07-06T23:56:15.412358376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:56:15.439987 containerd[1469]: time="2025-07-06T23:56:15.439255095Z" level=info msg="CreateContainer within sandbox \"293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:56:15.455276 containerd[1469]: time="2025-07-06T23:56:15.455224790Z" 
level=info msg="CreateContainer within sandbox \"293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"218150b22ce60ffdbd724d7fdb2fff742f9dd771d27be7bd76aa766a0077f416\"" Jul 6 23:56:15.456721 containerd[1469]: time="2025-07-06T23:56:15.456584844Z" level=info msg="StartContainer for \"218150b22ce60ffdbd724d7fdb2fff742f9dd771d27be7bd76aa766a0077f416\"" Jul 6 23:56:15.575401 systemd[1]: Started cri-containerd-218150b22ce60ffdbd724d7fdb2fff742f9dd771d27be7bd76aa766a0077f416.scope - libcontainer container 218150b22ce60ffdbd724d7fdb2fff742f9dd771d27be7bd76aa766a0077f416. Jul 6 23:56:15.641639 containerd[1469]: time="2025-07-06T23:56:15.641490605Z" level=info msg="StartContainer for \"218150b22ce60ffdbd724d7fdb2fff742f9dd771d27be7bd76aa766a0077f416\" returns successfully" Jul 6 23:56:16.238435 kubelet[2497]: I0706 23:56:16.238134 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7684c4899d-9vhnf" podStartSLOduration=27.146065308 podStartE2EDuration="35.238106481s" podCreationTimestamp="2025-07-06 23:55:41 +0000 UTC" firstStartedPulling="2025-07-06 23:56:07.31814736 +0000 UTC m=+48.708107669" lastFinishedPulling="2025-07-06 23:56:15.410188526 +0000 UTC m=+56.800148842" observedRunningTime="2025-07-06 23:56:16.235399734 +0000 UTC m=+57.625360058" watchObservedRunningTime="2025-07-06 23:56:16.238106481 +0000 UTC m=+57.628066805" Jul 6 23:56:16.238435 kubelet[2497]: I0706 23:56:16.238392 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7559c46b4d-d7wjd" podStartSLOduration=6.871352836 podStartE2EDuration="13.238380009s" podCreationTimestamp="2025-07-06 23:56:03 +0000 UTC" firstStartedPulling="2025-07-06 23:56:04.044314717 +0000 UTC m=+45.434275034" lastFinishedPulling="2025-07-06 23:56:10.41134188 +0000 UTC m=+51.801302207" observedRunningTime="2025-07-06 23:56:11.198142643 +0000 UTC m=+52.588102978" watchObservedRunningTime="2025-07-06 23:56:16.238380009 +0000 UTC m=+57.628340333" Jul 6 23:56:17.302959 systemd[1]: Started sshd@8-146.190.157.121:22-139.178.89.65:57682.service - OpenSSH per-connection server daemon (139.178.89.65:57682). Jul 6 23:56:17.426922 sshd[5054]: Accepted publickey for core from 139.178.89.65 port 57682 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:56:17.429828 sshd[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:17.438776 systemd-logind[1443]: New session 9 of user core. Jul 6 23:56:17.445485 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:56:18.033488 sshd[5054]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:18.041533 systemd[1]: sshd@8-146.190.157.121:22-139.178.89.65:57682.service: Deactivated successfully. Jul 6 23:56:18.047712 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:56:18.050564 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:56:18.052700 systemd-logind[1443]: Removed session 9. Jul 6 23:56:18.661886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215233767.mount: Deactivated successfully. 
Jul 6 23:56:19.016839 containerd[1469]: time="2025-07-06T23:56:19.016332499Z" level=info msg="StopPodSandbox for \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\"" Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.251 [WARNING][5109] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0", GenerateName:"calico-apiserver-6d4c5f94cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4d620c4-ff0d-4798-9fbc-b59167726f3d", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c5f94cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab", Pod:"calico-apiserver-6d4c5f94cc-brq2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieefb6de53c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.253 [INFO][5109] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.253 [INFO][5109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" iface="eth0" netns="" Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.253 [INFO][5109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.253 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.509 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" HandleID="k8s-pod-network.91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.513 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.513 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.534 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" HandleID="k8s-pod-network.91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.535 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" HandleID="k8s-pod-network.91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.537 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:19.548415 containerd[1469]: 2025-07-06 23:56:19.543 [INFO][5109] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:19.578294 containerd[1469]: time="2025-07-06T23:56:19.550527154Z" level=info msg="TearDown network for sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\" successfully" Jul 6 23:56:19.578294 containerd[1469]: time="2025-07-06T23:56:19.577829195Z" level=info msg="StopPodSandbox for \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\" returns successfully" Jul 6 23:56:19.749280 containerd[1469]: time="2025-07-06T23:56:19.748904299Z" level=info msg="RemovePodSandbox for \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\"" Jul 6 23:56:19.752424 containerd[1469]: time="2025-07-06T23:56:19.752048446Z" level=info msg="Forcibly stopping sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\"" Jul 6 23:56:19.880868 containerd[1469]: time="2025-07-06T23:56:19.880208251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:19.882943 containerd[1469]: time="2025-07-06T23:56:19.882812998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 6 23:56:19.883779 containerd[1469]: time="2025-07-06T23:56:19.883746856Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:19.898952 containerd[1469]: time="2025-07-06T23:56:19.898464325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:19.901089 containerd[1469]: time="2025-07-06T23:56:19.900865705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.48845719s" Jul 6 23:56:19.901089 containerd[1469]: time="2025-07-06T23:56:19.900928702Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 6 23:56:19.907534 containerd[1469]: time="2025-07-06T23:56:19.904781686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:56:19.907534 containerd[1469]: time="2025-07-06T23:56:19.905871864Z" level=info msg="CreateContainer within sandbox \"c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:56:19.936421 containerd[1469]: time="2025-07-06T23:56:19.934812557Z" level=info msg="CreateContainer within sandbox \"c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"51ef14d7bd1cf4c97c69d3df8e89b7555ea31ec36d5ab349933fb43429d0b6fb\"" Jul 6 23:56:19.940325 containerd[1469]: time="2025-07-06T23:56:19.940281116Z" level=info msg="StartContainer for \"51ef14d7bd1cf4c97c69d3df8e89b7555ea31ec36d5ab349933fb43429d0b6fb\"" Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:19.880 [WARNING][5132] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0", GenerateName:"calico-apiserver-6d4c5f94cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4d620c4-ff0d-4798-9fbc-b59167726f3d", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c5f94cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab", Pod:"calico-apiserver-6d4c5f94cc-brq2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieefb6de53c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:19.881 [INFO][5132] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:19.881 [INFO][5132] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" iface="eth0" netns="" Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:19.881 [INFO][5132] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:19.881 [INFO][5132] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:19.991 [INFO][5146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" HandleID="k8s-pod-network.91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:19.991 [INFO][5146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:19.991 [INFO][5146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:20.024 [WARNING][5146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" HandleID="k8s-pod-network.91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:20.024 [INFO][5146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" HandleID="k8s-pod-network.91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--brq2m-eth0" Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:20.031 [INFO][5146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:20.055867 containerd[1469]: 2025-07-06 23:56:20.051 [INFO][5132] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b" Jul 6 23:56:20.058037 containerd[1469]: time="2025-07-06T23:56:20.056695770Z" level=info msg="TearDown network for sandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\" successfully" Jul 6 23:56:20.101314 systemd[1]: Started cri-containerd-51ef14d7bd1cf4c97c69d3df8e89b7555ea31ec36d5ab349933fb43429d0b6fb.scope - libcontainer container 51ef14d7bd1cf4c97c69d3df8e89b7555ea31ec36d5ab349933fb43429d0b6fb. Jul 6 23:56:20.112936 containerd[1469]: time="2025-07-06T23:56:20.110113992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:56:20.170949 containerd[1469]: time="2025-07-06T23:56:20.170665779Z" level=info msg="RemovePodSandbox \"91c76cf00c352c2a9103993748682ce06067a750ec4efbbab536600f84afe70b\" returns successfully" Jul 6 23:56:20.187094 containerd[1469]: time="2025-07-06T23:56:20.187002516Z" level=info msg="StopPodSandbox for \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\"" Jul 6 23:56:20.218498 containerd[1469]: time="2025-07-06T23:56:20.218455848Z" level=info msg="StartContainer for \"51ef14d7bd1cf4c97c69d3df8e89b7555ea31ec36d5ab349933fb43429d0b6fb\" returns successfully" Jul 6 23:56:20.295180 kubelet[2497]: I0706 23:56:20.292771 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-wb46p" podStartSLOduration=29.890181632 podStartE2EDuration="40.292712842s" podCreationTimestamp="2025-07-06 23:55:40 +0000 UTC" firstStartedPulling="2025-07-06 23:56:09.499209629 +0000 UTC m=+50.889169935" lastFinishedPulling="2025-07-06 23:56:19.901740841 +0000 UTC m=+61.291701145" observedRunningTime="2025-07-06 23:56:20.290466376 +0000 UTC m=+61.680426700" watchObservedRunningTime="2025-07-06 23:56:20.292712842 +0000 UTC m=+61.682673166" Jul 6 23:56:20.484815 systemd[1]: run-containerd-runc-k8s.io-51ef14d7bd1cf4c97c69d3df8e89b7555ea31ec36d5ab349933fb43429d0b6fb-runc.F2sDiF.mount: Deactivated successfully. Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 23:56:20.291 [WARNING][5187] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a6b54d7d-c374-4342-81a5-36baa376812a", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391", Pod:"coredns-7c65d6cfc9-kxtht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec36ac81e75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 
23:56:20.291 [INFO][5187] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 23:56:20.291 [INFO][5187] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" iface="eth0" netns="" Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 23:56:20.291 [INFO][5187] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 23:56:20.291 [INFO][5187] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 23:56:20.454 [INFO][5201] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" HandleID="k8s-pod-network.609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 23:56:20.454 [INFO][5201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 23:56:20.455 [INFO][5201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 23:56:20.493 [WARNING][5201] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" HandleID="k8s-pod-network.609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 23:56:20.493 [INFO][5201] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" HandleID="k8s-pod-network.609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 23:56:20.502 [INFO][5201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:20.509835 containerd[1469]: 2025-07-06 23:56:20.507 [INFO][5187] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:20.512951 containerd[1469]: time="2025-07-06T23:56:20.509879608Z" level=info msg="TearDown network for sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\" successfully" Jul 6 23:56:20.512951 containerd[1469]: time="2025-07-06T23:56:20.509909033Z" level=info msg="StopPodSandbox for \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\" returns successfully" Jul 6 23:56:20.512951 containerd[1469]: time="2025-07-06T23:56:20.512310336Z" level=info msg="RemovePodSandbox for \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\"" Jul 6 23:56:20.512951 containerd[1469]: time="2025-07-06T23:56:20.512344961Z" level=info msg="Forcibly stopping sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\"" Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.572 [WARNING][5237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a6b54d7d-c374-4342-81a5-36baa376812a", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"36955b9e347fa3ec26fc219460b61110b5caba88e496ed200c1933614fa69391", Pod:"coredns-7c65d6cfc9-kxtht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec36ac81e75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.573 [INFO][5237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.573 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" iface="eth0" netns="" Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.573 [INFO][5237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.573 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.619 [INFO][5247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" HandleID="k8s-pod-network.609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.619 [INFO][5247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.620 [INFO][5247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.632 [WARNING][5247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" HandleID="k8s-pod-network.609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.632 [INFO][5247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" HandleID="k8s-pod-network.609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--kxtht-eth0" Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.634 [INFO][5247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:20.640822 containerd[1469]: 2025-07-06 23:56:20.637 [INFO][5237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e" Jul 6 23:56:20.643296 containerd[1469]: time="2025-07-06T23:56:20.641259963Z" level=info msg="TearDown network for sandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\" successfully" Jul 6 23:56:20.645465 containerd[1469]: time="2025-07-06T23:56:20.645364680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:20.645465 containerd[1469]: time="2025-07-06T23:56:20.645465674Z" level=info msg="RemovePodSandbox \"609abf26fc8220b7cd9619c51db1d3e1e20e91e9a66943446ddc947d09440f0e\" returns successfully" Jul 6 23:56:20.646506 containerd[1469]: time="2025-07-06T23:56:20.646146671Z" level=info msg="StopPodSandbox for \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\"" Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.696 [WARNING][5262] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0", GenerateName:"calico-apiserver-6d4c5f94cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"440a2155-cfea-4aaa-b248-ccfd5a0a677a", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c5f94cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4", Pod:"calico-apiserver-6d4c5f94cc-28d9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c3e8d88057", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.697 [INFO][5262] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.697 [INFO][5262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" iface="eth0" netns="" Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.697 [INFO][5262] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.697 [INFO][5262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.724 [INFO][5269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" HandleID="k8s-pod-network.70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.724 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.725 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.736 [WARNING][5269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" HandleID="k8s-pod-network.70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.736 [INFO][5269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" HandleID="k8s-pod-network.70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.738 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:20.744005 containerd[1469]: 2025-07-06 23:56:20.741 [INFO][5262] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:20.744005 containerd[1469]: time="2025-07-06T23:56:20.743706564Z" level=info msg="TearDown network for sandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\" successfully" Jul 6 23:56:20.744005 containerd[1469]: time="2025-07-06T23:56:20.743732039Z" level=info msg="StopPodSandbox for \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\" returns successfully" Jul 6 23:56:20.745898 containerd[1469]: time="2025-07-06T23:56:20.745460742Z" level=info msg="RemovePodSandbox for \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\"" Jul 6 23:56:20.745898 containerd[1469]: time="2025-07-06T23:56:20.745507502Z" level=info msg="Forcibly stopping sandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\"" Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.796 [WARNING][5284] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0", GenerateName:"calico-apiserver-6d4c5f94cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"440a2155-cfea-4aaa-b248-ccfd5a0a677a", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4c5f94cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4", Pod:"calico-apiserver-6d4c5f94cc-28d9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6c3e8d88057", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.796 [INFO][5284] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.796 [INFO][5284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" iface="eth0" netns="" Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.796 [INFO][5284] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.796 [INFO][5284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.830 [INFO][5291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" HandleID="k8s-pod-network.70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.831 [INFO][5291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.831 [INFO][5291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.839 [WARNING][5291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" HandleID="k8s-pod-network.70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.839 [INFO][5291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" HandleID="k8s-pod-network.70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--apiserver--6d4c5f94cc--28d9v-eth0" Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.841 [INFO][5291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:20.846242 containerd[1469]: 2025-07-06 23:56:20.843 [INFO][5284] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3" Jul 6 23:56:20.848195 containerd[1469]: time="2025-07-06T23:56:20.846694370Z" level=info msg="TearDown network for sandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\" successfully" Jul 6 23:56:20.849866 containerd[1469]: time="2025-07-06T23:56:20.849830886Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:20.850059 containerd[1469]: time="2025-07-06T23:56:20.850037277Z" level=info msg="RemovePodSandbox \"70127bdc6236478fc6e69c3cc9ecff6262261e841552d91d94d217b1ba7966d3\" returns successfully" Jul 6 23:56:20.850793 containerd[1469]: time="2025-07-06T23:56:20.850767896Z" level=info msg="StopPodSandbox for \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\"" Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.904 [WARNING][5306] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7abd3305-5de3-4e82-84ee-e697b6b22043", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5", Pod:"csi-node-driver-c9x6m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92d42368c2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.904 [INFO][5306] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.904 [INFO][5306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" iface="eth0" netns="" Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.904 [INFO][5306] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.904 [INFO][5306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.935 [INFO][5313] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" HandleID="k8s-pod-network.5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.936 [INFO][5313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.936 [INFO][5313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.943 [WARNING][5313] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" HandleID="k8s-pod-network.5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.943 [INFO][5313] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" HandleID="k8s-pod-network.5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.945 [INFO][5313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:20.951407 containerd[1469]: 2025-07-06 23:56:20.948 [INFO][5306] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:20.952617 containerd[1469]: time="2025-07-06T23:56:20.951469665Z" level=info msg="TearDown network for sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\" successfully" Jul 6 23:56:20.952617 containerd[1469]: time="2025-07-06T23:56:20.951503530Z" level=info msg="StopPodSandbox for \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\" returns successfully" Jul 6 23:56:20.952617 containerd[1469]: time="2025-07-06T23:56:20.952158236Z" level=info msg="RemovePodSandbox for \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\"" Jul 6 23:56:20.952617 containerd[1469]: time="2025-07-06T23:56:20.952200420Z" level=info msg="Forcibly stopping sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\"" Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.017 [WARNING][5327] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7abd3305-5de3-4e82-84ee-e697b6b22043", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5", Pod:"csi-node-driver-c9x6m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92d42368c2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.018 [INFO][5327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.018 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" iface="eth0" netns="" Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.018 [INFO][5327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.018 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.062 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" HandleID="k8s-pod-network.5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.063 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.063 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.072 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" HandleID="k8s-pod-network.5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.072 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" HandleID="k8s-pod-network.5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Workload="ci--4081.3.4--9--29085cf50e-k8s-csi--node--driver--c9x6m-eth0" Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.077 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:21.091837 containerd[1469]: 2025-07-06 23:56:21.083 [INFO][5327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2" Jul 6 23:56:21.093044 containerd[1469]: time="2025-07-06T23:56:21.091893969Z" level=info msg="TearDown network for sandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\" successfully" Jul 6 23:56:21.095898 containerd[1469]: time="2025-07-06T23:56:21.095833069Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:21.096529 containerd[1469]: time="2025-07-06T23:56:21.096141856Z" level=info msg="RemovePodSandbox \"5befe2961faaee1016615675993875a8cd48ff5c693d54dffd0d1451ede6ade2\" returns successfully" Jul 6 23:56:21.096687 containerd[1469]: time="2025-07-06T23:56:21.096638047Z" level=info msg="StopPodSandbox for \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\"" Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.145 [WARNING][5348] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0", GenerateName:"calico-kube-controllers-7684c4899d-", Namespace:"calico-system", SelfLink:"", UID:"15633098-99cc-4da2-aa2e-7ce63afd2881", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7684c4899d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51", Pod:"calico-kube-controllers-7684c4899d-9vhnf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d2940d1b7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.145 [INFO][5348] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.145 [INFO][5348] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" iface="eth0" netns="" Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.145 [INFO][5348] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.145 [INFO][5348] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.184 [INFO][5355] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" HandleID="k8s-pod-network.d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.185 [INFO][5355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.185 [INFO][5355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.193 [WARNING][5355] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" HandleID="k8s-pod-network.d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.193 [INFO][5355] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" HandleID="k8s-pod-network.d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.195 [INFO][5355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:21.201200 containerd[1469]: 2025-07-06 23:56:21.198 [INFO][5348] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:21.202816 containerd[1469]: time="2025-07-06T23:56:21.201266726Z" level=info msg="TearDown network for sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\" successfully" Jul 6 23:56:21.202816 containerd[1469]: time="2025-07-06T23:56:21.201308995Z" level=info msg="StopPodSandbox for \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\" returns successfully" Jul 6 23:56:21.202816 containerd[1469]: time="2025-07-06T23:56:21.201883263Z" level=info msg="RemovePodSandbox for \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\"" Jul 6 23:56:21.202816 containerd[1469]: time="2025-07-06T23:56:21.201934060Z" level=info msg="Forcibly stopping sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\"" Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.274 [WARNING][5369] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0", GenerateName:"calico-kube-controllers-7684c4899d-", Namespace:"calico-system", SelfLink:"", UID:"15633098-99cc-4da2-aa2e-7ce63afd2881", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7684c4899d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"293c7732872e3ca97ef493b5453da8ced80b0a2009f3dfb9db20cd8cab5fdd51", Pod:"calico-kube-controllers-7684c4899d-9vhnf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d2940d1b7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.277 [INFO][5369] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.277 [INFO][5369] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" iface="eth0" netns="" Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.277 [INFO][5369] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.277 [INFO][5369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.331 [INFO][5376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" HandleID="k8s-pod-network.d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.331 [INFO][5376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.331 [INFO][5376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.342 [WARNING][5376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" HandleID="k8s-pod-network.d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.342 [INFO][5376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" HandleID="k8s-pod-network.d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Workload="ci--4081.3.4--9--29085cf50e-k8s-calico--kube--controllers--7684c4899d--9vhnf-eth0" Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.345 [INFO][5376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:21.354430 containerd[1469]: 2025-07-06 23:56:21.349 [INFO][5369] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f" Jul 6 23:56:21.354430 containerd[1469]: time="2025-07-06T23:56:21.353555047Z" level=info msg="TearDown network for sandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\" successfully" Jul 6 23:56:21.358661 containerd[1469]: time="2025-07-06T23:56:21.358609788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:21.359274 containerd[1469]: time="2025-07-06T23:56:21.358949109Z" level=info msg="RemovePodSandbox \"d078cf10613d2c0bfccfc72d428af71a968995d3e318eee5edcf133395cfaa0f\" returns successfully" Jul 6 23:56:21.360102 containerd[1469]: time="2025-07-06T23:56:21.360005654Z" level=info msg="StopPodSandbox for \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\"" Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.419 [WARNING][5391] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18aa2971-3783-48e6-bae4-2b9283bfdea3", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d", Pod:"coredns-7c65d6cfc9-hlr8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eca972de9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.419 [INFO][5391] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.419 [INFO][5391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" iface="eth0" netns="" Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.419 [INFO][5391] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.419 [INFO][5391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.457 [INFO][5398] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" HandleID="k8s-pod-network.8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.457 [INFO][5398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.457 [INFO][5398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.471 [WARNING][5398] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" HandleID="k8s-pod-network.8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.471 [INFO][5398] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" HandleID="k8s-pod-network.8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.475 [INFO][5398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:21.485553 containerd[1469]: 2025-07-06 23:56:21.479 [INFO][5391] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:21.488408 containerd[1469]: time="2025-07-06T23:56:21.485517669Z" level=info msg="TearDown network for sandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\" successfully" Jul 6 23:56:21.488408 containerd[1469]: time="2025-07-06T23:56:21.486031956Z" level=info msg="StopPodSandbox for \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\" returns successfully" Jul 6 23:56:21.488408 containerd[1469]: time="2025-07-06T23:56:21.486703740Z" level=info msg="RemovePodSandbox for \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\"" Jul 6 23:56:21.488408 containerd[1469]: time="2025-07-06T23:56:21.486741214Z" level=info msg="Forcibly stopping sandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\"" Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.628 [WARNING][5413] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18aa2971-3783-48e6-bae4-2b9283bfdea3", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"5f60f1da6939c77e14a621b5c07211fdba65c31d4856c697384bd95976f43f6d", Pod:"coredns-7c65d6cfc9-hlr8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eca972de9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.629 [INFO][5413] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.629 [INFO][5413] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" iface="eth0" netns="" Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.629 [INFO][5413] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.629 [INFO][5413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.700 [INFO][5441] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" HandleID="k8s-pod-network.8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.700 [INFO][5441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.700 [INFO][5441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.713 [WARNING][5441] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" HandleID="k8s-pod-network.8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.713 [INFO][5441] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" HandleID="k8s-pod-network.8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Workload="ci--4081.3.4--9--29085cf50e-k8s-coredns--7c65d6cfc9--hlr8x-eth0" Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.719 [INFO][5441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:21.726437 containerd[1469]: 2025-07-06 23:56:21.722 [INFO][5413] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5" Jul 6 23:56:21.726437 containerd[1469]: time="2025-07-06T23:56:21.726396382Z" level=info msg="TearDown network for sandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\" successfully" Jul 6 23:56:21.753334 containerd[1469]: time="2025-07-06T23:56:21.753272798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:21.754794 containerd[1469]: time="2025-07-06T23:56:21.753414151Z" level=info msg="RemovePodSandbox \"8e45b8f93d915c4a22d646e64b9521a207ce919f8214a86b82e546b9d413abb5\" returns successfully" Jul 6 23:56:21.755003 containerd[1469]: time="2025-07-06T23:56:21.754903438Z" level=info msg="StopPodSandbox for \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\"" Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.850 [WARNING][5458] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4e9dee5d-e24d-4799-b79a-36586ddb42a9", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4", Pod:"goldmane-58fd7646b9-wb46p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9975c01ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.850 [INFO][5458] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.850 [INFO][5458] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" iface="eth0" netns="" Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.850 [INFO][5458] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.850 [INFO][5458] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.890 [INFO][5466] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" HandleID="k8s-pod-network.4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.892 [INFO][5466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.892 [INFO][5466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.905 [WARNING][5466] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" HandleID="k8s-pod-network.4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.905 [INFO][5466] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" HandleID="k8s-pod-network.4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.907 [INFO][5466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:21.919809 containerd[1469]: 2025-07-06 23:56:21.910 [INFO][5458] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:21.919809 containerd[1469]: time="2025-07-06T23:56:21.919490224Z" level=info msg="TearDown network for sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\" successfully" Jul 6 23:56:21.919809 containerd[1469]: time="2025-07-06T23:56:21.919534805Z" level=info msg="StopPodSandbox for \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\" returns successfully" Jul 6 23:56:21.921312 containerd[1469]: time="2025-07-06T23:56:21.920751071Z" level=info msg="RemovePodSandbox for \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\"" Jul 6 23:56:21.921312 containerd[1469]: time="2025-07-06T23:56:21.920784972Z" level=info msg="Forcibly stopping sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\"" Jul 6 23:56:21.974695 systemd[1]: Started sshd@9-146.190.157.121:22-80.94.95.116:37140.service - OpenSSH per-connection server daemon (80.94.95.116:37140). Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.009 [WARNING][5480] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4e9dee5d-e24d-4799-b79a-36586ddb42a9", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-9-29085cf50e", ContainerID:"c3b41b2b76f563267499595b0dd6255d4588d22065212f0ccb88330a8f0fb0d4", Pod:"goldmane-58fd7646b9-wb46p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9975c01ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.010 [INFO][5480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.010 [INFO][5480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" iface="eth0" netns="" Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.010 [INFO][5480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.010 [INFO][5480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.081 [INFO][5489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" HandleID="k8s-pod-network.4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.082 [INFO][5489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.082 [INFO][5489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.094 [WARNING][5489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" HandleID="k8s-pod-network.4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.094 [INFO][5489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" HandleID="k8s-pod-network.4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Workload="ci--4081.3.4--9--29085cf50e-k8s-goldmane--58fd7646b9--wb46p-eth0" Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.096 [INFO][5489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:22.104024 containerd[1469]: 2025-07-06 23:56:22.100 [INFO][5480] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c" Jul 6 23:56:22.104024 containerd[1469]: time="2025-07-06T23:56:22.103802988Z" level=info msg="TearDown network for sandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\" successfully" Jul 6 23:56:22.108267 containerd[1469]: time="2025-07-06T23:56:22.107844518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:22.108267 containerd[1469]: time="2025-07-06T23:56:22.107964326Z" level=info msg="RemovePodSandbox \"4f3db29f4e9bf01ea870eb0de3b36aa16ad009d764f20315c1c5a99e33370c7c\" returns successfully" Jul 6 23:56:22.110098 containerd[1469]: time="2025-07-06T23:56:22.109757113Z" level=info msg="StopPodSandbox for \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\"" Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.158 [WARNING][5504] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-whisker--577555dc9b--7t5dc-eth0" Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.159 [INFO][5504] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.159 [INFO][5504] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" iface="eth0" netns="" Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.159 [INFO][5504] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.159 [INFO][5504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.195 [INFO][5511] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" HandleID="k8s-pod-network.f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--577555dc9b--7t5dc-eth0" Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.196 [INFO][5511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.196 [INFO][5511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.204 [WARNING][5511] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" HandleID="k8s-pod-network.f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--577555dc9b--7t5dc-eth0" Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.204 [INFO][5511] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" HandleID="k8s-pod-network.f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--577555dc9b--7t5dc-eth0" Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.207 [INFO][5511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:22.213979 containerd[1469]: 2025-07-06 23:56:22.210 [INFO][5504] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:22.215408 containerd[1469]: time="2025-07-06T23:56:22.214322555Z" level=info msg="TearDown network for sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\" successfully" Jul 6 23:56:22.215408 containerd[1469]: time="2025-07-06T23:56:22.214364725Z" level=info msg="StopPodSandbox for \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\" returns successfully" Jul 6 23:56:22.216662 containerd[1469]: time="2025-07-06T23:56:22.216149171Z" level=info msg="RemovePodSandbox for \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\"" Jul 6 23:56:22.216662 containerd[1469]: time="2025-07-06T23:56:22.216186529Z" level=info msg="Forcibly stopping sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\"" Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.274 [WARNING][5526] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" WorkloadEndpoint="ci--4081.3.4--9--29085cf50e-k8s-whisker--577555dc9b--7t5dc-eth0" Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.274 [INFO][5526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.274 [INFO][5526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" iface="eth0" netns="" Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.274 [INFO][5526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.274 [INFO][5526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.310 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" HandleID="k8s-pod-network.f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--577555dc9b--7t5dc-eth0" Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.311 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.311 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.319 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" HandleID="k8s-pod-network.f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--577555dc9b--7t5dc-eth0" Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.320 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" HandleID="k8s-pod-network.f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Workload="ci--4081.3.4--9--29085cf50e-k8s-whisker--577555dc9b--7t5dc-eth0" Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.323 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:22.331851 containerd[1469]: 2025-07-06 23:56:22.326 [INFO][5526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3" Jul 6 23:56:22.331851 containerd[1469]: time="2025-07-06T23:56:22.330360435Z" level=info msg="TearDown network for sandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\" successfully" Jul 6 23:56:22.333461 containerd[1469]: time="2025-07-06T23:56:22.333420041Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:22.333673 containerd[1469]: time="2025-07-06T23:56:22.333650610Z" level=info msg="RemovePodSandbox \"f678cb9b01f8b7615b4ff150b92e6fb41749b98800b14bdd5262e13ab4f277e3\" returns successfully" Jul 6 23:56:23.054265 systemd[1]: Started sshd@10-146.190.157.121:22-139.178.89.65:52426.service - OpenSSH per-connection server daemon (139.178.89.65:52426). Jul 6 23:56:23.153701 sshd[5584]: Accepted publickey for core from 139.178.89.65 port 52426 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:56:23.157312 sshd[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:23.163652 systemd-logind[1443]: New session 10 of user core. Jul 6 23:56:23.173387 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:56:23.916352 sshd[5584]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:23.932570 systemd[1]: sshd@10-146.190.157.121:22-139.178.89.65:52426.service: Deactivated successfully. Jul 6 23:56:23.939045 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:56:23.948367 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:56:23.954730 systemd[1]: Started sshd@11-146.190.157.121:22-139.178.89.65:52434.service - OpenSSH per-connection server daemon (139.178.89.65:52434). Jul 6 23:56:23.957925 systemd-logind[1443]: Removed session 10. Jul 6 23:56:24.031215 sshd[5620]: Accepted publickey for core from 139.178.89.65 port 52434 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:56:24.032967 sshd[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:24.046584 systemd-logind[1443]: New session 11 of user core. Jul 6 23:56:24.052494 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 6 23:56:24.190854 sshd[5485]: Invalid user admin from 80.94.95.116 port 37140 Jul 6 23:56:24.512003 sshd[5620]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:24.530578 systemd[1]: Started sshd@12-146.190.157.121:22-139.178.89.65:52440.service - OpenSSH per-connection server daemon (139.178.89.65:52440). Jul 6 23:56:24.532383 systemd[1]: sshd@11-146.190.157.121:22-139.178.89.65:52434.service: Deactivated successfully. Jul 6 23:56:24.547371 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:56:24.554238 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:56:24.564995 systemd-logind[1443]: Removed session 11. Jul 6 23:56:24.670926 sshd[5629]: Accepted publickey for core from 139.178.89.65 port 52440 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:56:24.675823 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:24.685904 systemd-logind[1443]: New session 12 of user core. Jul 6 23:56:24.693147 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:56:25.002031 sshd[5629]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:25.012817 sshd[5485]: Connection closed by invalid user admin 80.94.95.116 port 37140 [preauth] Jul 6 23:56:25.010560 systemd[1]: sshd@9-146.190.157.121:22-80.94.95.116:37140.service: Deactivated successfully. Jul 6 23:56:25.014656 systemd[1]: sshd@12-146.190.157.121:22-139.178.89.65:52440.service: Deactivated successfully. Jul 6 23:56:25.020145 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:56:25.024407 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:56:25.027253 systemd-logind[1443]: Removed session 12. Jul 6 23:56:25.352504 containerd[1469]: time="2025-07-06T23:56:25.352372858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 6 23:56:25.353807 containerd[1469]: time="2025-07-06T23:56:25.353130905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:25.354747 containerd[1469]: time="2025-07-06T23:56:25.354626214Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:25.358140 containerd[1469]: time="2025-07-06T23:56:25.357774960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:25.359259 containerd[1469]: time="2025-07-06T23:56:25.359153988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.454327614s" Jul 6 23:56:25.359456 containerd[1469]: time="2025-07-06T23:56:25.359400245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:56:25.466654 containerd[1469]: time="2025-07-06T23:56:25.466354243Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:56:25.559615 containerd[1469]: time="2025-07-06T23:56:25.559224106Z" level=info msg="CreateContainer within sandbox \"dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:56:25.574468 containerd[1469]: time="2025-07-06T23:56:25.574405413Z" level=info msg="CreateContainer within sandbox \"dfb9bb7939ee22fc4e4d70ee4e1167bfe584de625618bcfd160312340bcdb5ab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"feb95084a690a5c3a00c2ec3a408c596bc269425f24b825c32c7a15aa7e3c299\"" Jul 6 23:56:25.582215 containerd[1469]: time="2025-07-06T23:56:25.581782576Z" level=info msg="StartContainer for \"feb95084a690a5c3a00c2ec3a408c596bc269425f24b825c32c7a15aa7e3c299\"" Jul 6 23:56:25.684320 systemd[1]: Started cri-containerd-feb95084a690a5c3a00c2ec3a408c596bc269425f24b825c32c7a15aa7e3c299.scope - libcontainer container feb95084a690a5c3a00c2ec3a408c596bc269425f24b825c32c7a15aa7e3c299. Jul 6 23:56:25.760256 containerd[1469]: time="2025-07-06T23:56:25.759305358Z" level=info msg="StartContainer for \"feb95084a690a5c3a00c2ec3a408c596bc269425f24b825c32c7a15aa7e3c299\" returns successfully" Jul 6 23:56:25.871213 containerd[1469]: time="2025-07-06T23:56:25.870472600Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:25.872485 containerd[1469]: time="2025-07-06T23:56:25.872442586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:56:25.875017 containerd[1469]: time="2025-07-06T23:56:25.874963521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 408.553963ms" Jul 6 23:56:25.875295 containerd[1469]: time="2025-07-06T23:56:25.875183460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:56:25.878312 containerd[1469]: time="2025-07-06T23:56:25.878008147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:56:25.881657 containerd[1469]: time="2025-07-06T23:56:25.881530845Z" level=info msg="CreateContainer within sandbox \"02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:56:25.905946 containerd[1469]: time="2025-07-06T23:56:25.905782121Z" level=info msg="CreateContainer within sandbox \"02e00b6a1d43ef9c1ecfee4fcd21c67bb2be1d38507256d9cf54db854abe74f4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b0011fc5fd4edd350782a922f6b7e30721a46c705d0b8b286ceb26c59a69b0b7\"" Jul 6 23:56:25.907878 containerd[1469]: time="2025-07-06T23:56:25.907577328Z" level=info msg="StartContainer for \"b0011fc5fd4edd350782a922f6b7e30721a46c705d0b8b286ceb26c59a69b0b7\"" Jul 6 23:56:25.964411 systemd[1]: Started cri-containerd-b0011fc5fd4edd350782a922f6b7e30721a46c705d0b8b286ceb26c59a69b0b7.scope - libcontainer container b0011fc5fd4edd350782a922f6b7e30721a46c705d0b8b286ceb26c59a69b0b7. 
Jul 6 23:56:26.032768 containerd[1469]: time="2025-07-06T23:56:26.032253978Z" level=info msg="StartContainer for \"b0011fc5fd4edd350782a922f6b7e30721a46c705d0b8b286ceb26c59a69b0b7\" returns successfully" Jul 6 23:56:26.856761 kubelet[2497]: I0706 23:56:26.836504 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d4c5f94cc-28d9v" podStartSLOduration=34.731660687 podStartE2EDuration="50.794811897s" podCreationTimestamp="2025-07-06 23:55:36 +0000 UTC" firstStartedPulling="2025-07-06 23:56:09.814150442 +0000 UTC m=+51.204110745" lastFinishedPulling="2025-07-06 23:56:25.877301638 +0000 UTC m=+67.267261955" observedRunningTime="2025-07-06 23:56:26.757445745 +0000 UTC m=+68.147406070" watchObservedRunningTime="2025-07-06 23:56:26.794811897 +0000 UTC m=+68.184772224" Jul 6 23:56:26.878095 kubelet[2497]: I0706 23:56:26.877867 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d4c5f94cc-brq2m" podStartSLOduration=34.951612025 podStartE2EDuration="50.877840372s" podCreationTimestamp="2025-07-06 23:55:36 +0000 UTC" firstStartedPulling="2025-07-06 23:56:09.539658691 +0000 UTC m=+50.929619007" lastFinishedPulling="2025-07-06 23:56:25.46588703 +0000 UTC m=+66.855847354" observedRunningTime="2025-07-06 23:56:26.784448639 +0000 UTC m=+68.174408964" watchObservedRunningTime="2025-07-06 23:56:26.877840372 +0000 UTC m=+68.267800691" Jul 6 23:56:27.646916 kubelet[2497]: I0706 23:56:27.646529 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:56:28.735731 containerd[1469]: time="2025-07-06T23:56:28.734365014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:28.735731 containerd[1469]: time="2025-07-06T23:56:28.735647576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 6 23:56:28.736749 containerd[1469]: time="2025-07-06T23:56:28.736399121Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:28.739117 containerd[1469]: time="2025-07-06T23:56:28.738743943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:28.740047 containerd[1469]: time="2025-07-06T23:56:28.739822886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.86175871s" Jul 6 23:56:28.740047 containerd[1469]: time="2025-07-06T23:56:28.739870476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 6 23:56:28.756320 containerd[1469]: time="2025-07-06T23:56:28.756268496Z" level=info msg="CreateContainer within sandbox \"505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:56:28.866102 containerd[1469]: time="2025-07-06T23:56:28.865940031Z" level=info msg="CreateContainer within sandbox \"505cafd6037f86ce8986c31d4a316eb68d705383141e6c989dcf069251d7bcf5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"93eb5a99775d5a8ff33e15acce28dbc6b5b88e35102d201e79dac4eb60183012\"" Jul 6 23:56:28.868572 containerd[1469]: time="2025-07-06T23:56:28.868515601Z" level=info msg="StartContainer for \"93eb5a99775d5a8ff33e15acce28dbc6b5b88e35102d201e79dac4eb60183012\"" Jul 6 23:56:29.064525 systemd[1]: Started cri-containerd-93eb5a99775d5a8ff33e15acce28dbc6b5b88e35102d201e79dac4eb60183012.scope - libcontainer container 93eb5a99775d5a8ff33e15acce28dbc6b5b88e35102d201e79dac4eb60183012. Jul 6 23:56:29.141132 containerd[1469]: time="2025-07-06T23:56:29.139945027Z" level=info msg="StartContainer for \"93eb5a99775d5a8ff33e15acce28dbc6b5b88e35102d201e79dac4eb60183012\" returns successfully" Jul 6 23:56:30.045488 systemd[1]: Started sshd@13-146.190.157.121:22-139.178.89.65:58062.service - OpenSSH per-connection server daemon (139.178.89.65:58062). Jul 6 23:56:30.096652 kubelet[2497]: I0706 23:56:30.087115 2497 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:56:30.096652 kubelet[2497]: I0706 23:56:30.095060 2497 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:56:30.382617 sshd[5797]: Accepted publickey for core from 139.178.89.65 port 58062 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:56:30.387163 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:30.395491 systemd-logind[1443]: New session 13 of user core. Jul 6 23:56:30.402394 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:56:31.135697 sshd[5797]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:31.140884 systemd[1]: sshd@13-146.190.157.121:22-139.178.89.65:58062.service: Deactivated successfully. Jul 6 23:56:31.143902 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:56:31.147482 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:56:31.149446 systemd-logind[1443]: Removed session 13. 
Jul 6 23:56:33.787192 kubelet[2497]: E0706 23:56:33.784986 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:56:35.323526 kubelet[2497]: I0706 23:56:35.323480 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:56:35.382953 kubelet[2497]: I0706 23:56:35.381516 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c9x6m" podStartSLOduration=31.932069191 podStartE2EDuration="54.378848369s" podCreationTimestamp="2025-07-06 23:55:41 +0000 UTC" firstStartedPulling="2025-07-06 23:56:06.294920949 +0000 UTC m=+47.684881252" lastFinishedPulling="2025-07-06 23:56:28.741700114 +0000 UTC m=+70.131660430" observedRunningTime="2025-07-06 23:56:29.754048263 +0000 UTC m=+71.144008587" watchObservedRunningTime="2025-07-06 23:56:35.378848369 +0000 UTC m=+76.768808697" Jul 6 23:56:36.159568 systemd[1]: Started sshd@14-146.190.157.121:22-139.178.89.65:58072.service - OpenSSH per-connection server daemon (139.178.89.65:58072). Jul 6 23:56:36.240579 sshd[5812]: Accepted publickey for core from 139.178.89.65 port 58072 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:56:36.243043 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:36.250145 systemd-logind[1443]: New session 14 of user core. Jul 6 23:56:36.259460 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:56:36.651483 sshd[5812]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:36.658208 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:56:36.658777 systemd[1]: sshd@14-146.190.157.121:22-139.178.89.65:58072.service: Deactivated successfully. Jul 6 23:56:36.662593 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:56:36.663919 systemd-logind[1443]: Removed session 14. Jul 6 23:56:41.675300 systemd[1]: Started sshd@15-146.190.157.121:22-139.178.89.65:46528.service - OpenSSH per-connection server daemon (139.178.89.65:46528). Jul 6 23:56:41.781407 sshd[5827]: Accepted publickey for core from 139.178.89.65 port 46528 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:56:41.784039 sshd[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:41.796004 systemd-logind[1443]: New session 15 of user core. Jul 6 23:56:41.798348 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:56:42.044286 sshd[5827]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:42.054700 systemd[1]: sshd@15-146.190.157.121:22-139.178.89.65:46528.service: Deactivated successfully. Jul 6 23:56:42.058347 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:56:42.061256 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:56:42.064675 systemd-logind[1443]: Removed session 15. 
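The recurring kubelet warning here ("Nameserver limits exceeded ... the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2") comes from kubelet capping a pod's effective resolv.conf at three nameservers and logging what it kept; note that the applied list even contains a duplicate, which still consumes a slot. A minimal sketch of that cap, with the fourth entry invented to trigger the truncation:

```go
// Sketch: keep at most three nameservers, mirroring kubelet's limit, and
// report what was omitted. The 8.8.8.8 entry is hypothetical.
package main

import "fmt"

const maxNameservers = 3

func capNameservers(ns []string) (applied, omitted []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	host := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "8.8.8.8"}
	applied, omitted := capNameservers(host)
	fmt.Println("applied:", applied, "omitted:", omitted)
}
```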
Jul 6 23:56:42.807125 kubelet[2497]: E0706 23:56:42.806631 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 6 23:56:43.774759 kubelet[2497]: E0706 23:56:43.774644 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 6 23:56:45.781731 kubelet[2497]: E0706 23:56:45.781694 2497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 6 23:56:47.062832 systemd[1]: Started sshd@16-146.190.157.121:22-139.178.89.65:46532.service - OpenSSH per-connection server daemon (139.178.89.65:46532).
Jul 6 23:56:47.211222 sshd[5846]: Accepted publickey for core from 139.178.89.65 port 46532 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:56:47.213566 sshd[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:47.222748 systemd-logind[1443]: New session 16 of user core.
Jul 6 23:56:47.228290 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:56:47.711288 sshd[5846]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:47.722648 systemd[1]: sshd@16-146.190.157.121:22-139.178.89.65:46532.service: Deactivated successfully.
Jul 6 23:56:47.726930 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:56:47.730855 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:56:47.736497 systemd[1]: Started sshd@17-146.190.157.121:22-139.178.89.65:46540.service - OpenSSH per-connection server daemon (139.178.89.65:46540).
Jul 6 23:56:47.738284 systemd-logind[1443]: Removed session 16.
Jul 6 23:56:47.794588 sshd[5859]: Accepted publickey for core from 139.178.89.65 port 46540 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:56:47.797397 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:47.804902 systemd-logind[1443]: New session 17 of user core.
Jul 6 23:56:47.813381 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:56:48.210421 sshd[5859]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:48.224281 systemd[1]: sshd@17-146.190.157.121:22-139.178.89.65:46540.service: Deactivated successfully.
Jul 6 23:56:48.229401 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:56:48.231456 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:56:48.242291 systemd[1]: Started sshd@18-146.190.157.121:22-139.178.89.65:46548.service - OpenSSH per-connection server daemon (139.178.89.65:46548).
Jul 6 23:56:48.250592 systemd-logind[1443]: Removed session 17.
Jul 6 23:56:48.318141 sshd[5870]: Accepted publickey for core from 139.178.89.65 port 46548 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:56:48.319892 sshd[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:48.330903 systemd-logind[1443]: New session 18 of user core.
Jul 6 23:56:48.334407 systemd[1]: Started session-18.scope - Session 18 of User core.
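[Editor's note] The recurring dns.go:153 errors reflect the classic glibc resolver limit of three nameserver entries: kubelet drops entries past the limit and logs the nameserver line it actually applied (note the duplicated 67.207.67.2, which suggests the source resolv.conf repeats a server; deduplicating it would make room for one of the omitted entries). A hypothetical pre-flight check, assuming the conventional /etc/resolv.conf location, that deduplicates and warns when more than three distinct servers remain:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet warns past this

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Collect "nameserver <addr>" lines, skipping duplicates.
	seen := map[string]bool{}
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" && !seen[fields[1]] {
			seen[fields[1]] = true
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("warning: %d distinct nameservers, only the first %d will be used: %v\n",
			len(servers), maxNameservers, servers[:maxNameservers])
	} else {
		fmt.Println("nameservers within limit:", servers)
	}
}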
Jul 6 23:56:51.466810 sshd[5870]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:51.519853 systemd[1]: sshd@18-146.190.157.121:22-139.178.89.65:46548.service: Deactivated successfully.
Jul 6 23:56:51.524098 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:56:51.526933 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:56:51.535141 systemd[1]: Started sshd@19-146.190.157.121:22-139.178.89.65:47018.service - OpenSSH per-connection server daemon (139.178.89.65:47018).
Jul 6 23:56:51.541593 systemd-logind[1443]: Removed session 18.
Jul 6 23:56:51.699639 sshd[5911]: Accepted publickey for core from 139.178.89.65 port 47018 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:56:51.702584 sshd[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:51.709902 systemd-logind[1443]: New session 19 of user core.
Jul 6 23:56:51.718527 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:56:52.494722 sshd[5911]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:52.512223 systemd[1]: sshd@19-146.190.157.121:22-139.178.89.65:47018.service: Deactivated successfully.
Jul 6 23:56:52.516418 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:56:52.526195 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:56:52.534604 systemd[1]: Started sshd@20-146.190.157.121:22-139.178.89.65:47034.service - OpenSSH per-connection server daemon (139.178.89.65:47034).
Jul 6 23:56:52.541431 systemd-logind[1443]: Removed session 19.
Jul 6 23:56:52.611759 sshd[5924]: Accepted publickey for core from 139.178.89.65 port 47034 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:56:52.614630 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:52.621635 systemd-logind[1443]: New session 20 of user core.
Jul 6 23:56:52.626378 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:56:52.870111 sshd[5924]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:52.881038 systemd[1]: sshd@20-146.190.157.121:22-139.178.89.65:47034.service: Deactivated successfully.
Jul 6 23:56:52.888470 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:56:52.890846 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:56:52.893790 systemd-logind[1443]: Removed session 20.
Jul 6 23:56:57.887539 systemd[1]: Started sshd@21-146.190.157.121:22-139.178.89.65:47038.service - OpenSSH per-connection server daemon (139.178.89.65:47038).
Jul 6 23:56:58.043219 sshd[6005]: Accepted publickey for core from 139.178.89.65 port 47038 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:56:58.045565 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:56:58.054025 systemd-logind[1443]: New session 21 of user core.
Jul 6 23:56:58.059357 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:56:58.515460 sshd[6005]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:58.524494 systemd[1]: sshd@21-146.190.157.121:22-139.178.89.65:47038.service: Deactivated successfully.
Jul 6 23:56:58.528352 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:56:58.531602 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:56:58.534583 systemd-logind[1443]: Removed session 21.
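[Editor's note] Each SSH connection in this stretch runs under a transient per-connection unit whose name encodes both endpoints, e.g. sshd@19-146.190.157.121:22-139.178.89.65:47018.service, apparently as <instance>-<local addr>:<port>-<peer addr>:<port>. A small Go parser for that naming scheme; the pattern is inferred from these log lines rather than from systemd documentation, so treat it as a sketch:

package main

import (
	"fmt"
	"regexp"
)

// Matches unit names like sshd@19-146.190.157.121:22-139.178.89.65:47018.service
var unitRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

func main() {
	unit := "sshd@19-146.190.157.121:22-139.178.89.65:47018.service" // from the log above
	m := unitRe.FindStringSubmatch(unit)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("instance=%s local=%s:%s peer=%s:%s\n", m[1], m[2], m[3], m[4], m[5])
}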
Jul 6 23:57:03.631994 systemd[1]: Started sshd@22-146.190.157.121:22-139.178.89.65:53486.service - OpenSSH per-connection server daemon (139.178.89.65:53486).
Jul 6 23:57:03.849452 sshd[6045]: Accepted publickey for core from 139.178.89.65 port 53486 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:57:03.854228 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:03.863884 systemd-logind[1443]: New session 22 of user core.
Jul 6 23:57:03.868295 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:57:04.484951 sshd[6045]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:04.492192 systemd[1]: sshd@22-146.190.157.121:22-139.178.89.65:53486.service: Deactivated successfully.
Jul 6 23:57:04.492727 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:57:04.499427 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:57:04.506343 systemd-logind[1443]: Removed session 22.
Jul 6 23:57:09.510770 systemd[1]: Started sshd@23-146.190.157.121:22-139.178.89.65:53500.service - OpenSSH per-connection server daemon (139.178.89.65:53500).
Jul 6 23:57:09.608754 sshd[6058]: Accepted publickey for core from 139.178.89.65 port 53500 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E
Jul 6 23:57:09.612417 sshd[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:09.618727 systemd-logind[1443]: New session 23 of user core.
Jul 6 23:57:09.625326 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:57:10.264573 sshd[6058]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:10.271274 systemd[1]: sshd@23-146.190.157.121:22-139.178.89.65:53500.service: Deactivated successfully.
Jul 6 23:57:10.275764 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:57:10.277750 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:57:10.283048 systemd-logind[1443]: Removed session 23.
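[Editor's note] Sessions 13 through 23 above all follow the same logind open/close choreography, which makes the log easy to mine. A throwaway Go filter that pairs "New session N" with "Removed session N" and prints each session's lifetime; it assumes this log saved as plain text on stdin, one entry per line, and hardcodes the year 2025 because these syslog timestamps omit it:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	newRe     = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user`)
	removedRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

func parseTS(s string) time.Time {
	// Journal lines here carry no year; 2025 is assumed from the boot banner.
	t, err := time.Parse("Jan 2 15:04:05 2006", s+" 2025")
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			opened[m[2]] = parseTS(m[1])
		} else if m := removedRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[2]]; ok {
				fmt.Printf("session %s lasted %s\n", m[2], parseTS(m[1]).Sub(start))
				delete(opened, m[2])
			}
		}
	}
}

Run as, say, go run sessions.go < journal.txt (both filenames are hypothetical). For session 13 above it would report a lifetime of roughly 0.754s.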